Sample records for program evaluation

  1. Non-formal educator use of evaluation results.

    PubMed

    Baughman, Sarah; Boyd, Heather H; Franz, Nancy K

    2012-08-01

    Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational programs through land grant universities. Many Extension services require that non-formal educational program evaluations be conducted by field-based Extension educators. Evaluation research has focused primarily on the efforts of professional, external evaluators; the work of program staff with many responsibilities, including program evaluation, has received little attention. This study examined how field-based Extension educators (i.e., program staff) in four Extension services use the results of evaluations of programs that they have conducted themselves. Four types of evaluation use are measured and explored: instrumental use, conceptual use, persuasive use, and process use. Results indicate that there are few programmatic changes as a result of evaluation findings among the non-formal educators surveyed in this study. Extension educators tend to use evaluation results to persuade others about the value of their programs and to learn from the evaluation process. Evaluation use is driven by accountability measures, with very little program improvement use as measured in this study. Practical implications include delineating accountability and program improvement tasks within complex organizations in order to align evaluation efforts and improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. The Evaluation Handbook: Guidelines for Evaluating Dropout Prevention Programs.

    ERIC Educational Resources Information Center

    Smink, Jay; Stank, Peg

    This manual, developed in an effort to take the mysticism out of program evaluation, discusses six phases of the program evaluation process. The introduction discusses reasons for evaluation, process and outcome evaluation, the purpose of the handbook, the evaluation process, and the Sequoia United School District Dropout Prevention Program. Phase…

  3. Effective Practices for Evaluating Education and Public Outreach Programs

    NASA Astrophysics Data System (ADS)

    Wilkerson, S.

    2013-12-01

    Stephanie Baird Wilkerson, PhD, and Carol Haden, EdD, Magnolia Consulting, LLC. Education and public outreach (EPO) program developers and providers seeking insights regarding effective practices for evaluating EPO activities and programs benefit from understanding why evaluation is critical to the success of EPO activities and programs, what data collection methods are appropriate, and how to effectively communicate and report findings. Based on our extensive experience evaluating EPO programs, we will share lessons learned and examples of how these practices play out in actual evaluation studies. EPO program developers, providers, and evaluators must consider several factors that influence which evaluation designs and data collection methods will be most appropriate, given the nature of EPO programs. Effective evaluation practices of EPO programs take into account a program's phase of development, duration, and budget as well as a program's intended outcomes. EPO programs that are just beginning development will have different evaluation needs and priorities than will well-established programs. Effective evaluation practices consider the 'life' of a program with an evaluation design that supports a program's growth through various phases including development, revision and refinement, and completion. It would be premature and inappropriate to expect the attainment of longer-term outcomes during program development phases or early stages of implementation. During program development, EPO providers should clearly define program outcomes that are feasible and appropriate given a program's scope and expected reach. In many respects, this directly relates to the amount of time, or duration, intended audiences participate in EPO programs. As program duration increases, so does the likelihood that the program can achieve longer-term outcomes. When choosing which outcomes are reasonable to impact and measure, program duration should be considered.
Effective evaluation practices include selecting appropriate data collection methods given a program's duration and corresponding intended outcomes. Data collection methods for programs of short duration might involve simple evaluation activities, whereas programs of longer duration might involve ongoing data collection measures including longitudinal student surveys, implementation logs, student journals, and student achievement measures. During our presentation, we will share examples from our own experience to illustrate how effective evaluation practices can be applied to various EPO programs based on program duration. Irrespective of duration, we find that EPO program developers and providers want both formative feedback to guide improvements and summative feedback on outcomes. More often than not, evaluation budgets for EPO programs are meager at best, yet come with the same information needs and priorities as programs with larger evaluation budgets. So how do program providers get the information they need given their limited funds for evaluation? We will offer several recommendations for helping EPO program providers work with evaluators to become better-informed consumers of evaluation by maximizing evaluation offerings and minimizing costs. During our presentation we also will share examples of communicating and reporting results for EPO program developers, EPO facilitators and practitioners, and funders.

  4. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  5. A novel resident-as-teacher training program to improve and evaluate obstetrics and gynecology resident teaching skills.

    PubMed

    Ricciotti, Hope A; Dodge, Laura E; Head, Julia; Atkins, K Meredith; Hacker, Michele R

    2012-01-01

    Residents play a significant role in teaching, but formal training, feedback, and evaluation are needed. Our aims were to assess resident teaching skills in the resident-as-teacher program, quantify correlations of faculty evaluations with resident self-evaluations, compare resident-as-teacher evaluations with clinical evaluations, and evaluate the resident-as-teacher program. The resident-as-teacher training program is a simulated, videotaped teaching encounter with a trained medical student and standardized teaching evaluation tool. Evaluations from the resident-as-teacher training program were compared to evaluations of resident teaching done by faculty, residents, and medical students from the clinical setting. Faculty evaluation of resident teaching skills in the resident-as-teacher program showed a mean total score of 4.5 ± 0.5 with statistically significant correlations between faculty assessment and resident self-evaluations (r = 0.47; p < 0.001). However, resident self-evaluation of teaching skill was lower than faculty evaluation (mean difference: 0.4; 95% CI 0.3-0.6). When compared to the clinical setting, resident-as-teacher evaluations were significantly correlated with faculty and resident evaluations, but not medical student evaluations. Evaluations from both the resident-as-teacher program and the clinical setting improved with duration of residency. The resident-as-teacher program provides a method to train, give feedback, and evaluate resident teaching.

  6. Steps to a HealthierUS Cooperative Agreement Program: foundational elements for program evaluation planning, implementation, and use of findings.

    PubMed

    MacDonald, Goldie; Garcia, Danyael; Zaza, Stephanie; Schooley, Michael; Compton, Don; Bryant, Terry; Bagnol, Lulu; Edgerly, Cathy; Haverkate, Rick

    2006-01-01

    The Steps to a HealthierUS Cooperative Agreement Program (Steps Program) enables funded communities to implement chronic disease prevention and health promotion efforts to reduce the burden of diabetes, obesity, asthma, and related risk factors. At both the national and community levels, investment in surveillance and program evaluation is substantial. Public health practitioners engaged in program evaluation planning often identify desired outcomes, related indicators, and data collection methods but may pay only limited attention to an overarching vision for program evaluation among participating sites. We developed a set of foundational elements to provide a vision of program evaluation that informs the technical decisions made throughout the evaluation process. Given the diversity of activities across the Steps Program and the need for coordination between national- and community-level evaluation efforts, our recommendations to guide program evaluation practice are explicit yet leave room for site-specific context and needs. Staff across the Steps Program must consider these foundational elements to prepare a formal plan for program evaluation. Attention to each element moves the Steps Program closer to well-designed and complementary plans for program evaluation at the national, state, and community levels.

  7. Guidelines for the Evaluation of Bilingual Education Programs.

    ERIC Educational Resources Information Center

    Cardoza, Desdemona

    Principles of program evaluation research are outlined so that bilingual education program coordinators can conduct methodologically acceptable program evaluations. The three basic principles of evaluation research are: identification of the program participants, definition of the program intervention, and assessment of program effectiveness.…

  8. Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis.

    PubMed

    Chen, Huey T

    2016-12-01

    Theories of program and theories of evaluation form the foundation of program evaluation theories. Theories of program reflect assumptions on how to conceptualize an intervention program for evaluation purposes, while theories of evaluation reflect assumptions on how to design useful evaluation. These two types of theories are related but often discussed separately. This paper attempts to use three theoretical perspectives (reductionism, systems thinking, and pragmatic synthesis) to interface them and discuss the implications for evaluation practice. Reductionism proposes that an intervention program can be broken into crucial components for rigorous analyses; systems thinking views an intervention program as dynamic and complex, requiring a holistic examination. In spite of their contributions, reductionism and systems thinking represent the extreme ends of a theoretical spectrum; many real-world programs, however, may fall in the middle. Pragmatic synthesis is being developed to serve these moderate-complexity programs. These three theoretical perspectives have their own strengths and challenges. Knowledge of these three perspectives and their evaluation implications can provide a better guide for designing fruitful evaluations, improving the quality of evaluation practice, informing potential areas for developing cutting-edge evaluation approaches, and contributing to advancing program evaluation toward a mature applied science. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Maximizing the Impact of Program Evaluation: A Discrepancy-Based Process for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    This paper describes a formative/summative process for educational program evaluation, which is appropriate for higher education programs and is based on M. Provus' Discrepancy Evaluation Model and the principles of instructional design. The Discrepancy Based Methodology for Educational Program Evaluation facilitates systematic and detailed…

  10. Curated Collections for Educators: Five Key Papers about Program Evaluation.

    PubMed

    Thoma, Brent; Gottlieb, Michael; Boysen-Osborn, Megan; King, Andrew; Quinn, Antonia; Krzyzaniak, Sara; Pineda, Nicolas; Yarris, Lalena M; Chan, Teresa

    2017-05-04

    The evaluation of educational programs has become an expected part of medical education. At some point, all medical educators will need to critically evaluate the programs that they deliver. However, the evaluation of educational programs requires a very different skillset than teaching. In this article, we aim to identify and summarize key papers that would be helpful for faculty members interested in exploring program evaluation. In November of 2016, the 2015-2016 Academic Life in Emergency Medicine (ALiEM) Faculty Incubator program highlighted key papers in a discussion of program evaluation. This list of papers was augmented with suggestions by guest experts and by an open call on Twitter. This resulted in a list of 30 papers on program evaluation. Our authorship group then engaged in a process akin to a Delphi study to build consensus on the most important papers about program evaluation for medical education faculty. We present our group's top five most highly rated papers on program evaluation. We also summarize these papers with respect to their relevance to junior medical education faculty members and faculty developers. Program evaluation is challenging. The described papers will be informative for junior faculty members as they aim to design literature-informed evaluations for their educational programs.

  11. Curated Collections for Educators: Five Key Papers about Program Evaluation

    PubMed Central

    Gottlieb, Michael; Boysen-Osborn, Megan; King, Andrew; Quinn, Antonia; Krzyzaniak, Sara; Pineda, Nicolas; Yarris, Lalena M; Chan, Teresa

    2017-01-01

    The evaluation of educational programs has become an expected part of medical education. At some point, all medical educators will need to critically evaluate the programs that they deliver. However, the evaluation of educational programs requires a very different skillset than teaching. In this article, we aim to identify and summarize key papers that would be helpful for faculty members interested in exploring program evaluation. In November of 2016, the 2015-2016 Academic Life in Emergency Medicine (ALiEM) Faculty Incubator program highlighted key papers in a discussion of program evaluation. This list of papers was augmented with suggestions by guest experts and by an open call on Twitter. This resulted in a list of 30 papers on program evaluation. Our authorship group then engaged in a process akin to a Delphi study to build consensus on the most important papers about program evaluation for medical education faculty. We present our group’s top five most highly rated papers on program evaluation. We also summarize these papers with respect to their relevance to junior medical education faculty members and faculty developers. Program evaluation is challenging. The described papers will be informative for junior faculty members as they aim to design literature-informed evaluations for their educational programs. PMID:28589073

  12. Developing Your Evaluation Plans: A Critical Component of Public Health Program Infrastructure.

    PubMed

    Lavinghouze, S Rene; Snyder, Kimberly

    A program's infrastructure is often cited as critical to public health success. The Component Model of Infrastructure (CMI) identifies evaluation as essential under the core component of engaged data. An evaluation plan is a written document that describes how to monitor and evaluate a program, as well as how to use evaluation results for program improvement and decision making. The evaluation plan clarifies how to describe what the program did, how it worked, and why outcomes matter. We use the Centers for Disease Control and Prevention's (CDC) "Framework for Program Evaluation in Public Health" as a guide for developing an evaluation plan. Just as using a roadmap facilitates progress on a long journey, a well-written evaluation plan can clarify the direction your evaluation takes and facilitate achievement of the evaluation's objectives.

  13. Lessons from the trenches: meeting evaluation challenges in school health education.

    PubMed

    Young, Michael; Denny, George; Donnelly, Joseph

    2012-11-01

    Those involved in school health education programs generally believe that health-education programs can play an important role in helping young people make positive health decisions. Thus, it is important to document the effects of such programs through rigorous evaluations published in peer-reviewed journals. This paper helps the reader understand the context of school health program evaluation, examines several problems and challenges, shows how problems can often be fixed or prevented, and demonstrates ways in which challenges can be met. A number of topics are addressed, including distinguishing between curriculum evaluation and evaluation of outcomes, types of evaluation, identifying stakeholders in school health evaluation, selection of a program evaluator, recruiting participants, design issues, staff training, parental consent, instrumentation, program implementation and treatment fidelity, participant retention, data collection, data analysis and interpretation, presentation of results, and manuscript preparation and submission. Although there is a lack of health-education program evaluation, the rigorous evaluations that have been conducted have, at least in some cases, led to wider dissemination of effective programs. These suggestions will help those interested in school health education understand the importance of evaluation and will provide important guidelines for those conducting evaluations of school health-education programs. © 2012, American School Health Association.

  14. Program Evaluation of a Special Education Day School for Conduct Problem Adolescents.

    ERIC Educational Resources Information Center

    Maher, Charles A.

    1981-01-01

    Describes a procedure for program evaluation of a special education day school. The procedure enables a program evaluator to: (1) identify priority evaluation information needs of a school staff, (2) involve those persons in evaluation design and implementation, and (3) determine the utility of the evaluation for program decision-making purposes.…

  15. Evaluation Planning, Evaluation Management, and Utilization of Evaluation Results within Adult Literacy Campaigns, Programs and Projects (with Implications for Adult Basic Education and Nonformal Education Programs in General). A Working Paper.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    Addressed to professionals involved in program evaluation, this working paper covers various aspects of evaluation planning, including the following: planning as a sociotechnical process, steps in evaluation planning, program planning and implementation versus evaluation planning and implementation, the literacy system and its subsystems, and some…

  16. The State of Evaluation in Internal Medicine Residency

    PubMed Central

    Holmboe, Eric; Beasley, Brent W.

    2008-01-01

    Background There are no nationwide data on the methods residency programs are using to assess trainee competence. The Accreditation Council for Graduate Medical Education (ACGME) has recommended tools that programs can use to evaluate their trainees. It is unknown if programs are adhering to these recommendations. Objective To describe evaluation methods used by our nation’s internal medicine residency programs and assess adherence to ACGME methodological recommendations for evaluation. Design Nationwide survey. Participants All internal medicine programs registered with the Association of Program Directors of Internal Medicine (APDIM). Measurements Descriptive statistics of programs and tools used to evaluate competence; compliance with ACGME recommended evaluative methods. Results The response rate was 70%. Programs were using an average of 4.2–6.0 tools to evaluate their trainees, with heavy reliance on rating forms. Direct observation and practice- and data-based tools were used much less frequently. Most programs were using at least 1 of the ACGME’s “most desirable” methods of evaluation for all 6 measures of trainee competence. These programs had higher support staff to resident ratios than programs using less desirable evaluative methods. Conclusions Residency programs are using a large number and variety of tools for evaluating the competence of their trainees. Most are complying with ACGME recommended methods of evaluation, especially if the support staff to resident ratio is high. PMID:18612734

  17. Evaluation readiness: improved evaluation planning using a data inventory framework.

    PubMed

    Cohen, A B; Hall, K C; Cohodes, D R

    1985-01-01

    Factors intrinsic to many programs, such as ambiguously stated objectives, inadequately defined performance measures, and incomplete or unreliable databases, often conspire to limit the evaluability of these programs. Current evaluation planning approaches are somewhat constrained in their ability to overcome these obstacles and to achieve full preparedness for evaluation. In this paper, the concept of evaluation readiness is introduced as a complement to other evaluation planning approaches, most notably that of evaluability assessment. The basic products of evaluation readiness--the formal program definition and the data inventory framework--are described, along with a guide for assuring more timely and appropriate evaluation response capability to support the decision making needs of program managers. The utility of evaluation readiness for program planning, as well as for effective management, is also discussed.

  18. Educational Evaluation: Key Characteristics. ACER Research Series No. 102.

    ERIC Educational Resources Information Center

    Maling-Keepes, Jillian

    A set of 13 key characteristics is presented as a framework for educational evaluation studies: (1) program's stage of development when evaluator is appointed; (2) program's openness to revision; (3) program uniformity from site to site; (4) specificity of program objectives; (5) evaluator's independence; (6) evaluator's orientation to value…

  19. 42 CFR 491.11 - Program evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Program evaluation. 491.11 Section 491.11 Public... Certification; and FQHCs Conditions for Coverage § 491.11 Program evaluation. (a) The clinic or center carries out, or arranges for, an annual evaluation of its total program. (b) The evaluation includes review of...

  20. Evaluating Educational Programs. ERIC Digest Series Number EA 54.

    ERIC Educational Resources Information Center

    Beswick, Richard

    In this digest, readers are introduced to the scope of instructional program evaluation and evaluators' changing roles in school districts. A program evaluation measures outcomes based on student-attainment goals, implementation levels, and external factors such as budgetary restraints and community support. Instructional program evaluation may be…

  1. A Culturally Responsive Evaluation Approach Applied to the Talent Development School-to-Career Intervention Program

    ERIC Educational Resources Information Center

    Manswell-Butty, Jo-Anne L.; Reid, Malva Daniel; LaPoint, Velma

    2004-01-01

    Program evaluation has long been used to reveal program characteristics, merits, and challenges. While providing information about program effectiveness, evaluations can also ensure understanding of program outcomes, efficiency, and quality. Furthermore, evaluations can analyze and examine a program's political and social environment as well as…

  2. Evaluation of NASA space grant consortia programs

    NASA Technical Reports Server (NTRS)

    Eisenberg, Martin A.

    1990-01-01

    The meaningful evaluation of the NASA Space Grant Consortium and Fellowship Programs must overcome unusual difficulties: (1) the program, in its infancy, is undergoing dynamic change; (2) the several state consortia and universities have widely divergent parochial goals that defy a uniform evaluative process; and (3) the pilot-sized consortium programs require that the evaluative process be economical in human costs, lest the process of evaluation compromise the effectiveness of the programs it is meant to assess. This paper attempts to assess the context in which evaluation is to be conducted and the goals and limitations inherent to the evaluation, and recommends appropriate guidelines for evaluation.

  3. EVALUE : a computer program for evaluating investments in forest products industries

    Treesearch

    Peter J. Ince; Philip H. Steele

    1980-01-01

    EVALUE, a FORTRAN program, was developed to provide a framework for cash flow analysis of investment opportunities. EVALUE was designed to assist researchers in evaluating investment feasibility of new technology or new manufacturing processes. This report serves as user documentation for the EVALUE program. EVALUE is briefly described and notes on preparation of a...
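    To make the record concrete, the kind of cash-flow computation a tool like EVALUE automates can be sketched as below. This is a minimal, illustrative sketch of discounted cash-flow (net present value) analysis; the function name `npv` and the example figures are invented for illustration and are not EVALUE's actual interface or algorithm.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of a series of cash flows, one per period.

    cash_flows[0] occurs now (period 0), cash_flows[1] after one
    period, and so on; each is discounted by (1 + rate) ** t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: invest 1000 now, receive 400 per year for 3 years, 10% rate.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(npv(0.10, flows), 2))  # -5.26: marginally unprofitable
```

    A negative NPV at the chosen discount rate signals that the investment does not cover its cost of capital, which is the feasibility question such programs were built to answer.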

  4. Using Evaluability Assessment to Improve Program Evaluation for the Blue-Throated Macaw Environmental Education Project in Bolivia

    ERIC Educational Resources Information Center

    Salvatierra da Silva, Daniela; Jacobson, Susan K.; Monroe, Martha C.; Israel, Glenn D.

    2016-01-01

    An evaluability assessment of a program to save a critically endangered bird helped prepare the Blue-throated Macaw Environmental Education Project for evaluation and program improvement. The evaluability assessment facilitated agreement among key stakeholders on evaluation criteria and intended uses of evaluation information in order to maximize…

  5. Planning Evaluation through the Program Life Cycle

    ERIC Educational Resources Information Center

    Scheirer, Mary Ann; Mark, Melvin M.; Brooks, Ariana; Grob, George F.; Chapel, Thomas J.; Geisz, Mary; McKaughan, Molly; Leviton, Laura

    2012-01-01

    Linking evaluation methods to the several phases of a program's life cycle can provide evaluation planners and funders with guidance about what types of evaluation are most appropriate over the trajectory of social and educational programs and other interventions. If methods are matched to the needs of program phases, evaluation can and should…

  6. The Evaluator's Role in Recommending Program Closure: A Model for Decision Making and Professional Responsibility

    ERIC Educational Resources Information Center

    Eddy, Rebecca M.; Berry, Tiffany

    2009-01-01

    Evaluators face challenges when programs consistently fail to meet expectations for performance or improvement and consequently, evaluators may recommend that closing a program is the most prudent course of action. However, the evaluation literature provides little guidance regarding when an evaluator might recommend program closure. Given…

  7. A Qualitative Program Evaluation of a Structured Leadership Mentoring Program at a Large Aerospace Corporation

    ERIC Educational Resources Information Center

    Teller, Romney P.

    2011-01-01

    The researcher utilized a qualitative approach to conduct a program evaluation of the organization where he is employed. The study intended to serve as a program evaluation for the structured in-house mentoring program at a large aerospace corporation (A-Corp). This program evaluation clarified areas in which the current mentoring program is…

  8. Let's get technical: Enhancing program evaluation through the use and integration of internet and mobile technologies.

    PubMed

    Materia, Frank T; Miller, Elizabeth A; Runion, Megan C; Chesnut, Ryan P; Irvin, Jamie B; Richardson, Cameron B; Perkins, Daniel F

    2016-06-01

    Program evaluation has become increasingly important, and information on program performance often drives funding decisions. Technology use and integration can help ease the burdens associated with program evaluation by reducing the resources needed (e.g., time, money, staff) and increasing evaluation efficiency. This paper reviews how program evaluators, across disciplines, can apply internet and mobile technologies to key aspects of program evaluation, which consist of participant registration, participant tracking and retention, process evaluation (e.g., fidelity, assignment completion), and outcome evaluation (e.g., behavior change, knowledge gain). In addition, the paper focuses on the ease of use, relative cost, and fit with populations. An examination of how these tools can be integrated to enhance data collection and program evaluation is discussed. Important limitations of and considerations for technology integration, including the level of technical skill, cost needed to integrate various technologies, data management strategies, and ethical considerations, are highlighted. Lastly, a case study of technology use in an evaluation conducted by the Clearinghouse for Military Family Readiness at Penn State is presented and illustrates how technology integration can enhance program evaluation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The opportunities and challenges of multi-site evaluations: lessons from the jail diversion and trauma recovery national cross-site evaluation.

    PubMed

    Stainbrook, Kristin; Penney, Darby; Elwyn, Laura

    2015-06-01

    Multi-site evaluations, particularly of federally funded service programs, pose a special set of challenges for program evaluation. Not only are there contextual differences related to project location, there are often relatively few programmatic requirements, which results in variations in program models, target populations and services. The Jail Diversion and Trauma Recovery-Priority to Veterans (JDTR) National Cross-Site Evaluation was tasked with conducting a multi-site evaluation of thirteen grantee programs that varied along multiple domains. This article describes the use of a mixed methods evaluation design to understand the jail diversion programs and client outcomes for veterans with trauma, mental health and/or substance use problems. We discuss the challenges encountered in evaluating diverse programs, the benefits of the evaluation in the face of these challenges, and offer lessons learned for other evaluators undertaking this type of evaluation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Critical evaluation of international health programs: Reframing global health and evaluation.

    PubMed

    Chi, Chunhuei; Tuepker, Anaïs; Schoon, Rebecca; Núñez Mondaca, Alicia

    2018-04-01

    Striking changes in the funding and implementation of international health programs in recent decades have stimulated debate about the role of communities in deciding which health programs to implement. An important yet neglected piece of that discussion is the need to change norms in program evaluation so that analysis of community ownership, beyond various degrees of "participation," is seen as central to strong evaluation practices. This article challenges mainstream evaluation practices and proposes a framework of Critical Evaluation with 3 levels: upstream evaluation assessing the "who" and "how" of programming decisions; midstream evaluation focusing on the "who" and "how" of selecting program objectives; and downstream evaluation, the focus of current mainstream evaluation, which assesses whether the program achieved its stated objectives. A vital tenet of our framework is that a community possesses the right to determine the path of its health development. A prerequisite of success, regardless of technical outcomes, is that programs must address communities' high priority concerns. Current participatory methods still seldom practice community ownership of program selection because they are vulnerable to funding agencies' predetermined priorities. In addition to critiquing evaluation practices and proposing an alternative framework, we acknowledge likely challenges and propose directions for future research. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Feedback Improvement in Automatic Program Evaluation Systems

    ERIC Educational Resources Information Center

    Skupas, Bronius

    2010-01-01

    Automatic program evaluation is a way to assess source program files. These techniques are used in learning management environments, programming exams, and contest systems. However, the use of automated program evaluation encounters problems: some evaluations are not clear to students, and the system messages do not show the reasons for lost points.…

  12. Evaluating Educational Programs.

    ERIC Educational Resources Information Center

    Ball, Samuel

    The activities of Educational Testing Service (ETS) in evaluating educational programs are described. Program evaluations are categorized as needs assessment, formative evaluation, or summative evaluation. Three classic efforts which illustrate the range of ETS' participation are the Pennsylvania Goals Study (1965), the Coleman Report--Equality of…

  13. Evaluating Faculty Development and Clinical Training Programs in Substance Abuse: A Guide Book.

    ERIC Educational Resources Information Center

    Klitzner, Michael; Stewart, Kathryn

    Intended to provide an overview of program evaluation as it applies to the evaluation of faculty development and clinical training programs in substance abuse for health and mental health professional schools, this guide enables program developers and other faculty to work as partners with evaluators in the development of evaluation designs that…

  14. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  15. Learning From Small-Scale Experimental Evaluations of After School Programs. Snapshot Number 8

    ERIC Educational Resources Information Center

    Harvard Family Research Project, Harvard University, 2006

    2006-01-01

    The Harvard Family Research Project (HFRP) Out-of-School Time Program Evaluation Database contains profiles of out-of-school time (OST) program evaluations. Its purpose is to provide accessible information about previous and current evaluations to support the development of high quality evaluations and programs in the OST field. Types of Programs…

  16. The Software Line-up: What Reviewers Look for When Evaluating Software.

    ERIC Educational Resources Information Center

    Electronic Learning, 1982

    1982-01-01

    Contains a checklist to aid teachers in evaluating software used in computer-assisted instruction on microcomputers. The evaluation form contains three sections: program description, program evaluation, and overall evaluation. A brief description of a software evaluation program in use at the Granite School District in Utah is included. (JJD)

  17. Using Program Theory-Driven Evaluation Science to Crack the Da Vinci Code

    ERIC Educational Resources Information Center

    Donaldson, Stewart I.

    2005-01-01

    Program theory-driven evaluation science uses substantive knowledge, as opposed to method proclivities, to guide program evaluations. It aspires to update, clarify, simplify, and make more accessible the evolving theory of evaluation practice commonly referred to as theory-driven or theory-based evaluation. The evaluator in this chapter provides a…

  18. An Evaluation System for the Online Training Programs in Meteorology and Hydrology

    ERIC Educational Resources Information Center

    Wang, Yong; Zhi, Xiefei

    2009-01-01

    This paper studies the current evaluation system for the online training program in meteorology and hydrology. The CIPP model, which includes context evaluation, input evaluation, process evaluation, and product evaluation, differs from the Kirkpatrick model, which includes reactions evaluation, learning evaluation, transfer evaluation, and results evaluation, in…

  19. Evaluating programs that address ideological issues: ethical and practical considerations for practitioners and evaluators.

    PubMed

    Lieberman, Lisa D; Fagen, Michael C; Neiger, Brad L

    2014-03-01

    There are important practical and ethical considerations for organizations in conducting their own, or commissioning external, evaluations, and for both practitioners and evaluators, when assessing programs built on strongly held ideological or philosophical approaches. Assessing whether programs "work" has strong political, financial, and/or moral implications, particularly when expending public dollars, and may challenge objectivity about a particular program or approach. Using a case study of the evaluation of a school-based abstinence-until-marriage program, this article discusses the challenges, lessons learned, and ethical responsibilities regarding decisions about evaluation, specifically associated with ideologically driven programs. Organizations should consider the various stakeholders and views associated with their program to help identify potential pitfalls in evaluation. Once identified, the program or agency needs to carefully consider its answers to two key questions: Does it want the answer, and is it willing to modify the program? Having decided to evaluate, the choice of evaluator is critical to assuring that ethical principles are maintained and that potential skepticism or criticism of findings can be addressed appropriately. The relationship between program and evaluator, including agreements about ownership and eventual publication and/or promotion of data, should be addressed at the outset. Programs and organizations should also consider, at the outset, their ethical responsibility when findings are not as expected or desired. Ultimately, agencies, organizations, and programs have an ethical responsibility to use their data to provide health promotion programs, whether ideologically founded or not, that appropriately and effectively address the problems they seek to solve.

  20. Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches

    ERIC Educational Resources Information Center

    O'Sullivan, Rita G.

    2012-01-01

    Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff,…

  1. Evaluation of Career Guidance Programs: Models, Methods, and Microcomputers. Information Series No. 317.

    ERIC Educational Resources Information Center

    Crites, John O.

    Evaluating the effectiveness of career guidance programs is a complex process, and few comprehensive models for evaluating such programs exist. Evaluation of career guidance programs has been hampered by the myth that program outcomes are uniform and monolithic. Findings from studies of attribute treatment interactions have revealed only a few…

  2. Challenges to Evaluating Physical Activity Programs in American Indian/Alaska Native Communities

    ERIC Educational Resources Information Center

    Roberts, Erica Blue; Butler, James; Green, Kerry M.

    2018-01-01

    Despite the importance of evaluation to successful programming, a lack of physical activity program (PAP) evaluation for American Indian/Alaska Native (AI/AN) programs exists, which is significant given the high rates of obesity and diabetes in this population. While evaluation barriers have been identified broadly among AI/AN programs, challenges…

  3. Evaluating Evaluations: The Case of Parent Involvement Programs

    ERIC Educational Resources Information Center

    Mattingly, Doreen J.; Prislin, Radmila; McKenzie, Thomas L.; Rodriguez, James L.; Kayzar, Brenda

    2002-01-01

    This article analyzes 41 studies that evaluated K-12 parent involvement programs in order to assess claims that such programs are an effective means of improving student learning. It examines the characteristics of the parent involvement programs, as well as the research design, data, and analytical techniques used in program evaluation. Our…

  4. Evaluation of programs to improve complementary feeding in infants and young children.

    PubMed

    Frongillo, Edward A

    2017-10-01

    Evaluation of complementary feeding programs is needed to enhance knowledge of what works, to document responsible use of resources, and to support advocacy. Evaluation is done during program conceptualization and design, implementation, and determination of effectiveness. This paper explains the role of evaluation in the advancement of complementary feeding programs, presenting concepts and methods and illustrating them through examples. Planning and investments for evaluations should occur from the beginning of the project life cycle. Essential to evaluation is articulation of a program theory on how change would occur and what program actions are required for change. Analysis of program impact pathways makes explicit the dynamic connections in the program theory and accounts for contextual factors that could influence program effectiveness. Evaluating implementation functioning is done through addressing questions about needs, coverage, provision, and utilization using information obtained from process evaluation, operations research, and monitoring. Evaluating effectiveness is done through assessing impact, efficiency, coverage, process, and causality. Plausibility designs ask whether the program seemed to have an effect above and beyond external influences, often using a nonrandomized control group and baseline and endline measures. Probability designs ask whether there was an effect, using a randomized control group. Evaluations may not be able to use randomization, particularly for programs implemented at a large scale. Plausibility designs, innovative designs, or innovative combinations of designs are sometimes best able to provide useful information. Further work is needed to develop practical designs for evaluation of large-scale country programs on complementary feeding. © 2017 John Wiley & Sons Ltd.
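
    The distinction the abstract draws between probability and plausibility designs comes down to how the counterfactual is estimated. A minimal sketch of the probability-design case, where randomization makes a simple difference in group means a defensible effect estimate (the outcome scores and assignments below are hypothetical, not from the paper):

```python
import statistics

def estimate_effect(outcomes, assignments):
    """Difference-in-means impact estimate for a probability design:
    mean outcome of the randomized treatment group minus that of controls."""
    treated = [y for y, a in zip(outcomes, assignments) if a == 1]
    control = [y for y, a in zip(outcomes, assignments) if a == 0]
    return statistics.mean(treated) - statistics.mean(control)

# Hypothetical endline scores and randomized 1/0 group assignments
outcomes    = [6, 4, 7, 5, 8, 3, 6, 4]
assignments = [1, 0, 1, 0, 1, 0, 1, 0]
print(estimate_effect(outcomes, assignments))  # 6.75 - 4.0 = 2.75
```

    In a plausibility design the control group is not randomized, so the same arithmetic supports only a plausible, not probabilistic, attribution of the difference to the program.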

  5. Dissemination and implementation science in program evaluation: A telemental health clinical consultation case example.

    PubMed

    Arora, Prerna G; Connors, Elizabeth H; Blizzard, Angela; Coble, Kelly; Gloff, Nicole; Pruitt, David

    2017-02-01

    Increased attention has been placed on evaluating the extent to which clinical programs that support the behavioral health needs of youth have effective processes and result in improved patient outcomes. Several theoretical frameworks from dissemination and implementation (D&I) science have been put forth to guide the evaluation of behavioral health programs implemented in real-world settings. Although a strong rationale exists for integrating D&I science into program evaluation, few examples are available to guide the evaluator in integrating D&I science into the planning and execution of evaluation activities. This paper seeks to inform program evaluation efforts by outlining two D&I frameworks and describing their integration in program evaluation design. Specifically, this paper seeks to support evaluation efforts by illustrating the use of these frameworks via a case example of a telemental health consultation program in pediatric primary care designed to improve access to behavioral health care for children and adolescents in rural settings. Lessons learned from this effort, as well as recommendations regarding the future evaluation of programs using D&I science to support behavioral health care in community-based settings, are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Evaluation of Environmental Education in Schools.

    ERIC Educational Resources Information Center

    Connect, 1984

    1984-01-01

    This newsletter discusses the evaluation of environmental education (EE) in schools, highlighting an introductory chapter of a proposed Unesco-United Nations environmental program guide on evaluating such programs. The benefits of evaluating an EE program (including program improvement, growth in student learning, better environment, and program…

  7. Development, implementation, and evaluation of a multi-addiction prevention program for primary school students in Hong Kong: the B.E.S.T. Teen Program.

    PubMed

    Shek, Daniel T L; Yu, Lu; Leung, Hildie; Wu, Florence K Y; Law, Moon Y M

    Based on the evaluation findings of the B.E.S.T. Teen Program, which aimed at promoting behavioral, emotional, social, and thinking competencies in primary school students, it is argued in this paper that promoting psychosocial competence to prevent addiction in primary school students is a promising strategy. A total of 382 Primary 5 (Grade 5) and 297 Primary 6 (Grade 6) students from five primary schools in Hong Kong participated in the program. Different evaluation strategies were adopted to evaluate the program. First, objective outcome evaluation adopting a non-equivalent group pretest-posttest experimental-control group design was conducted to examine change in the students. Second, to gauge students' perceptions of the program, subjective outcome evaluation was conducted. The evaluation findings converged to tentatively suggest that young adolescents benefited from participating in the program. Implications for the development, implementation, and evaluation of addiction prevention programs for teenagers are discussed.

  8. When Unintended Consequences Become the Main Effect: Evaluating the Development of a Foster Parent Training Program.

    ERIC Educational Resources Information Center

    Loesch-Griffin, Deborah A.; Ringstaff, Cathy

    A program of education, training, and support provided to foster parents in a California county through a nonprofit agency is evaluated. The evaluators' experience indicates that: (1) evaluations are gaining in popularity; (2) role shifts by evaluators are sometimes difficult to perceive; (3) program staff are unlikely to use evaluative feedback…

  9. An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice.

    PubMed

    Rorrer, Audrey S

    2016-04-01

    This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Sciences and Engineering Research Experiences for Undergraduates (CISE REU) programs to address the ongoing need for evaluation capacity among principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community, with the purpose of providing targeted instructional resources and tools for quality program evaluation. A key challenge was to balance the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Benefits of toolkit deployment included cost-effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts. Copyright © 2016. Published by Elsevier Ltd.

  10. General Criteria for Evaluating Social Programs.

    ERIC Educational Resources Information Center

    Shipman, Stephanie

    1989-01-01

    A framework of general evaluation criteria for ensuring the comprehensiveness of program reviews and appropriate and fair comparison of children's programs is outlined. It has two components: (1) descriptive; and (2) evaluative. The framework was developed by researchers at the General Accounting Office for evaluation of federal programs. (TJH)

  11. 29 CFR 1960.80 - Secretary's evaluations of agency occupational safety and health programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs § 1960.80 Secretary's evaluations of agency occupational safety and health... evaluating an agency's occupational safety and health program. To accomplish this, the Secretary shall...

  12. 29 CFR 1960.80 - Secretary's evaluations of agency occupational safety and health programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs § 1960.80 Secretary's evaluations of agency occupational safety and health... evaluating an agency's occupational safety and health program. To accomplish this, the Secretary shall...

  13. 29 CFR 1960.80 - Secretary's evaluations of agency occupational safety and health programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs § 1960.80 Secretary's evaluations of agency occupational safety and health... evaluating an agency's occupational safety and health program. To accomplish this, the Secretary shall...

  14. 29 CFR 1960.80 - Secretary's evaluations of agency occupational safety and health programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs § 1960.80 Secretary's evaluations of agency occupational safety and health... evaluating an agency's occupational safety and health program. To accomplish this, the Secretary shall...

  15. Program Evaluation Interest and Skills of School Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.

    2017-01-01

    School counselors participated in a study examining their program evaluation interest and skills. Findings suggest that school counselors understand the importance of program evaluation, yet they may lack the skills and confidence to successfully engage in program evaluation activities. Professional development training may be an important method…

  16. 29 CFR 1960.80 - Secretary's evaluations of agency occupational safety and health programs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs § 1960.80 Secretary's evaluations of agency occupational safety and health... evaluating an agency's occupational safety and health program. To accomplish this, the Secretary shall...

  17. From Implementation to Outcomes to Impacts: Designing a Comprehensive Program Evaluation

    NASA Astrophysics Data System (ADS)

    Shebby, S.

    2015-12-01

    Funders are often interested in learning about the impact of program activities, yet before impacts are determined, educational evaluations should first examine program implementation and outcomes. Implementation evaluation examines whether, and to what extent, program activities are delivered as intended, including the extent to which activities reach the targeted participants. Outcome evaluation comprises a systematic examination of the effects that a program has on its participants, such as changes in knowledge, attitudes, beliefs, values, and behaviors. In this presentation, presenters will share insights on evaluating the implementation, outcomes, and impacts associated with an online science curriculum for K-2 students. The science curriculum was designed to provide students with access to science concepts and skills in an interactive and innovative environment, and teachers with embedded, aligned, and on-demand professional development. One of the most important, and most challenging, steps in this evaluation was to select outcomes that were well defined, measurable, and aligned to program activities, as well as relevant to program stakeholders. An additional challenge was to measure implementation given limited access to the classroom environment. This presentation will include a discussion of the process evaluators used to select appropriate implementation indicators and outcomes (teacher and student), design an evaluation approach, and craft data collection instruments. Although the examples provided are specific to the K-2 science intervention, the best practices discussed are pertinent to all program and event evaluations. Impact evaluation goes beyond implementation and outcome evaluation to inform whether a program is working or not. It requires a comparison group to indicate what outcomes would have been in the absence of the intervention. As such, this presentation will also include a discussion of impacts, including how impacts are defined and measured, and some common challenges in evaluating program impact.
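
    When baseline and endline measures exist for both the program group and the comparison group, the comparison-group logic described in this abstract is often operationalized as a difference-in-differences estimate. A minimal sketch (all score values are hypothetical, not taken from the presentation):

```python
def difference_in_differences(pre_program, post_program,
                              pre_comparison, post_comparison):
    """Impact estimate: the program group's change over time minus the
    comparison group's change, which stands in for the no-program
    counterfactual."""
    return (post_program - pre_program) - (post_comparison - pre_comparison)

# Hypothetical mean assessment scores before and after the curriculum
print(difference_in_differences(50.0, 62.0, 51.0, 55.0))
# (62 - 50) - (55 - 51) = 8.0
```

    The comparison group's change absorbs maturation and other outside influences shared by both groups, which is exactly why impact claims require it.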

  18. Evaluating Social Programs at the State and Local Level. The JTPA Evaluation Design Project.

    ERIC Educational Resources Information Center

    Blalock, Ann Bonar, Ed.; And Others

    This book on evaluating social programs is an outcome of the Job Training Partnership Act (JTPA) Evaluation Design Project, which produced a set of 10 guides for the evaluation of state and local JTPA programs. This book distills ideas from these guides and applies them to a larger context. Part 1 presents a general approach to program evaluation…

  19. Using Curriculum-Based Measurements for Program Evaluation: Expanding Roles for School Psychologists

    ERIC Educational Resources Information Center

    Tusing, Mary E.; Breikjern, Nicholle A.

    2017-01-01

    Educators increasingly need to evaluate schoolwide reform efforts; however, complex program evaluations often are not feasible in schools. Through a case example, we provide a heuristic for program evaluation that is easily replicated in schools. Criterion-referenced interpretations of schoolwide screening data were used to evaluate outcomes…

  20. Using Evaluation and Research Theory to Improve Programs in Applied Settings: An Interview with Thomas D. Cook.

    ERIC Educational Resources Information Center

    Buescher, Thomas M.

    1986-01-01

    An interview with T. Cook, author of works on the use of research and evaluation theory and design, touches on such topics as practical evaluation, planning programs with evaluation or research design, and evaluation of programs for gifted students. (CL)

  1. Modification and Adaptation of the Program Evaluation Standards in Saudi Arabia

    ERIC Educational Resources Information Center

    Alyami, Mohammed

    2013-01-01

    The Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards is probably the most recognized and applied set of evaluation standards globally. The most recent edition of The Program Evaluation Standards includes five categories and 30 standards. The five categories are Utility, Feasibility, Propriety, Accuracy, and…

  2. Evaluation Strategies in Financial Education: Evaluation with Imperfect Instruments

    ERIC Educational Resources Information Center

    Robinson, Lauren; Dudensing, Rebekka; Granovsky, Nancy L.

    2016-01-01

    Program evaluation often suffers due to time constraints, imperfect instruments, incomplete data, and the need to report standardized metrics. This article about the evaluation process for the Wi$eUp financial education program showcases the difficulties inherent in evaluation and suggests best practices for assessing program effectiveness. We…

  3. Library Programs. Evaluating Federally Funded Public Library Programs.

    ERIC Educational Resources Information Center

    Office of Educational Research and Improvement (ED), Washington, DC.

    Following an introduction by Betty J. Turock, nine reports examine key issues in library evaluation: (1) "Output Measures and the Evaluation Process" (Nancy A. Van House) describes measurement as a concept to be understood in the larger context of planning and evaluation; (2) "Adapting Output Measures to Program Evaluation"…

  4. Responsive Meta-Evaluation: A Participatory Approach to Enhancing Evaluation Quality

    ERIC Educational Resources Information Center

    Sturges, Keith M.; Howley, Caitlin

    2017-01-01

    In an era of ever-deepening budget cuts and a concomitant demand for substantiated programs, many organizations have elected to conduct internal program evaluations. Internal evaluations offer advantages (e.g., enhanced evaluator program knowledge and ease of data collection) but may confront important challenges, including credibility threats,…

  5. Experiments in evaluation capacity building: Enhancing brain disorders research impact in Ontario.

    PubMed

    Nylen, Kirk; Sridharan, Sanjeev

    2017-05-08

    This is the introductory paper for a forum on evaluation capacity building to enhance the impact of research on brain disorders. It describes challenges and opportunities in building evaluation capacity among community-based organizations in Ontario involved in enhancing brain health and supporting people living with a brain disorder. Using the example of a capacity-building program called the "Evaluation Support Program", which is run by the Ontario Brain Institute, this forum discusses multiple themes, including evaluation capacity building, evaluation culture, and evaluation methodologies appropriate for evaluating complex community interventions. The goal of the Evaluation Support Program is to help community-based organizations build the capacity to demonstrate the value they offer in order to improve, sustain, and spread their programs and activities. One feature of this forum is that perspectives on the Evaluation Support Program are provided by multiple stakeholders, including the community-based organizations, evaluation team members involved in capacity building, thought leaders in the fields of evaluation capacity building and evaluation culture, and the funders. Copyright © 2017. Published by Elsevier Ltd.

  6. Measuring Success: Evaluating Educational Programs

    ERIC Educational Resources Information Center

    Fisher, Yael

    2010-01-01

    This paper reveals a new evaluation model, which enables educational program and project managers to evaluate their programs with a simple and easy-to-understand approach. The "index of success model" comprises five parameters that enable managers to focus on and evaluate both the implementation and results of an educational program. The…

  7. The Role of Evaluation and Plans for Evaluating the Current Testing Program.

    ERIC Educational Resources Information Center

    Winters, Lynn

    The Palos Verdes Peninsula Unified School District Office of Program Evaluation and Research is responsible for providing information for program development and improvement; providing test information to special programs coordinators; and acting as a clearinghouse for all information concerning tests, evaluation methodology, and educational…

  8. Tools for Formative Evaluation: Gathering the Information Necessary for Program Improvement

    ERIC Educational Resources Information Center

    Jayaratne, K. S. U.

    2016-01-01

    New Extension educators experience a steep learning curve when attempting to develop effective Extension programs. Formative evaluation helps both new and experienced Extension educators determine the changes necessary for making programs more effective. Formative evaluation is an essential part of program evaluation. However, its use…

  9. Use of program logic models in the Southern Rural Access Program evaluation.

    PubMed

    Pathman, Donald; Thaker, Samruddhi; Ricketts, Thomas C; Albright, Jennifer B

    2003-01-01

    The Southern Rural Access Program (SRAP) evaluation team used program logic models to clarify grantees' activities, objectives, and timelines. This information was used to benchmark data from grantees' progress reports to assess the program's successes. This article presents a brief background on the use of program logic models--essentially charts or diagrams specifying a program's planned activities, objectives, and goals--for evaluating and managing a program. It discusses the structure of the logic models chosen for the SRAP and how the model concept was introduced to the grantees to promote acceptance and use of the models. The article describes how the models helped clarify the program's objectives and helped lead agencies plan and manage the many program initiatives and subcontractors in their states. The models also provided a framework for grantees to report their progress to the National Program Office and evaluators, and promoted the evaluators' visibility and acceptance by the grantees. Program logic models, however, increased grantees' reporting requirements and demanded substantial time from the evaluators. On balance, program logic models proved their merit in the SRAP through their contributions to its management and evaluation and by providing a better understanding of the program's initiatives, successes, and potential impact.
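
    Because a program logic model is essentially a structured chart, it can be sketched as a small data structure that links planned activities to objectives and goals and benchmarks progress reports against the plan. The class name, fields, and example entries below are hypothetical illustrations, not drawn from the SRAP materials:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal sketch of a program logic model: planned activities,
    the objectives they serve, and the program's overall goals."""
    activities: list
    objectives: list
    goals: list
    completed: set = field(default_factory=set)

    def mark_done(self, activity: str) -> None:
        self.completed.add(activity)

    def progress(self) -> float:
        """Share of planned activities reported complete -- the kind of
        benchmark an evaluator could apply to grantee progress reports."""
        return len(self.completed) / len(self.activities)

model = LogicModel(
    activities=["recruit clinics", "train staff", "open referral line"],
    objectives=["expand the rural provider network"],
    goals=["improve access to care"],
)
model.mark_done("recruit clinics")
print(round(model.progress(), 2))  # 0.33
```

    Representing the model this way also surfaces the reporting cost the abstract notes: each grantee activity must be tracked explicitly for the benchmark to mean anything.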

  10. Evaluating adolescent pregnancy programs: rethinking our priorities.

    PubMed

    Stahler, G J; DuCette, J P

    1991-01-01

    Noting that impact evaluations of adolescent pregnancy programs are characterized by poor quality, the authors recommend using a different standard in assessing the value of such programs. While the number of adolescent pregnancy programs has multiplied during the last three decades, little is known about their impact in ameliorating the negative consequences of too-early childbearing. An ideal evaluation of these programs would randomly select subjects and randomly assign them to experimental and control groups. But evaluations conducted by individual programs generally face obstacles that limit the randomness of the study: most individual programs lack the financial resources, and do not employ the full-time professional evaluators, needed to carry out a valid evaluation. These factors result in too short an evaluation period, incomplete and inaccurate data, and a lack of randomness in the assignment of control groups. To more accurately assess the impact of the programs, the authors recommend that individual programs focus on process evaluation and on collecting complete and reliable data on their clients. From the onset, a program should have a clear description of its content, logic of intervention, and method of implementation. It should maintain thorough records on client characteristics and service utilization, and should conduct long-term follow-ups. For rigorous impact evaluations, programs should rely on third-party entities. These independent organizations -- universities or research institutes -- do not have a stake in the outcome of the evaluation, making the study all the more objective. Furthermore, they provide experienced researchers.

  11. Collaborative evaluation of a high school prevention curriculum: How methods of collaborative evaluation enhanced a randomized control trial to inform program improvement.

    PubMed

    Orsini, Muhsin Michael; Wyrick, David L; Milroy, Jeffrey J

    2012-11-01

    Blending high-quality, rigorous research with pure evaluation practice can often be best accomplished through thoughtful collaboration. The evaluation of a high school drug prevention program (All Stars Senior) is an example of how perceived competing purposes and methodologies can coexist to investigate formative and summative outcome variables that can be used for program improvement. Throughout this project there were many examples of the client learning from the evaluator and the evaluator learning from the client. This article presents convincing evidence that collaborative evaluation can improve the design, implementation, and findings of a randomized controlled trial. We discuss many examples of good science, good evaluation, and other practical benefits of practicing collaborative evaluation. Ultimately, the authors created the term pre-formative evaluation to describe the period prior to data collection and program implementation, when collaborative evaluation can inform program improvement. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Development of a program logic model and evaluation plan for a participatory ergonomics intervention in construction.

    PubMed

    Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley

    2014-03-01

    Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. © 2013 Wiley Periodicals, Inc.

  13. Development of a Program Logic Model and Evaluation Plan for a Participatory Ergonomics Intervention in Construction

    PubMed Central

    Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley

    2013-01-01

    Background Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. Methods In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. Results The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Conclusions Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. PMID:24006097

  14. Redesigning and aligning assessment and evaluation for a federally funded math and science teacher educational program.

    PubMed

    Hardré, Patricia L; Slater, Janis; Nanny, Mark

    2010-11-01

    This paper examines the redesign of evaluation components for a teacher professional development project funded by the National Science Foundation. It focuses on aligning evaluation instrumentation and strategies with program goals, research goals and program evaluation best practices. The study identifies weaknesses in the original (year 1) program evaluation design and implementation, develops strategies and tracks changes for year 2 implementation, and then reports enhancement of findings and recommendations for year 3. It includes lessons learned about assessment and evaluation over the project lifespan, with implications for research and evaluation of a range of related programs. This study functions as a classic illustration of how critical it is to observe first principles of assessment and evaluation for funded programs, the risks that arise when they are ignored, and the benefits that accrue when they are systematically observed. Copyright (c) 2009. Published by Elsevier Ltd.

  15. Solar energy program evaluation: an introduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    deLeon, P.

    The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs: the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report first introduces the role and issues of evaluation, providing a set of issues with which to organize the subsequent sections detailing the national solar energy programs. These two themes are then integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)

  16. Conceptual framework for development of comprehensive e-health evaluation tool.

    PubMed

    Khoja, Shariq; Durrani, Hammad; Scott, Richard E; Sajwani, Afroz; Piryani, Usha

    2013-01-01

    The main objective of this study was to develop an e-health evaluation tool based on a conceptual framework incorporating relevant theories for evaluating the use of technology in health programs. This article presents the development of an evaluation framework for e-health programs. The study was divided into three stages: Stage 1 involved a detailed literature search of different theories and concepts on the evaluation of e-health, Stage 2 plotted e-health theories to identify relevant themes, and Stage 3 developed a matrix of evaluation themes and stages of e-health programs. The framework identifies and defines different stages of e-health programs and then applies evaluation theories to each of these stages for development of the evaluation tool. This framework builds on existing theories of health and technology evaluation and presents a conceptual basis for developing an e-health evaluation tool to examine and measure the different factors that play a definite role in the success of e-health programs. On its horizontal axis, the framework divides e-health programs into stages of implementation, while its vertical axis identifies different themes and areas of consideration for e-health evaluation. The framework helps in understanding various aspects of e-health programs and their impact that require evaluation at different stages of the life cycle. The study led to the development of a new and comprehensive e-health evaluation tool, named the Khoja-Durrani-Scott Framework for e-Health Evaluation.

  17. Moving the Needle on Program Quality: An Examination of the Organizational Characteristics That Drive Improvement in Expanded Learning Programs

    ERIC Educational Resources Information Center

    McCormick, Silvana

    2017-01-01

    Program evaluation can play a critical role in supporting high quality implementation of social programs to help them achieve their goal of social impact. Evaluation scholars have developed a wide range of strategies to help build programs' internal evaluation capacity in an effort to support meaningful use of evaluation. However, the evaluation…

  18. Evaluation of doctoral nursing programs in Japan by faculty members and their educational and research activities.

    PubMed

    Arimoto, Azusa; Gregg, Misuzu F; Nagata, Satoko; Miki, Yuko; Murashima, Sachiyo

    2012-07-01

    Evaluation of doctoral programs in nursing is becoming more important with the rapid increase in the number of such programs in Japan. This study aimed to have faculty members evaluate doctoral nursing programs and to analyze the relationship between their evaluations and their educational and research activities in Japan. The target settings were all 46 doctoral nursing programs. Eighty-five faculty members from 28 programs answered the questionnaire, which included 17 items for program evaluation, 12 items for faculty evaluation, 9 items for resource evaluation, 3 items for overall evaluation, and questions on educational and research activities. A majority gave low evaluations for sources of funding, the number of faculty members and support staff, and administrative systems. Faculty members who financially supported a greater number of students gave higher evaluations for extramural funding support, publication, provision of diverse learning experiences, time for supervision, and research infrastructure. The more time a faculty member spent advising doctoral students, the higher were their evaluations of the supportive learning environment, administrative systems, time for supervision, and timely feedback on students' research. The findings of this study indicate a need for improvement in research infrastructure, funding sources, and human resources to achieve quality nursing doctoral education in Japan. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Program Evaluation: The Board Game--An Interactive Learning Tool for Evaluators

    ERIC Educational Resources Information Center

    Febey, Karen; Coyne, Molly

    2007-01-01

    The field of program evaluation lacks interactive teaching tools. To address this pedagogical issue, the authors developed a collaborative learning technique called Program Evaluation: The Board Game. The authors present the game and its development in this practitioner-oriented article. The evaluation board game is an adaptable teaching tool…

  20. Program Theory and Quality Matter: Changing the Course of Extension Program Evaluation

    ERIC Educational Resources Information Center

    Arnold, Mary E.; Cater, Melissa

    2016-01-01

    As internal evaluators for the 4-H program in two states, we simultaneously yet independently began to change the way we approached our evaluation practices, turning from evaluation capacity building (ECB) efforts that prepared educators to define and measure program outcomes to strategies that engage educators in defining and measuring program…

  1. Program Evaluation of Community College Learning Assistance Centers: What Do LAC Directors Think?

    ERIC Educational Resources Information Center

    Franklin, Doug; Blankenberger, Bob

    2016-01-01

    Objective: This study seeks to determine the nature of current program evaluation practices for learning assistance centers (LACs), the practices being used for program evaluation, and whether LAC directors believe their practices are appropriate for evaluating program effectiveness. Method: We conducted a survey (n = 61) of community college LAC…

  2. Right timing in formative program evaluation.

    PubMed

    Hall, Jori; Freeman, Melissa; Roulston, Kathy

    2014-08-01

    Since many educational researchers and program developers have limited knowledge of formative evaluation, formative data may be underutilized during the development and implementation of an educational program. The purpose of this article is to explain how participatory, responsive, educative, and qualitative approaches to formative evaluation can facilitate a partnership between evaluators on the one hand and educational researchers and program managers on the other, generating data useful for informing program implementation and improvement. This partnership is critical, we argue, because it enables an awareness of when to take appropriate action, or "kairos," to ensure successful educational programs. To illustrate, we use examples from our own evaluation work to highlight how formative evaluation may facilitate opportune moments to (1) define the substance and purpose of a program, (2) develop understanding and awareness of the cultural interpretations of program participants, and (3) show the relevance of stakeholder experiences to program goals. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. A systematic review of evaluated suicide prevention programs targeting indigenous youth.

    PubMed

    Harlow, Alyssa F; Bohanna, India; Clough, Alan

    2014-01-01

    Indigenous young people have significantly higher suicide rates than their non-indigenous counterparts. There is a need for culturally appropriate and effective suicide prevention programs for this demographic. This review assesses suicide prevention programs that have been evaluated for indigenous youth in Australia, Canada, New Zealand, and the United States. The databases MEDLINE and PsycINFO were searched for publications on suicide prevention programs targeting indigenous youth that include reports on evaluations and outcomes. Program content, indigenous involvement, evaluation design, program implementation, and outcomes were assessed for each article. The search yielded 229 articles; 90 abstracts were assessed, and 11 articles describing nine programs were reviewed. Two Australian programs and seven American programs were included. Programs were culturally tailored, flexible, and incorporated multiple levels of prevention. No randomized controlled trials were found; many programs relied on ad hoc evaluations, provided poor program descriptions, and lacked process evaluation. Despite culturally appropriate content, the results of the review indicate that more controlled study designs using planned evaluations and valid outcome measures are needed in research on indigenous youth suicide prevention. Such changes may positively influence the future of this research, as the outcomes and efficacy reported will be more reliable.

  4. Using program evaluation to support knowledge translation in an interprofessional primary care team: a case study.

    PubMed

    Donnelly, Catherine; Shulha, Lyn; Klinger, Don; Letts, Lori

    2016-10-06

    Evaluation is a fundamental component in building quality primary care and is ideally situated to support individual, team, and organizational learning by offering an accessible form of participatory inquiry. The evaluation literature has begun to recognize the unique features of knowledge translation (KT) evaluations and has described attributes to consider when evaluating KT activities. While both disciplines have focused on the evaluation of KT activities, neither has explored the role of evaluation in KT itself. The purpose of the paper is to examine how participation in program evaluation can support KT in a primary care setting. A mixed methods case study design was used, in which evaluation was conceptualized as both a change process and an intervention. The study was conducted at a Memory Clinic in an interprofessional primary care clinic. An evaluation framework, Pathways of Influence, provided the theoretical foundation for understanding how program evaluation can facilitate the translation of knowledge at the level of the individual, the inter-personal (Memory Clinic team), and the organization. Data collection included questionnaires, interviews, an evaluation log, and document analysis. Questionnaires and interviews were administered both before and after the evaluation, and pattern matching was used to analyze the data based on predetermined propositions. Individuals gained program knowledge that resulted in changes to both individual and program practices. One of the key themes was the importance clinicians placed on local, program-based knowledge. The evaluation had less influence on the broader health organization. Program evaluation facilitated individual, team, and organizational learning. The use of evaluation to support KT is ideally suited to a primary care setting, offering relevant and applicable knowledge to primary care team members while remaining sensitive to local context.

  5. Practical strategies for nursing education program evaluation.

    PubMed

    Lewallen, Lynne Porter

    2015-01-01

    Self-evaluation is required for institutions of higher learning and the nursing programs within them. The literature provides information on evaluation models and instruments, and descriptions of how specific nursing education programs are evaluated. However, there are few discussions in the nursing education literature of the practical aspects of nursing education program evaluation: how to get started, how to keep track of data, who to involve in data collection, and how to manage challenging criteria. This article discusses the importance of program evaluation in the academic setting and provides information on practical ways to organize the evaluation process and aggregate data, and strategies for gathering data from students, graduates, alumni, and employers of graduates. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Using an Evaluability Assessment To Select Methods for Evaluating State Technology Development Programs: The Case of the Georgia Research Alliance.

    ERIC Educational Resources Information Center

    Youtie, Jan; Bozeman, Barry; Shapira, Philip

    1999-01-01

    Describes an evaluability assessment of the Georgia Research Alliance (GRA), a technology development program. Presents the steps involved in conducting an evaluability assessment, including development of an understanding of the program and its stakeholders. Analyzes and compares different methods by which the GRA could be evaluated. (SLD)

  7. Indian Economic Development: An Evaluation of EDA's Selected Indian Reservation Program. Volume II: Individual Reservation Reports, Appendices.

    ERIC Educational Resources Information Center

    Boise Cascade Center for Community Development, ID.

    As the appendices to an evaluation of the Economic Development Administration's (EDA) Selected Indian Reservation Program, this portion of the evaluation report presents individualized evaluations of each of the 16 reservations originally selected for the program in 1967. Each reservation evaluation is presented in terms of the following format:…

  8. Data Collection Methods for Evaluating Museum Programs and Exhibitions

    ERIC Educational Resources Information Center

    Nelson, Amy Crack; Cohn, Sarah

    2015-01-01

    Museums often evaluate various aspects of their audiences' experiences, be it what they learn from a program or how they react to an exhibition. Each museum program or exhibition has its own set of goals, which can drive what an evaluator studies and how an evaluation evolves. When designing an evaluation, data collection methods are purposefully…

  9. Using Evaluation to Guide and Validate Improvements to the Utah Master Naturalist Program

    ERIC Educational Resources Information Center

    Larese-Casanova, Mark

    2015-01-01

    Integrating evaluation into an Extension program offers multiple opportunities to understand program success through achieving program goals and objectives, delivering programming using the most effective techniques, and refining program audiences. It is less common that evaluation is used to guide and validate the effectiveness of program…

  10. FHWA Research and Technology Evaluation: Public-Private Partnership Capacity Building Program

    DOT National Transportation Integrated Search

    2018-02-01

    This report details the evaluation of the Federal Highway Administration's Office of Innovative Program Delivery Public-Private Partnership (P3) Capacity Building Program (P3 Program). The evaluators focused on the P3 Program's P3 Toolkit as an e...

  11. Teacher Education Program Evaluation: An Annotated Bibliography and Guide to Research.

    ERIC Educational Resources Information Center

    Ayers, Jerry B.; Berney, Mary F.

    This book includes an annotated bibliography of the essentials needed to conduct an effective evaluation of a teacher education program. Specific information on evaluation includes: (1) general evaluation techniques, (2) evaluation of candidates and students, (3) evaluation of the knowledge base, (4) quality controls, (5) evaluation of laboratory…

  12. The Use of Multiple Evaluation Approaches in Program Evaluation

    ERIC Educational Resources Information Center

    Bledsoe, Katrina L.; Graham, James A.

    2005-01-01

    The authors discuss the use of multiple evaluation approaches in conducting program evaluations. Specifically, they illustrate four evaluation approaches (theory-driven, consumer-based, empowerment, and inclusive evaluation) and briefly discuss a fifth (use-focused evaluation) as a side effect of the use of the others. The authors also address the…

  13. State Education Department: Security over Pupil Evaluation Program and Program Evaluation Test Materials Needs Improvement. Report 91-S-2.

    ERIC Educational Resources Information Center

    New York State Office of the Comptroller, Albany.

    Findings of an audit of the New York State Education Department's procedures to maintain security over Pupil Evaluation Program (PEP) and Program Evaluation Test (PET) examination materials are presented in this report. The audit sought to determine whether the department's security procedures adequately prevented unauthorized access to exam…

  14. Austin Independent School District Office of Program Evaluation Agenda 1998-99. Publication Number 98.01.

    ERIC Educational Resources Information Center

    Austin Independent School District, TX. Office of Program Evaluation.

    The Office of Program Evaluation (OPE) of the Austin Independent School District (Texas) (AISD) is charged with evaluating federally, locally, and state funded programs in the AISD. OPE staff carry out mandated reporting for federal and state grants and are increasingly involved in formative evaluations designed for program improvement and…

  15. How To Design a Program Evaluation. CSE Program Evaluation Kit, Volume 3. Second Edition.

    ERIC Educational Resources Information Center

    Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons

    The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, the third in the kit, discusses the logic underlying the use of quantitative research designs, including the pretest-posttest design, and supplies step-by-step procedures for setting up and interpreting the…

  16. Integrating Data Mining in Program Evaluation of K-12 Online Education

    ERIC Educational Resources Information Center

    Hung, Jui-Long; Hsu, Yu-Chang; Rice, Kerry

    2012-01-01

    This study investigated an innovative approach of program evaluation through analyses of student learning logs, demographic data, and end-of-course evaluation surveys in an online K-12 supplemental program. The results support the development of a program evaluation model for decision making on teaching and learning at the K-12 level. A case study…

  17. 45 CFR 2516.850 - What will the Corporation do to evaluate the overall success of the service-learning program?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... overall success of the service-learning program? 2516.850 Section 2516.850 Public Welfare Regulations...-LEARNING PROGRAMS Evaluation Requirements § 2516.850 What will the Corporation do to evaluate the overall success of the service-learning program? (a) The Corporation will conduct independent evaluations. These...

  18. 45 CFR 2516.850 - What will the Corporation do to evaluate the overall success of the service-learning program?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... overall success of the service-learning program? 2516.850 Section 2516.850 Public Welfare Regulations...-LEARNING PROGRAMS Evaluation Requirements § 2516.850 What will the Corporation do to evaluate the overall success of the service-learning program? (a) The Corporation will conduct independent evaluations. These...

  19. Evaluation of the impact of the drug evaluation and classification program on enforcement and adjudication

    DOT National Transportation Integrated Search

    1992-12-01

    This study examined the effect of the Drug Evaluation and Classification (DEC) Program on impaired driving (DWI) enforcement and adjudication. Drug Recognition Experts (DREs) in DEC programs evaluate suspects when drugs other than alcohol are suspect...

  20. Evaluation of ridesharing programs in Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulp, G.; Tsao, H.J.; Webber, R.E.

    1982-10-01

    The design, implementation, and results of a carpool and vanpool evaluation are described. Objectives of the evaluation were: to develop credible estimates of the energy savings attributable to the ridesharing program, to provide information for improving the performance of the ridesharing program, and to add to a general understanding of the ridesharing process. Previous evaluation work is critiqued and the research methodology adopted for this study is discussed. The ridesharing program in Michigan is described and the basis for selecting Michigan as the evaluation site is discussed. The evaluation methodology is presented, including research design, sampling procedure, data collection, and data validation. Evaluation results are analyzed. (LEW)

  1. The Mother and Infant Home Visiting Program Evaluation: Early Findings on the Maternal, Infant, and Early Childhood Home Visiting Program. A Report to Congress. OPRE Report 2015-11

    ERIC Educational Resources Information Center

    Michalopoulos, Charles; Lee, Helen; Duggan, Anne; Lundquist, Erika; Tso, Ada; Crowne, Sarah Shea; Burrell, Lori; Somers, Jennifer; Filene, Jill H.; Knox, Virginia

    2015-01-01

    "The Mother and Infant Home Visiting Program Evaluation: Early Findings on the Maternal, Infant, and Early Childhood Home Visiting Program--A Report to Congress" presents the first findings from the Mother and Infant Home Visiting Program Evaluation (MIHOPE), the legislatively mandated national evaluation of the Maternal, Infant, and…

  2. Evaluation: Review of the Past, Preview of the Future.

    ERIC Educational Resources Information Center

    Smith, M. F.

    1994-01-01

    This paper summarizes contributors' ideas about evaluation as a field and where it is going. Topics discussed include the qualitative versus quantitative debate; evaluation's purpose; professionalization; program failure; program development; evaluators as advocates; evaluation knowledge; evaluation expansion; and methodology and design. (SLD)

  3. A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso.

    PubMed

    D'Ostie-Racine, Léna; Dagenais, Christian; Ridde, Valéry

    2016-05-26

    Program evaluation is widely recognized in the international humanitarian sector as a means to make interventions and policies more evidence based, equitable, and accountable. Yet, little is known about the way humanitarian non-governmental organizations (NGOs) actually use evaluations. The current qualitative evaluation employed an instrumental case study design to examine evaluation use (EU) by a humanitarian NGO based in Burkina Faso. This organization developed an evaluation strategy in 2008 to document the implementation and effects of its maternal and child healthcare user fee exemption program. Program evaluations have been undertaken ever since, and the present study examined the discourses of evaluation partners in 2009 (n = 15) and 2011 (n = 17). Semi-structured individual interviews and one group interview were conducted to identify instances of EU over time. Alkin and Taut's (Stud Educ Eval 29:1-12, 2003) conceptualization of EU was used as the basis for thematic qualitative analyses of the different forms of EU identified by stakeholders of the exemption program in the two data collection periods. Results demonstrated that stakeholders began to understand and value the utility of program evaluations once they were exposed to evaluation findings and then progressively used evaluations over time. EU was manifested in a variety of ways, including instrumental and conceptual use of evaluation processes and findings, as well as the persuasive use of findings. Such EU supported planning, decision-making, program practices, evaluation capacity, and advocacy. The study sheds light on the many ways evaluations can be used by different actors in the humanitarian sector. Conceptualizations of EU are also critically discussed.

  4. An Overview of the National Nutrition Education and Training Program Evaluation.

    ERIC Educational Resources Information Center

    St. Pierre, Robert G.; Rezmovic, Victor

    1982-01-01

    Presents the organizing framework used in evaluating the National Nutrition Education and Training Program (NET) and summarizes the descriptive, evaluative, and meta-evaluative findings. Concludes that positive effects on children's nutrition-related knowledge have resulted from different nutrition education programs. (DC)

  5. An Evaluation Program for the Eckerd Foundation Therapeutic Wilderness Camping Program: An Evaluation of an Atypical Alternative Education Program.

    ERIC Educational Resources Information Center

    Griffin, William H.; Carter, James D.

    The strategy used in evaluating an out-of-doors resident camping program for emotionally disturbed children is outlined. This strategy calls for examining the following elements in the program: (1) program goals and objectives; (2) collection and processing program data; (3) camper progress assessment; (4) program audit; (5) assessment of past…

  6. Program Fair Evaluation--Summative Appraisal of Instructional Sequences with Dissimilar Objectives.

    ERIC Educational Resources Information Center

    Popham, W. James

    A comparative evaluation involving two instructional programs is given, although the approach can easily serve to compare more than two programs. The steps involved in conducting a program fair evaluation of two instructional programs are: (1) Identify objectives (a) common to both programs, (b) unique to one program, and (c) unique to the other…

  7. A merged model of quality improvement and evaluation: maximizing return on investment.

    PubMed

    Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C

    2013-11-01

    Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with QI's integration within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation: from an intimidating "report card" summative evaluation by external evaluators to an internally engaged, developmental evaluation focused on the program. The benefits of the merged model to both QI and evaluation are discussed. Evaluation-based logic models can help anchor a QI program in evidence-based practice and link processes and outputs with longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly in enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

  8. Continuous Evaluation in Ethics Education: A Case Study.

    PubMed

    McIntosh, Tristan; Higgs, Cory; Mumford, Michael; Connelly, Shane; DuBois, James

    2018-04-01

    A great need for systematic evaluation of ethics training programs exists. Those tasked with developing an ethics training program may be quick to dismiss the value of training evaluation in continuous process improvement. In the present effort, we use a case study approach to delineate how to leverage formative and summative evaluation measures to create a high-quality ethics education program. With regard to formative evaluation, information bearing on trainee reactions, qualitative data from the comments of trainees, in addition to empirical findings, can ensure that the training program operates smoothly. Regarding summative evaluation, measures examining trainee cognition, behavior, and organization-level results provide information about how much trainees have changed as a result of taking the ethics training. The implications of effective training program evaluation are discussed.

  9. Do State Pre-K Programs Improve Children's Pre-Literacy and Math Learning? Evaluation Science Brief

    ERIC Educational Resources Information Center

    National Forum on Early Childhood Program Evaluation, 2008

    2008-01-01

    "Evaluation Science Briefs" summarize the findings and implications of a recent study evaluating the effects of an early childhood program or environment. This Brief evaluates the study "An Effectiveness-Based Evaluation of Five State Pre-Kindergarten Programs" (V. C. Wong, T. D. Cook, W. S. Barnett, and K. Jung.) States have high aspirations for…

  10. The Value in Evaluating and Communicating Program Impact: The Ohio BR&E Program

    ERIC Educational Resources Information Center

    Davis, Gregory

    2012-01-01

    Assessing program impact can provide useful program evaluation data. It also provides a basis for program development, marketing, and justification. This article discusses recent impact evaluation efforts and findings of a long-time Extension program referred to as Business Retention and Expansion (BR&E). How such information can be…

  11. 76 FR 62813 - Pilot Program To Evaluate Proposed Proprietary Name Submissions; Public Meeting on Pilot Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ...] Pilot Program To Evaluate Proposed Proprietary Name Submissions; Public Meeting on Pilot Program Results... voluntary pilot program that enabled participating pharmaceutical firms to evaluate proposed proprietary... public meeting at the end of fiscal year 2011 to discuss the results of the pilot program, but the Agency...

  12. Evaluator competencies in the context of diversity training: The practitioners' point of view.

    PubMed

    Froncek, Benjamin; Mazziotta, Agostino; Piper, Verena; Rohmann, Anette

    2018-04-01

    Evaluator competencies have been discussed since the beginnings of program evaluation literature. More recently, the Essential Competencies for Program Evaluators (Ghere et al., 2006; Stevahn, King, Ghere & Minnema, 2005a) have proven to be a useful taxonomy for learning and improving evaluation practice. Evaluation is critical to diversity training activities, and diversity training providers face the challenge of conducting evaluations of their training programs. We explored what competencies are viewed as instrumental to conducting useful evaluations in this specific field of evaluation practice. In an online survey, N = 172 diversity training providers were interviewed via an open answer format about their perceptions of evaluator competencies, with n = 95 diversity training providers contributing statements. The Essential Competencies for Program Evaluators were used to conduct a deductive qualitative content analysis of the statements. While systematic inquiry, reflective practice, and interpersonal competence were well represented, situational analysis and project management were not. Implications are discussed for evaluation capacity building among diversity training providers and for negotiating evaluation projects with evaluation professionals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Evaluation of the Academic Occupational Program of the County of Leduc, From September, 1981 to June, 1984. Executive Summary.

    ERIC Educational Resources Information Center

    Nyberg, V. R.

    The paper reports results of an evaluation of an academic occupational program intended for educable mentally handicapped and learning disabled secondary students in Leduc, Alberta. An introduction reviews the history of the program and the evaluation process. The evaluation plan, based on R. Stake's model for program evaluation, is described, and sources of data…

  14. Human Services Program Evaluation: "How to Improve Your Accountability and Program Effectiveness"

    ERIC Educational Resources Information Center

    Barrett, Thomas; Sorensen, James

    2015-01-01

    The term "outcome evaluation" has become one of the most popular terms among human service providers and those whose job it is to evaluate the impact of human service programs. In the public sector alone, there are over a hundred instruments in use to evaluate the impact of state human service programs. Most states, many providers, and…

  15. Evaluative Thinking in Practice: The National Asthma Control Program.

    PubMed

    Fierro, Leslie A; Codd, Heather; Gill, Sarah; Pham, Phung K; Grandjean Targos, Piper T; Wilce, Maureen

    2018-01-01

    Although evaluative thinking lies at the heart of what we do as evaluators and what we hope to promote in others through our efforts to build evaluation capacity, researchers have given limited attention to measuring this concept. We undertook a research study to better understand how instances of evaluative thinking may present in practice-based settings, specifically within four state asthma control programs funded by the Centers for Disease Control and Prevention's National Asthma Control Program. Through content analyses of documents, as well as interviews and a subsequent focus group with the four state asthma control programs' evaluators and program managers, we identified and defined twenty-two indicators of evaluative thinking. Findings provide insights about what practitioners may wish to look for when they intend to build evaluative thinking and the types of data sources that may be more or less helpful in such efforts.

  16. A multidimensional evaluation of a nursing information-literacy program.

    PubMed Central

    Fox, L M; Richter, J M; White, N E

    1996-01-01

    The goal of an information-literacy program is to develop student skills in locating, evaluating, and applying information for use in critical thinking and problem solving. This paper describes a multidimensional evaluation process for determining nursing students' growth in cognitive and affective domains. Results indicate improvement in student skills as a result of a nursing information-literacy program. Multidimensional evaluation produces a well-rounded picture of student progress based on formal measurement as well as informal feedback. Developing new educational programs can be a time-consuming challenge. It is important, when expending so much effort, to ensure that the goals of the new program are achieved and benefits to students demonstrated. A multidimensional approach to evaluation can help to accomplish those ends. In 1988, The University of Northern Colorado School of Nursing began working with a librarian to integrate an information-literacy component, entitled Pathways to Information Literacy, into the curriculum. This article describes the program and discusses how a multidimensional evaluation process was used to assess program effectiveness. The evaluation process not only helped to measure the effectiveness of the program but also allowed the instructors to use several different approaches to evaluation. PMID:8826621

  17. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for the list-based functional language, Backus' FP, in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. FP eager programs are transformed into DFP lazy programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. DFP lazy programs have the property of always evaluating a sufficient and necessary result. The infinite sequence generator is used to demonstrate the eager-lazy program transformation and the execution of the lazy programs.
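    The demand-driven behavior the abstract describes can be sketched with an ordinary Python generator. This is only an illustrative analogue, not the paper's DFP language or transformation: an infinite sequence is defined eagerly in appearance, but elements are computed only when a consumer demands them, so the program evaluates exactly the "sufficient and necessary" prefix.

    ```python
    # Illustrative sketch (not from the paper): demand-driven evaluation
    # of an infinite sequence using Python's lazy iterators.
    from itertools import count, islice

    def naturals():
        # Infinite sequence generator: values are produced only on demand,
        # so defining it performs no computation over the whole sequence.
        return count(0)

    # Demanding the first five elements forces only those five to be
    # computed; the rest of the infinite sequence is never evaluated.
    first_five = list(islice(naturals(), 5))
    print(first_five)  # [0, 1, 2, 3, 4]
    ```

    The same pattern, producing values only in response to demands propagated from the consumer, is what the eager-to-lazy transformation in the abstract achieves within a data-driven execution model.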

  18. Iterative Evaluation in a Mobile Counseling and Testing Program to Reach People of Color at Risk for HIV--New Strategies Improve Program Acceptability, Effectiveness, and Evaluation Capabilities

    ERIC Educational Resources Information Center

    Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles

    2011-01-01

    This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's…

  19. An evaluation toolkit for Florida's Commuter Assistance Programs (CAP) : a companion to the 1999 CAP evaluation manual

    DOT National Transportation Integrated Search

    2001-01-01

    This manual is a companion piece to the Commuter Assistance Program Evaluation Manual that was developed to assist Florida's Commuter Assistance Programs (CAP) in their efforts to measure and evaluate their performance. This manual is intended to pro...

  20. Community outreach: from measuring the difference to making a difference with health information*

    PubMed Central

    Ottoson, Judith M.; Green, Lawrence W.

    2005-01-01

    Background: Community-based outreach seeks to move libraries beyond their traditional institutional boundaries to improve both access to and effectiveness of health information. The evaluation of such outreach needs to involve the community in assessing the program's process and outcomes. Purpose: Evaluation of community-based library outreach programs benefits from a participatory approach. To explain this premise of the paper, three components of evaluation theory are paired with relevant participatory strategies. Concepts: The first component of evaluation theory is also a standard of program evaluation: use. Evaluation is intended to be useful for stakeholders to make decisions. A useful evaluation is credible, timely, and of adequate scope. Participatory approaches to increase use of evaluation findings include engaging end users early in planning the program itself and in deciding on the outcomes of the evaluation. A second component of evaluation theory seeks to understand what is being evaluated, such as specific aspects of outreach programs. A transparent understanding of the ways outreach achieves intended goals, its activities and linkages, and the context in which it operates precedes any attempt to measure it. Participatory approaches to evaluating outreach include having end users, such as health practitioners in other community-based organizations, identify what components of the outreach program are most important to their work. A third component of evaluation theory is concerned with the process by which value is placed on outreach. What will count as outreach success or failure? Who decides? Participatory approaches to valuing include assuring end-user representation in the formulation of evaluation questions and in the interpretation of evaluation results. Conclusions: The evaluation of community-based outreach is a complex process that is not made easier by a participatory approach. 
Nevertheless, a participatory approach is more likely to make the evaluation findings useful, ensure that program knowledge is shared, and make outreach valuing transparent. PMID:16239958

  1. A Bereavement Support Program for Survivors of Cancer Deaths: A Description and Evaluation.

    ERIC Educational Resources Information Center

    Souter, Susan J.; Moore, Timothy E.

    1990-01-01

    Describes bereavement support program for survivors of cancer deaths developed by Riverdale Hospital in Toronto, Ontario. Presents detailed program evaluation which asked bereaved survivors who were program participants for one year to evaluate program aspects and facilitation of their grief by volunteers. Recommendations for expansion and…

  2. Title V Delinquency Prevention Program. Community Self-Evaluation Workbook.

    ERIC Educational Resources Information Center

    Caliber Associates, Fairfax, VA.

    This workbook is designed to help communities and program administrators assess the success of their Title V delinquency prevention programs, but it may serve as an evaluation tool for other prevention efforts as well. It provides information and resource aids on program planning, conducting evaluations, tracking programs, describing activities,…

  3. The Evaluation and Research of Multi-Project Programs: Program Component Analysis.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    1977-01-01

    It is difficult to base evaluations on concepts irrelevant to state policy making. Evaluation of a multiproject program requires both time and differentiation of method. Data from the California Early Childhood Program illustrate process variables for program component analysis, and research questions for intraprogram comparison. (CP)

  4. LEA Title VII Program Evaluations. Panel Presentations.

    ERIC Educational Resources Information Center

    Balu, Raj

    These panel presentations focus on LEA Title VII Program Evaluations. Raj Balu, an administrator of bilingual programs in Chicago presents information regarding the bilingual education program in the Chicago public schools, as well as information on Title VII programs and what kind of evaluation is being done. Jesus Salazar, who is currently…

  5. The NLM evaluation lecture series: introduction to the special section on evaluating health communication programs.

    PubMed

    Logan, Robert A; Kreps, Gary L

    2014-12-01

    This article introduces the Journal of Health Communication's special section, Evaluating Health Communication Programs. This special section is based on a public lecture series supported by the National Library of Medicine titled "Better Health: Evaluating Health Communication Programs" designed to share best practices for using evaluation research to develop, implement, refine, and institutionalize the best health communication programs for promoting public health. This introduction provides an overview of the series, summarizes the major presentations in the series, and describes implications from the series for translational health communication research, interventions, and programs that can enhance health outcomes.

  6. Tailored program evaluation: Past, present, future.

    PubMed

    Suggs, L Suzanne; Cowdery, Joan E; Carroll, Jennifer B

    2006-11-01

    This paper discusses measurement issues related to the evaluation of computer-tailored health behavior change programs. As the first generation of commercially available tailored products is utilized in health promotion programming, programmers and researchers are becoming aware of the unique challenges that the evaluation of these programs presents. A project is presented that used an online tailored health behavior assessment (HBA) in a worksite setting. Process and outcome evaluation methods are described and include the challenges faced, and strategies proposed and implemented, for meeting them. Implications for future research in tailored program development, implementation, and evaluation are also discussed.

  7. Framework and criteria for program evaluation in the Office of Conservation and Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This study addresses the development of a framework and generic criteria for conducting program evaluation in the Office of Conservation and Renewable Energy. The evaluation process is intended to provide the Assistant Secretary with comprehensive and consistent evaluation data for management decisions regarding policy and strategy, crosscutting energy impacts, and resource allocation and justification. The study defines evaluation objectives, identifies basic information requirements (criteria), and identifies a process for collecting evaluation results at the basic program level, integrating the results, and summarizing information upward through the CE organization to the Assistant Secretary. Methods are described by which initial criteria were tested, analyzed, and refined for CE program applicability. General guidelines pertaining to evaluation and the Sunset Review requirements are examined, and various types, designs, and models for evaluation are identified. Existing CE evaluation reports are reviewed and comments on their adequacy for meeting current needs are provided. An inventory and status survey of CE program evaluation activities is presented, as are issues, findings, and recommendations pertaining to CE evaluation and Sunset Review requirements. Also, sources of data for use in evaluation and the Sunset Review response are identified. An inventory of CE evaluation-related documents and reports is provided.

  8. Achievement Testing of Disadvantaged and Minority Students for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Wargo, Michael J., Ed.; Green, Donald Ross, Ed.

    The following papers were delivered: Introductory Remarks, John W. Evans; An Evaluator's Perspective, Michael J. Wargo; Problems of Achievement Tests in Program Evaluation, Donald Ross Green; Diverse Human Populations and Problems in Educational Program Evaluation via Achievement Testing, Edmund W. Gordon; Critical Issues in Achievement Testing of…

  9. Evaluating Federal Social Programs: An Uncertain Act.

    ERIC Educational Resources Information Center

    Levitan, Sar A.; Wurzburg, Gregory K.

    This study of the federal government's evaluation of social programs indicates that it is virtually impossible to establish a bias-free, valid, and reliable system of inquiry to determine the effects of social programs. Divided into five chapters, the document examines the aspirations and limitations of evaluations, methodology, evaluation in the…

  10. Issues Involved in the Evaluation of Gifted Programmes.

    ERIC Educational Resources Information Center

    Ali, A. Sidiq

    2001-01-01

    This article discusses the purpose of gifted program evaluation and identifies some of the pertinent issues involved in evaluating gifted programs. The gifted program delivered by the Peel District Board of Education in Ontario, Canada, is used to illustrate key evaluation considerations, including gifted identification procedures and gifted…

  11. Improve the Quality of Teaching in Your Schools.

    ERIC Educational Resources Information Center

    Greene, Brenda Z.

    1985-01-01

    Teacher quality can be improved through teacher evaluation, intervention programs, incentives or rewards, and counseling. In the Toledo, Ohio, peer evaluation program, evaluation and staff development go hand in hand. The program was developed through a collaborative and cooperative process and uses teacher consultants to evaluate and supervise…

  12. Program Evaluation: Two Management-Oriented Samples

    ERIC Educational Resources Information Center

    Alford, Kenneth Ray

    2010-01-01

    Two Management-Oriented Samples details two examples of the management-oriented approach to program evaluation. Kenneth Alford, a doctorate candidate at the University of the Cumberlands, details two separate program evaluations conducted in his school district and seeks to compare and contrast the two evaluations based upon the characteristics of…

  13. Evaluation Processes Used to Assess the Effectiveness of Vocational-Technical Programs.

    ERIC Educational Resources Information Center

    Bruhns, Arthur E.

    Evaluation may be quantitative or qualitative, with the criteria determined by or given to the student. The criteria show how close the student has come to the program's objectives and the ranking of individual performance. Vocational education programs susceptible to evaluation are listed and relevant evaluative techniques discussed. Graduate interviews concerning…

  14. The Implementation of Program Evaluation Recommendations in Wisconsin Technical Colleges.

    ERIC Educational Resources Information Center

    Ruhland, Sheila K.

    Implementation of program evaluation recommendations should persuade people that the rewards of an evaluation outweigh the reasons for resistance. A study was undertaken with the following purposes: identify facilitators and barriers to the implementation of program evaluation; determine the proportion of recommendations made in each of the nine…

  15. Practical Considerations in Evaluating Patient/Consumer Health Education Programs.

    ERIC Educational Resources Information Center

    Bryant, Nancy H.

    This report contains brief descriptions of seven evaluative efforts and outcomes of health education programs, some considerations of problems encountered in evaluating the programs, and detailed descriptions of two case studies: (1) a process evaluation of preoperative teaching and (2) a retrospective study of visiting nurse association use by…

  16. Experiential Education and Empowerment Evaluation: Mars Rover Educational Program Case Example.

    ERIC Educational Resources Information Center

    Fetterman, David; Bowman, Cassie

    2002-01-01

    Empowerment evaluation helps people improve their programs using self-evaluation. Empowerment evaluation has three steps: establishing a mission; taking stock of the most significant activities; and planning for the future by establishing goals, strategies, and criteria for evidence. A NASA experiential program for small, distributed groups of…

  17. Evaluación: ¿En qué consiste y por qué se lleva a cabo? (Evaluation: What Does It Consist of, and for What Purpose?).

    ERIC Educational Resources Information Center

    Austin Independent School District, TX. Office of Research and Evaluation.

    A guide is presented for the evaluation of the bilingual programs in the Austin, Texas, Independent School District. The reasons for an evaluation and a definition of program objectives and evaluation instruments are given. The program components, objectives and evaluation instruments for each grade level (K-4) are listed. The components involved…

  18. Compulsory Project-Level Involvement and the Use of Program-Level Evaluations: Evaluating the Local Systemic Change for Teacher Enhancement Program

    ERIC Educational Resources Information Center

    Johnson, Kelli; Weiss, Iris R.

    2011-01-01

    In 1995, the National Science Foundation (NSF) contracted with principal investigator Iris Weiss and an evaluation team at Horizon Research, Inc. (HRI) to conduct a national evaluation of the Local Systemic Change for Teacher Enhancement program (LSC). HRI conducted the core evaluation under a $6.25 million contract with NSF. This program…

  19. Special Education Program Evaluation: A Planning Guide. An Overview. CASE Commissioned Series.

    ERIC Educational Resources Information Center

    McLaughlin, John A.

    This resource guide is intended to help in planning special education program evaluations. It focuses on: basic evaluation concepts, identification of special education decision makers and their information needs, specific evaluation questions, procedures for gathering relevant information, and evaluation of the evaluation process itself.…

  20. An Interprofessional Program Evaluation Case Study: Utilizing Multiple Measures To Assess What Matters. AIR 1997 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Delaney, Anne Marie

    This paper reviews the first two years of a model program-evaluation case study which is intended to show: (1) how program evaluation can contribute to academic and professional degree programs; (2) how qualitative and quantitative techniques can be used to produce reliable measures for evaluation studies; and (3) how the role of the institutional…

  1. Exploring the Suitability of an English for Health Sciences Program: Model and Report of a Self-Evaluation Process

    ERIC Educational Resources Information Center

    Ferrer, Erica; Pérez, Yuddy

    2017-01-01

    Program evaluation is a process of carefully collecting information in order to make informed decisions to strengthen specific components of a given program. The type of evaluation an institution decides to undertake depends on the purpose as well as on the information the institution wants to find out about its program. Self-evaluation represents…

  2. Resident Evaluation and Remediation: A Comprehensive Approach

    PubMed Central

    Wu, Jim S.; Siewert, Bettina; Boiselle, Phillip M.

    2010-01-01

    Background A comprehensive evaluation and remediation program is an essential component of any residency program. The evaluation system should identify problems accurately and early and allow residents with problems to be assigned to a remediation program that effectively deals with them. Elements of a proactive remediation program include a process for outlining deficiencies, providing resources for improvement, communicating clear goals for acceptable performance, and reevaluating performance against these goals. Intervention In recognition of the importance of early detection and prompt remediation of the struggling resident, we sought to develop a multifaceted approach to resident evaluation with the aim of early identification and prompt remediation of difficulties. This article describes our comprehensive evaluation program and remediation program, which uses resources within our radiology department and institutional graduate medical education office. Discussion An effective evaluation system should identify problems accurately and early, whereas a proactive remediation program should effectively deal with issues once they are identified. PMID:21975628

  3. Summary of Program Evaluation Results: 1985-1986 School Year Pre-Kindergarten Educational Program.

    ERIC Educational Resources Information Center

    Heath, Robert W.; And Others

    Reported are findings of the 1985-86 program evaluation of the prenatal-to-preschool and preschool programs operating under the auspices of the Kamehameha Schools/Bishop Estate. Evaluation of the prenatal-to-preschool program (the Kupulani Program) included item analysis of the Questions about Pregnancy Test, development of a revised data…

  4. Integrating Program Theory and Systems-Based Procedures in Program Evaluation: A Dynamic Approach to Evaluate Educational Programs

    ERIC Educational Resources Information Center

    Grammatikopoulos, Vasilis

    2012-01-01

    The current study attempts to integrate parts of program theory and systems-based procedures in educational program evaluation. The educational program that was implemented, called the "Early Steps" project, proposed that physical education can contribute to various educational goals apart from the usual motor skills improvement. Basic…

  5. Non-Formal Educator Use of Evaluation Results

    ERIC Educational Resources Information Center

    Baughman, Sarah; Boyd, Heather H.; Franz, Nancy K.

    2012-01-01

    Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational…

  6. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM: PROGRESS AND ACCOMPLISHMENTS - FISCAL YEAR 1991

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) program was the first major program for demonstrating and evaluating full-scale innovative treatment technologies at hazardous waste sites. Having concluded its fifth year, the SITE program is recognized as a leading advocate ...

  7. Legislative Evaluation.

    ERIC Educational Resources Information Center

    Fox, Harrison

    The speaker discusses Congressional program evaluation. From the Congressional perspective, good evaluators understand the political, social, and economic processes; are familiar with various evaluation methods; and know how to use authority and power within their roles. Program evaluation serves three major purposes: to anticipate social impact…

  8. SOAR 89: Space Station. Space suit test program

    NASA Technical Reports Server (NTRS)

    Kosmo, Joseph J.; West, Philip; Rouen, Michael

    1990-01-01

    The elements of the test program for the space suit to be used on Space Station Freedom are noted in viewgraph form. Information is given on evaluation objectives, zero gravity evaluation, mobility evaluation, extravehicular activity task evaluation, and shoulder joint evaluation.

  9. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.

    PubMed

    Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele

    2008-10-01

    The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.

  10. Procedures for Comparing Instructional Programs.

    ERIC Educational Resources Information Center

    Klein, Stephen

    This paper examines comparative educational program evaluation. Suggested evaluative criteria and evaluation techniques and their weaknesses are discussed. An evaluation formula is proposed, and an example of its operation is provided. (DG)

  11. Flight program language requirements. Volume 2: Requirements and evaluations

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.

  12. Reaching Your Program Goals: The Secret to a Successful Relationship with Your Evaluator

    NASA Astrophysics Data System (ADS)

    Warburton, J.; Crowley, S.; Larson, A.

    2012-12-01

    PolarTREC (Teachers and Researchers Exploring and Collaborating) is a National Science Foundation (NSF) funded program in which K-12 teachers participate in hands-on field research experiences in the Polar Regions. PolarTREC has become a leader in the evaluation of teacher research experiences (TREs) and offers a model of outside evaluation for education and public outreach. Programs such as TREs offer a direct package of outreach opportunities for effective broader impacts. This presentation offers a model for the evaluative process, beginning with project design and goals that involve an outside evaluator and provide funders with an outlined scope of projects as well as outcomes and products for use by the public to advance scientific understanding. Outcomes are determined before the program is developed to ensure that all components strive for program efficacy. Formative and summative evaluations for all audiences ensure robust program reports via surveys, knowledge assessments, conference calls, outreach implementation plans for multiple diverse audiences, and in-depth case studies with teachers back in their classrooms. PolarTREC has become part of a larger, international working group of evaluators and program managers focused on program assessment. As part of this working group, sharing best practices for effective evaluation to better support science efforts, inform funders, and communicate with the public has been integral to the evolution and advancement of our successful evaluation model.

  13. Process Evaluation for Improving K12 Program Effectiveness: Case Study of a National Institutes of Health Building Interdisciplinary Research Careers in Women's Health Research Career Development Program.

    PubMed

    Raymond, Nancy C; Wyman, Jean F; Dighe, Satlaj; Harwood, Eileen M; Hang, Mikow

    2018-06-01

    Process evaluation is an important tool in quality improvement efforts. This article illustrates how a systematic and continuous evaluation process can be used to improve the quality of faculty career development programs by using the University of Minnesota's Building Interdisciplinary Research Careers in Women's Health (BIRCWH) K12 program as an exemplar. Data from a rigorous process evaluation incorporating quantitative and qualitative measurements were analyzed and reviewed by the BIRCWH program leadership on a regular basis. Examples are provided of how this evaluation model and processes were used to improve many aspects of the program, thereby improving scholar, mentor, and advisory committee members' satisfaction and scholar outcomes. A rigorous evaluation plan can increase the effectiveness and impact of a research career development plan.

  14. The Cost-Income Component of Program Evaluation.

    ERIC Educational Resources Information Center

    Miner, Norris

    Cost-income studies are designed to serve two functions in instructional program evaluation. First, they act as the indicator of the economic value of a program. This economic value, in conjunction with the other educational values needed in program evaluation, allows for the most realistic appraisal of program worth. Second, if the studies show a…

  15. Program Theory Evaluation: Logic Analysis

    ERIC Educational Resources Information Center

    Brousselle, Astrid; Champagne, Francois

    2011-01-01

    Program theory evaluation, which has grown in use over the past 10 years, assesses whether a program is designed in such a way that it can achieve its intended outcomes. This article describes a particular type of program theory evaluation--logic analysis--that allows us to test the plausibility of a program's theory using scientific knowledge.…

  16. Better Crunching: Recommendations for Multivariate Data Analysis Approaches for Program Impact Evaluations

    ERIC Educational Resources Information Center

    Braverman, Marc T.

    2016-01-01

    Extension program evaluations often present opportunities to analyze data in multiple ways. This article suggests that program evaluations can involve more sophisticated data analysis approaches than are often used. On the basis of a hypothetical program scenario and corresponding data set, two approaches to testing for evidence of program impact…

  17. Testing of a Program Evaluation Model: Final Report.

    ERIC Educational Resources Information Center

    Nagler, Phyllis J.; Marson, Arthur A.

    A program evaluation model developed by Moraine Park Technical Institute (MPTI) is described in this report. Following background material, the four main evaluation criteria employed in the model are identified as program quality, program relevance to community needs, program impact on MPTI, and the transition and growth of MPTI graduates in the…

  18. The Houston Community College Eligible Legalized Alien Program. Evaluation Report.

    ERIC Educational Resources Information Center

    Seaman, Don F.; Cuellar, Sylvia

    The Houston Community College (Texas) program (TOTAL ACCESS), designed in response to the Immigration Reform and Control Act of 1986, is described and evaluated. The program offers classes to eligible aliens (97% Hispanic Americans from Mexico, El Salvador, and Guatemala) wishing to pursue the educational program required for legalization. Program…

  19. Using evaluation methods to guide the development of a tobacco-use prevention curriculum for youth: a case study.

    PubMed

    Bridge, P D; Gallagher, R E; Berry-Bobovski, L C

    2000-01-01

    Fundamental to the development of educational programs and curricula is the evaluation of processes and outcomes. Unfortunately, many otherwise well-designed programs do not incorporate stringent evaluation methods and are limited in measuring program development and effectiveness. Using an advertising lesson in a school-based tobacco-use prevention curriculum as a case study, the authors examine the role of evaluation in the development, implementation, and enhancement of the curricular lesson. A four-phase formative and summative evaluation design was developed to divide the program-evaluation continuum into a structured process that would aid in the management of the evaluation, as well as assess curricular components. Formative and summative evaluation can provide important guidance in the development, implementation, and enhancement of educational curricula. Evaluation strategies identified unexpected barriers and allowed the project team to make necessary "time-relevant" curricular adjustments during each stage of the process.

  20. Design and Operation of the Transformed National Healthy Start Evaluation.

    PubMed

    Banks, Jamelle E; Dwyer, Maura; Hirai, Ashley; Ghandour, Reem M; Atrash, Hani K

    2017-12-01

    Purpose: Improving pregnancy outcomes for women and children is one of the nation's top priorities. The Healthy Start (HS) program was created to address factors that contribute to high infant mortality rates (IMRs) and persistent disparities in IMRs. The program began in 1991 and was transformed in 2014 to apply lessons from emerging research, past evaluation findings, and expert recommendations. To understand the implementation and impact of the transformed program, there is a need for a robust and comprehensive evaluation. Description: The national HS evaluation will include an implementation evaluation, which will describe program components that affect outcomes; a utilization evaluation, which will examine the characteristics of women and infants who did and did not utilize the program; and an outcome evaluation, which will assess the program's effectiveness with regard to producing expected outcomes among the target population. Data sources include the National HS Program Survey, a HS participant survey, and individual-level program data linked to vital records and the Pregnancy Risk Assessment Monitoring System (PRAMS) survey. Assessment: Descriptive analyses will be used to examine differences in risk profiles between participants and non-participants, as well as to calculate penetration rates for high-risk women in respective service areas. Multivariable analyses will be used to determine the impact of the program on key outcomes and will explore variation by dose, type of services received, and grantee characteristics. Conclusion: Evaluation findings are expected to inform program decisions and direction, including identification of effective program components that can be spread and scaled.

  1. Language Program Evaluation

    ERIC Educational Resources Information Center

    Norris, John M.

    2016-01-01

    Language program evaluation is a pragmatic mode of inquiry that illuminates the complex nature of language-related interventions of various kinds, the factors that foster or constrain them, and the consequences that ensue. Program evaluation enables a variety of evidence-based decisions and actions, from designing programs and implementing…

  2. From Schools to Community Learning Centers: A Program Evaluation of a School Reform Process

    ERIC Educational Resources Information Center

    Magolda, Peter; Ebben, Kelsey

    2007-01-01

    This manuscript reports on a program evaluation of a school reform initiative conducted in an Ohio city. The paper describes, interprets, and evaluates this reform process aimed at transforming schools into community learning centers. The manuscript also describes and analyzes the initiative's program evaluation process. Elliot Eisner's [(1998).…

  3. Evaluating public participation in environmental decision-making: EPA's superfund community involvement program.

    Treesearch

    Susan Charnley; Bruce Engelbert

    2005-01-01

    This article discusses an 8-year, ongoing project that evaluates the Environmental Protection Agency's Superfund community involvement program. The project originated as a response to the Government Performance and Results Act, which requires federal agencies to articulate program goals, and evaluate and report their progress in meeting those goals. The evaluation...

  4. Formative Evaluation in a Cognitive Developmental Program for Young Children.

    ERIC Educational Resources Information Center

    Willis, Sherry L.; And Others

    The Cognitive Developmental Early Childhood program at Pennsylvania State University has formulated a program, curriculum, and evaluation system guided by Piagetian theory. The evaluation system described in this paper is a result of an intensive examination of the purposes and types of evaluation systems for their compatibility with the Piagetian…

  5. The Do's and Don'ts in Regard to the Evaluation of Bilingual Programs.

    ERIC Educational Resources Information Center

    Rodriquez-Brown, Flora V.

    The difficulties in evaluating bilingual education appear to have prevented success in all but a few evaluation attempts, but better and more meaningful evaluation is necessary in order to identify the strengths and weaknesses of bilingual programs. Many bilingual programs are undergoing constant modification, adequate assessment instruments have…

  6. Evaluating Federal Social Programs: Finding out What Works and What Does Not

    ERIC Educational Resources Information Center

    Muhlhausen, David B.

    2012-01-01

    Federal social programs are rarely evaluated to determine whether they are actually accomplishing their intended purposes. As part of its obligation to spend taxpayers' dollars wisely, Congress should mandate that experimental evaluations of every federal social program be conducted. The evaluations should be large-scale, multisite studies to…

  7. What Have We Learned about the Politics of Program Evaluation?

    ERIC Educational Resources Information Center

    Chelimsky, Eleanor

    The politics of program evaluation are discussed from the personal perspective of the Director of the General Accounting Office's Program Evaluation and Methodology Division, which has produced reports for committees of the United States Congress. It is concluded that successful evaluations must be useful to others and must understand the…

  8. Adapting the Empowerment Evaluation Model: A Mental Health Drop-In Center Case Example

    ERIC Educational Resources Information Center

    Sullins, Carolyn D.

    2003-01-01

    Empowerment evaluation involves a program's stakeholders in designing and implementing an evaluation of their own program, thus contributing to the program's improvement and self-determination (Fetterman, 1994, 1996). It appeared to be an appropriate approach for evaluating a mental health drop-in center, which had congruent goals of collaboration…

  9. Formative Evaluation of the Adult Learning, Literacy and Essential Skills Program. Final Report

    ERIC Educational Resources Information Center

    Human Resources and Skills Development Canada, 2010

    2010-01-01

    This report presents the results of the formative evaluation of the Adult Learning, Literacy and Essential Skills Program (ALLESP). Data collection related to this evaluation took place between November 2008 and May 2009. The evaluation resulted in the following four recommendations: (1) It is recommended that Program objectives and activities…

  10. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 7 (Agriculture), Child Nutrition Programs, National School Lunch Program, Additional Provisions, § 210.29 Management evaluations: (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State…

  11. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 7 (Agriculture), Child Nutrition Programs, National School Lunch Program, Additional Provisions, § 210.29 Management evaluations: (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State…

  12. Purposive Facebook Recruitment Endows Cost-Effective Nutrition Education Program Evaluation

    PubMed Central

    Wamboldt, Patricia

    2013-01-01

    Background Recent legislation established a requirement for nutrition education in federal assistance programs to be evidence-based. Recruitment of low-income persons to participate and evaluate nutrition education activities can be challenging and costly. Facebook has been shown to be a cost-effective strategy to recruit this target audience to a nutrition program. Objective The purpose of our study was to examine Facebook as a strategy to recruit participants, especially Supplemental Nutrition Assistance Program Education (SNAP-Ed) eligible persons, to view and evaluate an online nutrition education program intended to be offered as having some evidence base for SNAP-Ed programming. Methods English-speaking, low-income Pennsylvania residents, 18-55 years with key profile words (eg, Supplemental Nutrition Assistance Program, Food bank), responded to a Facebook ad inviting participation in either Eating Together as a Family is Worth It (WI) or Everyone Needs Folic Acid (FA). Participants completed an online survey on food-related behaviors, viewed a nutrition education program, and completed a program evaluation. Facebook set-up functions considered were costing action, daily spending cap, and population reach. Results Respondents for both WI and FA evaluations were similar; the majority were white, <40 years, overweight or obese body mass index, and not eating competent. A total of 807 Facebook users clicked on the WI ad with 73 unique site visitors and 47 of them completing the program evaluation (ie, 47/807, 5.8% of clickers and 47/73, 64% of site visitors completed the evaluation). Cost per completed evaluation was US $25.48; cost per low-income completer was US $39.92. Results were similar for the FA evaluation; 795 Facebook users clicked on the ad with 110 unique site visitors, and 73 completing the evaluation (ie, 73/795, 9.2% of ad clickers and 73/110, 66% of site visitors completed the evaluation). Cost per valid completed survey with program evaluation was US $18.88; cost per low-income completer was US $27.53. Conclusions With Facebook we successfully recruited low-income Pennsylvanians to online nutrition program evaluations. Benefits using Facebook as a recruitment strategy included real-time recruitment management with lower costs and more efficiency compared to previous data from traditional research recruitment strategies reported in the literature. Limitations prompted by repeated survey attempts need to be addressed to optimize this recruitment strategy. PMID:23948573
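    The recruitment funnel percentages reported in the abstract above can be checked directly from the raw counts it gives. A minimal sketch (all figures taken from the abstract; the helper function name is illustrative, not from the study):

    ```python
    # Recruitment funnel figures reported in the Facebook recruitment abstract above.
    # WI = "Eating Together as a Family is Worth It"; FA = "Everyone Needs Folic Acid".
    def funnel_rates(clickers, visitors, completers):
        """Return (completers/clickers, completers/visitors) as percentages."""
        return (100 * completers / clickers, 100 * completers / visitors)

    wi = funnel_rates(clickers=807, visitors=73, completers=47)
    fa = funnel_rates(clickers=795, visitors=110, completers=73)

    print(f"WI: {wi[0]:.1f}% of ad clickers, {wi[1]:.0f}% of site visitors completed")
    # WI: 5.8% of ad clickers, 64% of site visitors completed
    print(f"FA: {fa[0]:.1f}% of ad clickers, {fa[1]:.0f}% of site visitors completed")
    # FA: 9.2% of ad clickers, 66% of site visitors completed
    ```

    The computed rates match the percentages stated in the abstract (5.8%/64% for WI, 9.2%/66% for FA).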

  13. Purposive facebook recruitment endows cost-effective nutrition education program evaluation.

    PubMed

    Lohse, Barbara; Wamboldt, Patricia

    2013-08-15

    Recent legislation established a requirement for nutrition education in federal assistance programs to be evidence-based. Recruitment of low-income persons to participate and evaluate nutrition education activities can be challenging and costly. Facebook has been shown to be a cost-effective strategy to recruit this target audience to a nutrition program. The purpose of our study was to examine Facebook as a strategy to recruit participants, especially Supplemental Nutrition Assistance Program Education (SNAP-Ed) eligible persons, to view and evaluate an online nutrition education program intended to be offered as having some evidence base for SNAP-Ed programming. English-speaking, low-income Pennsylvania residents, 18-55 years with key profile words (eg, Supplemental Nutrition Assistance Program, Food bank), responded to a Facebook ad inviting participation in either Eating Together as a Family is Worth It (WI) or Everyone Needs Folic Acid (FA). Participants completed an online survey on food-related behaviors, viewed a nutrition education program, and completed a program evaluation. Facebook set-up functions considered were costing action, daily spending cap, and population reach. Respondents for both WI and FA evaluations were similar; the majority were white, <40 years, overweight or obese body mass index, and not eating competent. A total of 807 Facebook users clicked on the WI ad with 73 unique site visitors and 47 of them completing the program evaluation (ie, 47/807, 5.8% of clickers and 47/73, 64% of site visitors completed the evaluation). Cost per completed evaluation was US $25.48; cost per low-income completer was US $39.92. Results were similar for the FA evaluation; 795 Facebook users clicked on the ad with 110 unique site visitors, and 73 completing the evaluation (ie, 73/795, 9.2% of ad clickers and 73/110, 66% of site visitors completed the evaluation). Cost per valid completed survey with program evaluation was US $18.88; cost per low-income completer was US $27.53. With Facebook we successfully recruited low-income Pennsylvanians to online nutrition program evaluations. Benefits using Facebook as a recruitment strategy included real-time recruitment management with lower costs and more efficiency compared to previous data from traditional research recruitment strategies reported in the literature. Limitations prompted by repeated survey attempts need to be addressed to optimize this recruitment strategy.

  14. Building an Evidence Base to Inform Interventions for Pregnant and Parenting Adolescents: A Call for Rigorous Evaluation

    PubMed Central

    Burrus, Barri B.; Scott, Alicia Richmond

    2012-01-01

    Adolescent parents and their children are at increased risk for adverse short- and long-term health and social outcomes. Effective interventions are needed to support these young families. We studied the evidence base and found a dearth of rigorously evaluated programs. Strategies from successful interventions are needed to inform both intervention design and policies affecting these adolescents. The lack of rigorous evaluations may be attributable to inadequate emphasis on and sufficient funding for evaluation, as well as to challenges encountered by program evaluators working with this population. More rigorous program evaluations are urgently needed to provide scientifically sound guidance for programming and policy decisions. Evaluation lessons learned have implications for other vulnerable populations. PMID:22897541

  15. Alternative Designs for Evaluating Workplace Literacy Programs. Conference Proceedings and Commissioned Papers at the "Design Guidance for Evaluating Workplace Literacy Programs" Work Group Conference (Washington, D.C. April 13, 1993).

    ERIC Educational Resources Information Center

    Research Triangle Inst., Research Triangle Park, NC.

    This document contains the five papers presented at a meeting at which key issues in evaluating workplace literacy programs were discussed. In "Key Components of Workplace Literacy Projects and Definitions of Project 'Modules,'" Judith A. Alamprese describes the context for evaluating the National Extension Program, components of workplace literacy…

  16. Multimethod Strategy for Assessing Program Fidelity: The National Evaluation of the Revised G.R.E.A.T. Program

    ERIC Educational Resources Information Center

    Esbensen, Finn-Aage; Matsuda, Kristy N.; Taylor, Terrance J.; Peterson, Dana

    2011-01-01

    This study reports the results of the process evaluation component of the Process and Outcome Evaluation of the Gang Resistance Education and Training (G.R.E.A.T.) program. The process evaluation consisted of multiple methods to assess program fidelity: (a) observations of G.R.E.A.T. Officer Trainings (G.O.T); (b) surveys and interviews of…

  17. Phoenix Rising: use of a participatory approach to evaluate a federally funded HIV, hepatitis and substance abuse prevention program.

    PubMed

    Dryden, Eileen; Hyde, Justeen; Livny, Ayala; Tula, Monique

    2010-11-01

    This paper highlights the value of utilizing a participatory evaluation approach when working with community agencies receiving federal funding for prevention and intervention services. Drawing from our experience as evaluators of a SAMHSA-funded substance abuse, HIV and Hepatitis prevention program targeting homeless young adults, we describe the importance of and strategies for creating a participatory evaluation partnership with program implementers. By participatory evaluation we mean the active involvement of program implementers in defining the evaluation, developing instruments, collecting data, discussing findings, and disseminating results. There are a number of challenges faced when using this approach with federally funded programs that require the use of standardized measurement tools and data collection procedures. Strategies we used to strike a balance between federal requirements and local needs are presented. By increasing the understanding of and participation in the evaluation process, program implementers have greater support for data collection requirements and are appreciably more interested in learning from the evaluation data. This approach has helped to build the capacity of a program and stimulated new possibilities for learning, growing, and ultimately improving the services offered to those the program strives to reach. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  18. Potential pros and cons of external healthcare performance evaluation systems: real-life perspectives on Iranian hospital evaluation and accreditation program.

    PubMed

    Jaafaripooyan, Ebrahim

    2014-09-01

    Performance evaluation is essential to quality improvement in healthcare. The current study identified the potential pros and cons of external healthcare evaluation programs and then used them to examine the merits of a similar program in a developing country. A mixed methods study employing both qualitative and quantitative data collection and analysis techniques was adopted to achieve the study's aim. Subject Matter Experts (SMEs) and professionals were approached for a two-stage process of data collection. Potential advantages included greater attractiveness of high accreditation rank healthcare organizations to their customers/purchasers and boosted morale of their personnel. Downsides comprised the programs' over-reliance on the value judgment of surveyors, routinization, and the undue cost they incur on organizations. In addition, the professionals associated the improved, standardized care processes, as a pro, and the judgmental nature of the program survey, as a con, with the program investigated. Besides rendering a tentative assessment of the Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens to scrutinize the virtues of their own evaluation systems by identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for performance assessment of similar evaluation programs.

  19. Advancing Evaluation of Character Building Programs

    ERIC Educational Resources Information Center

    Urban, Jennifer Brown; Trochim, William M.

    2017-01-01

    This article presents how character development practitioners, researchers, and funders might think about evaluation, how evaluation fits into their work, and what needs to happen in order to sustain evaluative practices. A broader view of evaluation is presented whereby evaluation is not just seen as something that is applied at a program level,…

  20. The Nursing Leadership Institute program evaluation: a critique

    PubMed Central

    Havaei, Farinaz; MacPhee, Maura

    2015-01-01

    A theory-driven program evaluation was conducted for a nursing leadership program, as a collaborative project between university faculty, the nurses’ union, the provincial Ministry of Health, and its chief nursing officers. A collaborative logic model process was used to engage stakeholders, and mixed methods approaches were used to answer evaluation questions. Despite demonstrated, successful outcomes, the leadership program was not supported with continued funding. This paper examines what happened during the evaluation process: What factors failed to sustain this program? PMID:29355180

  1. A responsive evaluation of an Aboriginal nursing education access program.

    PubMed

    Curran, Vernon; Solberg, Shirley; LeFort, Sandra; Fleet, Lisa; Hollett, Ann

    2008-01-01

    Nursing education access programs have been introduced in a number of countries to address the shortage of healthcare providers of Aboriginal descent. An evaluation study of a nursing education access program in Labrador, Canada, was undertaken using a Responsive Evaluation approach. Interviews and focus groups with program stakeholders were conducted. Program effectiveness was influenced by culturally relevant curriculum, experiential and authentic learning opportunities, academic and social support, and the need for partnership building between stakeholders. The authors report key findings resulting from the Responsive Evaluation.

  2. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study.

  3. Motivation for Evaluation: A roadmap for Improving Program Efficacy

    NASA Astrophysics Data System (ADS)

    Taber, J. J.; Bohon, W.; Bravo, T. K.; Dorr, P. M.; Hubenthal, M.; Johnson, J. A.; Sumy, D. F.; Welti, R.; Davis, H. B.

    2016-12-01

    Over the past year, the Incorporated Research Institutions for Seismology (IRIS) Education and Public Outreach (EPO) program has undertaken a new effort to increase the rigor with which it evaluates its programs and products. More specifically, we sought to make evaluation an integral part of our EPO staff's work, enable staff to demonstrate why we do the activities we do, enhance the impact of our products and programs, and empower staff to be able to make evidence-based claims. The challenges we faced included a modest budget, finding an applicable approach to both new and legacy programs ranging from formal and informal education to public outreach, and implementing the process without overwhelming staff. The Collaborative Impact Analysis Method (IAM; Davis and Scalice, 2015) was selected as it allowed us to combine the EPO staff's knowledge of programs, audiences and content with the expertise of an outside evaluation expert, through consultations and a qualitative rubric assessing the initial state of each product/program's evaluation. Staff then developed action plans to make incremental improvements to the evaluation of programs over time. We have found that this approach promotes the development of staff knowledge and skills regarding evaluation, provides a common language among staff, increases enthusiasm to collect and share data, encourages discussions of evaluative approaches when planning new activities, and improves each program's ability to capture the intended and unintended effects on the behaviors, attitudes, skills, interests, and/or knowledge of users/participants. We will share the initial IAM scores for products and programs in the EPO portfolio, along with examples of the action plans for several key products and programs, and the impact that implementing those action plans has had on our evaluations. Davis, H. & Scalice, D. (2015). Evaluate the Impact of your Education and Outreach Program Using the Quantitative Collaborative Impact Analysis Method (Invited). Abstract ED53D-0871 presented at 2015 Fall Meeting, AGU, San Francisco, Calif., 14-18 Dec.

  4. Putting program evaluation into practice: enhancing the Girls Just Wanna Have Fun program.

    PubMed

    Bean, Corliss N; Kendellen, Kelsey; Halsall, Tanya; Forneris, Tanya

    2015-04-01

    In recent years there has been a call for increased community physical activity and sport programs for female youth that are deliberately structured to foster positive developmental outcomes. In addition, researchers have recognized the need to empirically evaluate such programs to ensure that youth are provided with optimal opportunities to thrive. This study represents a utilization-focused evaluation of Girls Just Wanna Have Fun, a female-only physical activity-based life skills community program. A utilization-focused evaluation is particularly important when the evaluation is to help stakeholders utilize the findings in practice. The purpose of this study was twofold: (a) to gain an understanding of the ongoing successes and challenges after year two of program implementation and (b) to examine how the adaptations made based on feedback from the first year evaluation were perceived as impacting the program. From interviews with youth participants and program leaders, three main themes with eight sub-themes emerged. The main themes were: (a) applying lessons learned can make a significant difference, (b) continually implementing successful strategies, and (c) ongoing challenges. Overall, this evaluation represents an important step in understanding how to improve program delivery to better meet the needs of the participants in community-based programming. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Design and implementation of an integrated, continuous evaluation, and quality improvement system for a state-based home-visiting program.

    PubMed

    McCabe, Bridget K; Potash, Dru; Omohundro, Ellen; Taylor, Cathy R

    2012-10-01

    To describe the design and implementation of an evaluation system to facilitate continuous quality improvement (CQI) and scientific evaluation in a statewide home visiting program, and to provide a summary of the system's progress in meeting intended outputs and short-term outcomes. Help Us Grow Successfully (HUGS) is a statewide home visiting program that provides services to at-risk pregnant/post-partum women, children (0-5 years), and their families. The program goals are to improve parenting skills and connect families to needed services and thus improve the health of the service population. The evaluation system is designed to: (1) integrate evaluation into daily workflow; (2) utilize standardized screening and evaluation tools; (3) facilitate a culture of CQI in program management; and (4) facilitate scientifically rigorous evaluations. The review of the system's design and implementation occurred through a formative evaluation process (reach, dose, and fidelity). Data were collected through electronic and paper surveys, administrative data, notes from management meetings, and medical chart review. In the design phase, four process and forty outcome measures were selected and are tracked using standardized screening and monitoring tools. During implementation, the reach and dose of training were adequate to successfully launch the evaluation/CQI system. All staff (n = 165) use the system for management of families; the supervisors (n = 18) use the system to track routine program activities. Data quality and availability are sufficient to support periodic program reviews at the region and state level. In the first 7 months, the HUGS evaluation system tracked 3,794 families (7,937 individuals). System use and acceptance is high. A successful implementation of a structured evaluation system with a strong CQI component is feasible in an existing, large statewide program. 
The evaluation/CQI system is an effective mechanism to drive modest change in management of the program.

  6. Strategies for Evaluating a Freshman Studies Program.

    ERIC Educational Resources Information Center

    Ketkar, Kusum; Bennett, Shelby D.

    1989-01-01

    The study developed an economic model for the evaluation of Seton Hall University's freshman studies program. Two techniques used to evaluate the economic success of the program are break-even analysis and the elasticity coefficient. (Author/MLW)
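The two techniques named in this record reduce to standard formulas: the break-even point is fixed cost divided by per-student contribution margin, and the elasticity coefficient relates the percentage change in enrollment to the percentage change in price. A minimal sketch follows; the function names and all numeric figures are hypothetical illustrations, since the abstract reports no actual cost or enrollment data:

```python
def break_even_enrollment(fixed_costs, tuition_per_student, variable_cost_per_student):
    """Enrollment at which tuition revenue covers total program cost."""
    contribution = tuition_per_student - variable_cost_per_student
    if contribution <= 0:
        raise ValueError("tuition must exceed per-student variable cost")
    return fixed_costs / contribution

def arc_elasticity(q1, q2, p1, p2):
    """Midpoint (arc) elasticity of enrollment with respect to price."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

# Hypothetical figures for illustration only:
print(break_even_enrollment(300_000, 4_000, 1_000))     # 100.0 students
print(round(arc_elasticity(100, 90, 4_000, 4_400), 2))  # -1.11
```

With these invented numbers, enrollment above 100 students makes the program economically viable, and an elasticity below -1 would indicate demand sensitive enough that a tuition increase reduces total revenue.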

  7. 1999 commuter assistance program evaluation manual

    DOT National Transportation Integrated Search

    2001-01-01

    This manual was developed to assist Florida's Commuter Assistance Programs (CAP) to measure and evaluate their performance. It provides information necessary for a CAP to create and implement its own evaluation program. It discusses performance measu...

  8. Practice guidelines for program evaluation in community-based rehabilitation.

    PubMed

    Grandisson, Marie; Hébert, Michèle; Thibeault, Rachel

    2017-06-01

    This paper proposes practice guidelines to evaluate community-based rehabilitation (CBR) programs. These were developed through a rigorous three-phase research process including a literature review on good practices in CBR program evaluation, a field study during which a South African CBR program was evaluated, and a Delphi study to generate consensus among a highly credible panel of CBR experts from a wide range of backgrounds and geographical areas. The 10 guidelines developed are summarized into a practice model highlighting key features of sound CBR program evaluation. They strongly indicate that sound CBR evaluations are those that give a voice and as much control as possible to the most affected groups, embrace the challenge of diversity, and foster use of evaluation processes and findings through a rigorous, collaborative and empowering approach. The practice guidelines should facilitate CBR evaluation decisions with respect to facilitating an evaluation process, using frameworks and designing methods. Implications for rehabilitation Ten practice guidelines provide guidance to facilitate sound community-based rehabilitation (CBR) program evaluation decisions. Key indications of good practice include: • being as participatory and empowering as possible; • ensuring that all, including the most affected, have a real opportunity to share their thoughts; • highly considering mixed methods and participatory tools; • adapting to fit evaluation context, local culture and language(s); • defining evaluation questions and reporting findings using shared CBR language when possible, which the framework offered may facilitate.

  9. Adult Literacy Education: Program Evaluation and Learner Assessment. Information Series No. 338.

    ERIC Educational Resources Information Center

    Lytle, Susan L.; Wolfe, Marcie

    Adult literacy programs need reliable information about program quality and effectiveness for accountability, improvement of practice, and expansion of knowledge. Evaluation and assessment reflect fundamental beliefs about adult learners, concepts of literacy, and educational settings. Resources for planning program evaluations include surveys,…

  10. Evaluating theory-based evaluation: information, norms, and adherence.

    PubMed

    Jacobs, W Jake; Sisco, Melissa; Hill, Dawn; Malter, Frederic; Figueredo, Aurelio José

    2012-08-01

    Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. A marriage of applicable psychological theory, appropriate program evaluation theory, and outcome of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social interventions--the marriage permits us to advance knowledge by making use of both success and failures. We briefly review well-established principles within the field of program evaluation, well-established processes involved in changing social norms and social-norm adherence, the outcome of several program evaluations focusing on smoking prevention, pro-environmental behavior, and rape prevention and, using the principle of learning from our failures, examine why these programs often do not perform as expected. Finally, we discuss the promise of learning from our collective experiences to develop a cumulative science of program evaluation and to improve the performance of extant and future interventions. Copyright © 2012. Published by Elsevier Ltd.

  11. Evaluation of School Library Media Centers: Demonstrating Quality.

    ERIC Educational Resources Information Center

    Everhart, Nancy

    2003-01-01

    Discusses ways to evaluate school library media programs and how to demonstrate quality. Topics include how principals evaluate programs; sources of evaluative data; national, state, and local instruments; surveys and interviews; Colorado benchmarks; evaluating the use of electronic resources; and computer reporting options. (LRW)

  12. Evaluation and capacity building to improve precollege science and mathematics achievement in the US: 10 CFR, Part 605. Technical progress report, June--December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-12-31

    The National Center for Improving Science Education has undertaken activities to achieve evaluation goals for DOE's precollege programs: develop means to determine program quality; develop means for determining the contribution of DOE precollege programs to both teacher enhancement and student achievement; provide evaluation designs and instruments and reports of program quality and impact; and strengthen both DOE's and the Labs' capacity to do both short- and long-term planning as well as deliver effective programs and evaluation. Appendices include an evaluation/technical assistance report, profiles of teacher research participation and teacher development programs, teacher surveys, an impact assessment design, and teacher research participation program anecdotes for 8 labs.

  13. Evaluating public participation in environmental decision-making: EPA's superfund community involvement program.

    PubMed

    Charnley, Susan; Engelbert, Bruce

    2005-11-01

    This article discusses an 8-year, ongoing project that evaluates the Environmental Protection Agency's Superfund community involvement program. The project originated as a response to the Government Performance and Results Act, which requires federal agencies to articulate program goals and evaluate and report their progress in meeting those goals. The evaluation project assesses how effective the Superfund community involvement program is in promoting public participation in decisions about how to clean up hazardous wastes at Superfund sites. We do three things in the article: (1) share our experience with evaluating an Agency public participation program, including lessons learned about methods of evaluation; (2) report evaluation results; and (3) address a number of issues pertaining to the evaluation of public participation in environmental decision-making. Our goal is to encourage more environmental managers to incorporate evaluation into their public participation programs as a tool for improving them. We found that written mail surveys were an effective and economical tool for obtaining feedback on EPA's community involvement program at Superfund sites. The evaluation focused on four criteria: citizen satisfaction with EPA information about the Superfund site, citizen understanding of environmental and human health risks associated with the site, citizen satisfaction with opportunities provided by EPA for community input, and citizen satisfaction with EPA's response to community input. While the evaluation results were mixed, community members who were most informed about and involved in the cleanup process at Superfund sites were generally also the most satisfied with the community involvement process and the job that EPA was doing cleaning up the site. We conclude that systematic evaluation provides meaningful and useful information that agencies can use to improve their public participation programs. 
However, there need to be institutionalized processes that ensure evaluation results are used to develop and implement strategies for improvement.
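The four-criteria mail-survey approach described in this record amounts to tabulating ratings per criterion. A minimal sketch of that tabulation, assuming hypothetical 5-point Likert responses (every number below is invented for illustration; the abstract reports no raw data):

```python
from statistics import mean

# Hypothetical 5-point responses, keyed by the four criteria named in the abstract.
criteria = {
    "satisfaction with EPA site information": [4, 5, 3, 4],
    "understanding of site risks": [3, 4, 4, 2],
    "satisfaction with opportunities for input": [5, 4, 4, 3],
    "satisfaction with EPA response to input": [2, 3, 4, 3],
}

# Summarize each criterion so low-scoring areas stand out for improvement.
for name, scores in criteria.items():
    print(f"{name}: mean = {mean(scores):.2f} (n = {len(scores)})")
```

A real analysis would also cross-tabulate satisfaction against respondents' level of involvement, which is how the study reached its finding that the most involved community members were also the most satisfied.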

  14. Evaluation of the Radiography Program at Caldwell Community College and Technical Institute--Summer, 1982.

    ERIC Educational Resources Information Center

    Pipes, V. David

    As part of a periodic evaluation of the occupational programs at Caldwell Community College and Technical Institute (CCC&TI), a study of the radiography program was conducted to collect information to facilitate planning, aid in program improvement, and meet accountability demands. The specific objectives of the program evaluation were to…

  15. Is Evaluative Research on Youth Suicide Programs Theory-Driven? The Canadian Experience.

    ERIC Educational Resources Information Center

    Breton, Jean-Jacques; Boyer, Richard; Bilodeau, Henriette; Raymond, Sylvaine; Joubert, Natacha; Nantel, Marie-Andree

    2002-01-01

    A review found that only 15 Canadian youth suicide programs were evaluated over the last decade. Descriptions of the programs were incomplete and theoretical bases for programs were not presented. Only two programs led to a reduction in suicidal behavior. Suggests that future evaluative research needs to place greater emphasis on content and…

  16. Guidelines "PEP," Peer Evaluation Program. A Systematic Approach for Evaluating Educational Programs, 1973-1974.

    ERIC Educational Resources Information Center

    Grotsky, Jeffery N.; And Others

    The Peer Evaluation Program (PEP) has been instituted by the Division of Special Education, Pennsylvania State Department of Education, to allow intermediate units an opportunity to continuously improve their programs. The advantages of the PEP system are: (1) it is a self-improvement system of program development; (2) PEP allows local autonomy as…

  17. Handbook for Evaluating Drug and Alcohol Prevention Programs: Staff/Team Evaluation of Prevention Programs (STEPP).

    ERIC Educational Resources Information Center

    Hawkins, J. David; Nederhood, Britt

    This handbook was developed for the purpose of providing drug and alcohol prevention program managers with a comprehensive yet easy-to-use tool to help their evaluation efforts. The handbook emphasizes program staff members working together as a team. It provides instruments and activities for determining program effectiveness, as well as…

  18. Assessing Operation Purple: A Program Evaluation of a Summer Camp for Military Youth

    DTIC Science & Technology

    2012-01-01

    However, it should be noted that the four FOCUS studies employed evaluation designs with no control or comparison group. Furthermore, despite... evaluation design was more robust than other military youth program evaluations that do not have a control or comparison group (Beardslee, Lester, et al... Operation Purple: A Program Evaluation of a Summer Camp for Military Youth camp group); the remaining group formed the no-camp, or control, group. We

  19. Using a systems orientation and foundational theory to enhance theory-driven human service program evaluations.

    PubMed

    Wasserman, Deborah L

    2010-05-01

    This paper offers a framework for using a systems orientation and "foundational theory" to enhance theory-driven evaluations and logic models. The framework guides the process of identifying and explaining operative relationships and perspectives within human service program systems. Self-Determination Theory exemplifies how a foundational theory can be used to support the framework in a wide range of program evaluations. Two examples illustrate how applications of the framework have improved the evaluators' abilities to observe and explain program effects. In both exemplars, improvements involved organizing into a single logic model heretofore seemingly disparate evaluation issues: valuing (by whose values), the role of organizational and program context, and evaluation anxiety and utilization. Copyright 2009 Elsevier Ltd. All rights reserved.

  20. Faculty performance evaluation in accredited U.S. public health graduate schools and programs: a national study.

    PubMed

    Gimbel, Ronald W; Cruess, David F; Schor, Kenneth; Hooper, Tomoko I; Barbour, Galen L

    2008-10-01

    To provide baseline data on evaluation of faculty performance in U.S. schools and programs of public health. The authors administered an anonymous Internet-based questionnaire using PHP Surveyor. The invited sample consisted of individuals listed in the Council on Education for Public Health (CEPH) Directory of Accredited Schools and Programs of Public Health. The authors explored performance measures in teaching, research, and service, and assessed how faculty performance measures are used. A total of 64 individuals (60.4%) responded to the survey, with 26 (40.6%) reporting accreditation/reaccreditation by CEPH within the preceding 24 months. Although all schools and programs employ faculty performance evaluations, a significant difference exists between schools and programs in the use of results for merit pay increases and mentoring purposes. Thirty-one (48.4%) of the organizations published minimum performance expectations. Fifty-nine (92.2%) of the respondents counted number of publications, but only 22 (34.4%) formally evaluated their quality. Sixty-two (96.9%) evaluated teaching through student course evaluations, and only 29 (45.3%) engaged in peer assessment. Although aggregate results of teaching evaluation are available to faculty and administrators, this information is often unavailable to students and the public. Most schools and programs documented faculty service activities qualitatively but neither assessed it quantitatively nor evaluated its impact. This study provides insight into how schools and programs of public health evaluate faculty performance. Results suggest that although schools and programs do evaluate faculty performance on a basic level, many do not devote substantial attention to this process.
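The counts and percentages this record reports can be cross-checked directly against its stated sample of 64 respondents; the sketch below does that arithmetic, and the 60.4% response rate implies an invited sample of about 106:

```python
# Cross-check of the counts/percentages reported in the abstract (n = 64).
respondents = 64
counts = {
    "CEPH (re)accreditation in preceding 24 months": 26,  # reported 40.6%
    "published minimum performance expectations": 31,     # reported 48.4%
    "counted number of publications": 59,                 # reported 92.2%
    "formally evaluated publication quality": 22,         # reported 34.4%
    "used student course evaluations": 62,                # reported 96.9%
    "engaged in peer assessment": 29,                     # reported 45.3%
}
for label, n in counts.items():
    print(f"{label}: {n}/{respondents} = {100 * n / respondents:.1f}%")

# Invited sample implied by the 60.4% response rate.
print(round(respondents / 0.604))  # 106
```

All six reported percentages reproduce exactly from the raw counts, which is a quick sanity check worth running on any survey summary before reusing its figures.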

  1. Program evaluation of FHWA pedestrian and bicycle safety activities.

    DOT National Transportation Integrated Search

    2011-03-01

    "Introduction: FHWA's Office of Highway Safety (HSA) initiated a program evaluation by Booz Allen Hamilton to assess the overall effectiveness of the Agency's Pedestrian and Bicycle Safety Program. The evaluation covers pedestrian and bicycle sa...

  2. Do Early Childhood Programs Have Lasting Effects on Children? Evaluation Science Brief

    ERIC Educational Resources Information Center

    National Forum on Early Childhood Program Evaluation, 2008

    2008-01-01

    "Evaluation Science Briefs" summarize the findings and implications of a recent study evaluating the effects of an early childhood program or environment. This Brief evaluates the study "Early Intervention in Low Birthweight Premature Infants: Results at 18 Years of Age for the Infant Health and Development Program (IHDP)" (M.C. McCormick, J.…

  3. The Development and Implementation of a Model for Evaluating Clinical Specialty Education Programs.

    ERIC Educational Resources Information Center

    McLean, James E.; And Others

    A new method for evaluating cancer education programs, using an external/internal evaluation team is outlined. The internal program staff are required to collect the data, arrange for a site visit, provide access to personnel, and make available other information requested by the evaluators. The external team consists of a dentist with oncological…

  4. Evaluating Youth Sexual Health Peer Education Programs: "Challenges and Suggestions for Effective Evaluation Practices"

    ERIC Educational Resources Information Center

    Jaworsky, Denise; Larkin, June; Sriranganathan, Gobika; Clout, Jerri; Janssen, Jesse; Campbell, Lisa; Flicker, Sarah; Stadnicki, Dan; Erlich, Leah; Flynn, Susan

    2013-01-01

    Although peer sexual health education is a common form of sexual health promotion for youth, systematic reviews of these programs are relatively rare. In this study we interviewed youth peer educators to inquire about their experience of program evaluation and their perception of what is needed to develop effective evaluation practices. Data were…

  5. Redesigning and Aligning Assessment and Evaluation for a Federally Funded Math and Science Teacher Educational Program

    ERIC Educational Resources Information Center

    Hardre, Patricia L.; Slater, Janis; Nanny, Mark

    2010-01-01

    This paper examines the redesign of evaluation components for a teacher professional development project funded by the National Science Foundation. It focuses on aligning evaluation instrumentation and strategies with program goals, research goals and program evaluation best practices. The study identifies weaknesses in the original (year 1)…

  6. School Community Education Program in New York City 1988-89. Volume II. OREA Evaluation Section Report.

    ERIC Educational Resources Information Center

    Guerrero, Frank; Abbott, Lori

    This second volume of a four-volume evaluation of the 1988-89 New York City School Community Education Program (also known as the Umbrella Program) comprises reports evaluating nine innovative elementary school projects on social, ethical, and environmental studies, four of which included staff development workshops. Evaluation sources included…

  7. Mandarins and Lemons--The Executive Investment in Program Evaluation.

    ERIC Educational Resources Information Center

    MacDonald, Barry

    The author contends that within executive government, which already has all the information it needs to make decisions, program evaluation is seen as a symbolic rather than a substantive enterprise. If it is assumed that program evaluation is an end of policy, not a means, the reasons for government investment in evaluation include: (1) no choice;…

  8. Formative Evaluation of the No-Fee Teacher Education Program from the Students' Standpoint

    ERIC Educational Resources Information Center

    Han, Yumei; Hu, Meizhong; Li, Ling

    2013-01-01

    This exploratory case study applied a formative evaluation framework to evaluate the no-fee teacher education program at Southwest University. The study focused on the students' perspective and their perceptions of the program, both intrinsic and extrinsic. A self-evaluation checklist and a questionnaire were the instruments used to collect data.…

  9. Curriculum and Evaluation Guide for Safety Education Programs. Research and Evaluation Report Series No. 40.00.

    ERIC Educational Resources Information Center

    Lowry, Carlee S.

    Designed to assist Bureau of Indian Affairs school officials in the identification of safety education program needs, this evaluation guide focuses upon the basic operational components in a safety education program. The means for establishing an evaluation design for safety education are presented via a flexible model appropriate for most…

  10. Insourcing, Not Capacity Building, a Better Model for Sustained Program Evaluation

    ERIC Educational Resources Information Center

    Miller, Thomas I.; Kobayashi, Michelle M.; Noble, Paula M.

    2006-01-01

    Community-based organizations (CBOs), typically small and underfunded with transient staff members, are told by funders to care for clients and verify program value. To assist CBOs with evaluations that speak to program effectiveness, many funders wish to expand the evaluation capacity of CBO staff members so that evaluation will occur as long as…

  11. Second NBL measurement evaluation program meeting: A summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spaletto, M.I.; Clapper, M.; Tolbert, M.E.M.

    New Brunswick Laboratory (NBL), the US government's nuclear materials measurements and reference materials laboratory, administers interlaboratory measurement evaluation programs to evaluate the quality and adequacy of safeguards measurements. The NBL Measurement Evaluation Program covers several types of safeguards analytical measurements. The Safeguards Measurement Evaluation (SME) program distributes test materials for destructive measurements of uranium for both elemental concentration and isotopic abundances, and of plutonium for isotopic abundances. The Calorimetry Exchange (CalEx) Program tests the quality of nondestructive measurements of plutonium isotopic abundances by gamma spectroscopy and plutonium concentration by calorimetry. In May 1997, more than 30 representatives from the Department of Energy (DOE), its contractor laboratories, and Nuclear Regulatory Commission licensees met at NBL in Argonne, Illinois, for the annual meeting of the Measurement Evaluation Program. The summary which follows details key points that were discussed or presented at the meeting.

  12. Documenting Progress and Demonstrating Results: Evaluating Local Out-of-School Time Programs.

    ERIC Educational Resources Information Center

    Little, Priscilla; DuPree, Sharon; Deich, Sharon

    A collaborative publication between Harvard Family Research Project and The Finance Project, this brief offers guidance in documenting progress and demonstrating results in local out-of-school-time programs. Following introductory remarks providing a rationale for program evaluation, discussing principles of program evaluation, and clarifying key…

  13. What Do We Know about School-Based Management?

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Fasih, Tazeen; Barrera, Felipe; Garcia-Moreno, Vicente A.; Bentaouet-Kattan, Raja; Baksh, Shaista; Wickramasekera, Inosha

    2007-01-01

    Impact evaluations of school-based management (SBM) programs, or any other kind of program, are important because they can demonstrate whether or not the program has accomplished its objectives. Furthermore, these evaluations can identify ways to improve the design of the program. These evaluations can also make successful interventions…

  14. 40 CFR 51.353 - Network type and program evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Network type and program evaluation. 51... Requirements § 51.353 Network type and program evaluation. Basic and enhanced I/M programs can be centralized.... (a) Presumptive equivalency. A decentralized network consisting of stations that only perform...

  15. Creating an Information Literacy Badges Program in Blackboard: A Formative Program Evaluation

    ERIC Educational Resources Information Center

    Tunon, Johanna; Ramirez, Laura Lucio; Ryckman, Brian; Campbell, Loy; Mlinar, Courtney

    2015-01-01

    A formative program evaluation using Stufflebeam's (2010) Context, Input, Process, Product (CIPP) model was conducted to assess the use of digital badges for tracking basic library instructional skills across academic programs at Nova Southeastern University. Based on the evaluation of pilot library modules and Blackboard Learn's badges…

  16. 29 CFR 1960.79 - Self-evaluations of occupational safety and health programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Self-evaluations of occupational safety and health programs. 1960.79 Section 1960.79 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs...

  17. 29 CFR 1960.79 - Self-evaluations of occupational safety and health programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Self-evaluations of occupational safety and health programs. 1960.79 Section 1960.79 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs...

  18. 29 CFR 1960.79 - Self-evaluations of occupational safety and health programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Self-evaluations of occupational safety and health programs. 1960.79 Section 1960.79 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs...

  19. 29 CFR 1960.79 - Self-evaluations of occupational safety and health programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Self-evaluations of occupational safety and health programs. 1960.79 Section 1960.79 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs...

  20. Home-Start between Childhood and Maturity: A Programme Evaluation.

    ERIC Educational Resources Information Center

    Terpstra, Linda; van Dijke, Anke

    A crucial question for evaluating nationally or internationally implemented programs is whether local adaptations detract from program quality and effectiveness. An evaluation examined the program successes and challenges encountered in the first 5 years of Home-Start in the Netherlands, a home-based family support program for families with young…

  1. Maryland Community Colleges 1980 Program Evaluations.

    ERIC Educational Resources Information Center

    Maryland State Board for Community Colleges, Annapolis.

    This report contains qualitative evaluations of 48 programs throughout the Maryland community college system, as well as a statewide evaluation of Teacher Education transfer programs. A summary of the Teacher Education programs is presented first, in which the purpose and role of teacher education in the community college, enrollment trends,…

  2. 29 CFR 1960.79 - Self-evaluations of occupational safety and health programs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Self-evaluations of occupational safety and health programs. 1960.79 Section 1960.79 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... AND HEALTH PROGRAMS AND RELATED MATTERS Evaluation of Federal Occupational Safety and Health Programs...

  3. INSREC: Computational System for Quantitative Analysis of Radiation Effects Covering All Radiation Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong Hoon Shin; Young Wook Lee; Young Ho Cho

    2006-07-01

    In the nuclear energy field, there are so many difficult things that even people who are working in this field are not much familiar with, such as, Dose evaluation, Dose management, etc. Thus, so many efforts have been done to achieve the knowledge and data for understanding. Although some data had been achieved, the applications of these data to necessary cases were more difficult job. Moreover, the type of Dose evaluation program until now was 'Console type' which is not easy enough to use for the beginners. To overcome the above causes of difficulties, the window-based integrated program and databasemore » management were developed in our research lab. The program, called as INSREC, consists of four sub-programs as follow; INSREC-NOM, INSREC-ACT, INSREC-MED, and INSREC-EXI. In ICONE 11 conference, INSREC-program(ICONE-36203) which can evaluates on/off-site dose of nuclear power plant in normal operation was introduced. Upgraded INSREC-program which will be presented in ICONE 14 conference has three additional codes comparing with pre-presented INSREC-program. Those subprograms can evaluate on/off-site Dose of nuclear power plant in accident cases. And they also have the functions of 'Dose evaluation and management' in the hospital and provide the 'Expert system' based on knowledge related to nuclear energy/radiation field. The INSREC-NOM, one of subprograms, is composed of 'Source term evaluation program', 'Atmospheric diffusion factor evaluation program', 'Off-site dose evaluation program', and 'On-site database program'. The INSREC-ACT is composed of 'On/Off-site dose evaluation program' and 'Result analysis program' and the INSREC-MED is composed of 'Workers/patients dose database program' and 'Dose evaluation program for treatment room'. The final one, INSREC-EXI, is composed of 'Database searching program based on artificial intelligence', 'Instruction program,' and 'FAQ/Q and A boards'. 
Each program was developed mainly with Visual C++ and Microsoft Access. To verify reliability, suitable reference programs were selected for comparison: the AZAP program for on/off-site dose evaluation during normal reactor operation, the Stardose program for on/off-site dose evaluation in accidents, and the MCNP code for dose evaluation and management in the hospital. Each comparison result fell within acceptable error, so it was concluded that the INSREC program calculates dose with acceptable reliability and can provide useful data for the sites. To make INSREC available to users, a server system was constructed, and users were given access to INSREC over a network connected to that server; user reactions were quite satisfactory. Future work will improve the user interface and supply more data to more users through database supplementation and management. (authors)
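The abstract names the INSREC modules but not their algorithms, so the following is only a minimal sketch of the kind of calculation an "atmospheric diffusion factor" and "off-site dose evaluation" module performs: a ground-level Gaussian-plume dispersion factor (chi/Q) combined with a source term and a dose conversion factor. All function names, signatures, and numbers are hypothetical illustrations, not INSREC's actual code.

```python
import math

def chi_over_q(sigma_y, sigma_z, u, y=0.0, z=0.0, h=0.0):
    """Gaussian-plume atmospheric dispersion factor chi/Q (s/m^3).

    sigma_y, sigma_z: dispersion coefficients at the downwind distance (m);
    u: mean wind speed (m/s); y, z: receptor offsets (m); h: release height (m).
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # The vertical term includes the ground-reflection image source.
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * math.pi * sigma_y * sigma_z * u)

def offsite_dose_rate(release_rate_bq_per_s, chi_q, dcf_sv_m3_per_bq_s):
    """Dose rate (Sv/s) = source term * dispersion factor * dose conversion factor."""
    return release_rate_bq_per_s * chi_q * dcf_sv_m3_per_bq_s
```

For a centerline ground-level receptor, `chi_over_q(50.0, 30.0, 2.0)` reduces to 1/(pi * sigma_y * sigma_z * u); a real code would additionally handle stability classes, plume depletion, and nuclide-specific dose conversion factors.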

  4. CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program.

    PubMed

    Moores, Carly Jane; Miller, Jacqueline; Perry, Rebecca Anne; Chan, Lily Lai Hang; Daniels, Lynne Allison; Vidgen, Helen Anna; Magarey, Anthea Margaret

    2017-11-29

Translation encompasses the continuum from clinical efficacy to widespread adoption within the healthcare service and ultimately routine clinical practice. The Parenting, Eating and Activity for Child Health (PEACH™) program has previously demonstrated clinical effectiveness in the management of child obesity, and has recently been implemented as a large-scale community intervention in Queensland, Australia. This paper aims to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention (PEACH™ QLD). Tensions between the RCT paradigm and implementation research will be discussed, along with lived evaluation challenges, responses to overcome these, and key learnings for future evaluation conducted at scale. The translation of evaluation from the PEACH™ RCT to the large-scale community intervention PEACH™ QLD is described. While the CONSORT Statement was used to report findings from two previous RCTs, the RE-AIM framework was more suitable for the evaluation of upscaled delivery of the PEACH™ program. Evaluation of PEACH™ QLD was undertaken during the project delivery period from 2013 to 2016. Experiential learnings from conducting the evaluation of PEACH™ QLD according to the described evaluation framework are presented to inform the future evaluation of upscaled programs. Evaluation changes in response to real-time changes in the delivery of the PEACH™ QLD Project were necessary at stages during the project term. Key evaluation challenges encountered included the collection of complete evaluation data from a diverse and geographically dispersed workforce and the systematic collection of process evaluation data in real time to support program changes during the project. Evaluation of large-scale community interventions in the real world is challenging and divergent from RCTs, which are rigorously evaluated within a more tightly controlled clinical research setting.
Constructs explored in an RCT are inadequate in describing the enablers and barriers of upscaled community program implementation. Methods for data collection, analysis and reporting also require consideration. We present a number of experiential reflections and suggestions for the successful evaluation of future upscaled community programs which are scarcely reported in the literature. PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).

  5. A natural history of behavioral health program evaluation in Arizona.

    PubMed

    Braun, S H; Irving, D

    1984-01-01

This article examines the history of behavioral health program evaluation efforts in the state of Arizona during the years 1974-1982. Program evaluation in Arizona has been carried out in an environment where planning, monitoring, contracting, appropriations, and evaluation have been interrelated, sometimes loosely, sometimes closely. Here we trace the year-by-year evolution of the evaluation system and its connections with the other parts of this environment. This history illustrates the gradual development of an evaluation system in an organizational context, including the sidetracks and setbacks.

  6. Assessing the effects of employee assistance programs: a review of employee assistance program evaluations.

    PubMed

    Colantonio, A

    1989-01-01

    Employee assistance programs have grown at a dramatic rate, yet the effectiveness of these programs has been called into question. The purpose of this paper was to assess the effectiveness of employee assistance programs (EAPs) by reviewing recently published EAP evaluations. All studies evaluating EAPs published since 1975 from peer-reviewed journals in the English language were included in this analysis. Each of the articles was assessed in the following areas: (a) program description (subjects, setting, type of intervention, format), (b) evaluation design (research design, variables measured, operational methods), and (c) program outcomes. Results indicate numerous methodological and conceptual weaknesses and issues. These weaknesses included lack of controlled research designs and short time lags between pre- and post-test measures. Other problems identified are missing information regarding subjects, type of intervention, how variables are measured (operational methods), and reliability and validity of evaluation instruments. Due to the aforementioned weaknesses, positive outcomes could not be supported. Recommendations are made for future EAP evaluations.

  7. Assessing the effects of employee assistance programs: a review of employee assistance program evaluations.

    PubMed Central

    Colantonio, A.

    1989-01-01

    Employee assistance programs have grown at a dramatic rate, yet the effectiveness of these programs has been called into question. The purpose of this paper was to assess the effectiveness of employee assistance programs (EAPs) by reviewing recently published EAP evaluations. All studies evaluating EAPs published since 1975 from peer-reviewed journals in the English language were included in this analysis. Each of the articles was assessed in the following areas: (a) program description (subjects, setting, type of intervention, format), (b) evaluation design (research design, variables measured, operational methods), and (c) program outcomes. Results indicate numerous methodological and conceptual weaknesses and issues. These weaknesses included lack of controlled research designs and short time lags between pre- and post-test measures. Other problems identified are missing information regarding subjects, type of intervention, how variables are measured (operational methods), and reliability and validity of evaluation instruments. Due to the aforementioned weaknesses, positive outcomes could not be supported. Recommendations are made for future EAP evaluations. PMID:2728498

  8. Intern evaluation strategies in family medicine residency education: what is-and is not-being done.

    PubMed

    Yates, Jennifer E

    2013-06-01

    Family medicine interns often have deficiencies that are not initially appreciated. By recognizing those growth opportunities early, programs may be able to better meet their interns' training needs. This study provides a needs assessment to ascertain what evaluation tools are being utilized by residency programs to assess their incoming interns. A questionnaire was sent to all US family medicine residency program coordinators (439 programs) via SurveyMonkey© inquiring about whether intern evaluation is performed and, if so, what strategies are used. A mixed-mode methodology was used: mailing with incentive, email prompts, and telephone calls. Of 439 programs, 220 (50%) responded to the survey. Most respondents (145, 66%) think intern evaluation is needed. However, only 79 (36%) programs are actually doing intern evaluations, and only 14 (6.4%) do so extensively. Most programs are performing simulations (81, 45%) and assessing knowledge/comfort levels (79, 36%); fewer than one third are considering personality/learning styles, and almost no programs are evaluating skills such as typing (three, 1.4%) and math (one, 0.5%). Many programs use evaluations to guide future planning, help with early identification of challenging learners, and match training to the residents' needs. Several programs expressed concern about how they would use the information once obtained. The majority of respondents agreed that a baseline intern evaluation is useful; few are actually doing it. This area is not well described in the literature; residency programs could benefit from information sharing. The next step is to encourage interest in and implementation of such strategies.

  9. Evaluation of Athletic Training Students' Clinical Proficiencies

    PubMed Central

    Walker, Stacy E; Weidner, Thomas G; Armstrong, Kirk J

    2008-01-01

    Context: Appropriate methods for evaluating clinical proficiencies are essential in ensuring entry-level competence. Objective: To investigate the common methods athletic training education programs use to evaluate student performance of clinical proficiencies. Design: Cross-sectional design. Setting: Public and private institutions nationwide. Patients or Other Participants: All program directors of athletic training education programs accredited by the Commission on Accreditation of Allied Health Education Programs as of January 2006 (n = 337); 201 (59.6%) program directors responded. Data Collection and Analysis: The institutional survey consisted of 11 items regarding institutional and program demographics. The 14-item Methods of Clinical Proficiency Evaluation in Athletic Training survey consisted of respondents' demographic characteristics and Likert-scale items regarding clinical proficiency evaluation methods and barriers, educational content areas, and clinical experience settings. We used analyses of variance and independent t tests to assess differences among athletic training education program characteristics and the barriers, methods, content areas, and settings regarding clinical proficiency evaluation. Results: Of the 3 methods investigated, simulations (n = 191, 95.0%) were the most prevalent method of clinical proficiency evaluation. An independent-samples t test revealed that more opportunities existed for real-time evaluations in the college or high school athletic training room (t(189) = 2.866, P = .037) than in other settings. Orthopaedic clinical examination and diagnosis (4.37 ± 0.826) and therapeutic modalities (4.36 ± 0.738) content areas were scored the highest in sufficient opportunities for real-time clinical proficiency evaluations. 
An inadequate volume of injuries or conditions (3.99 ± 1.033) and injury/condition occurrence not coinciding with the clinical proficiency assessment timetable (4.06 ± 0.995) were barriers to real-time evaluation. One-way analyses of variance revealed no difference between athletic training education program characteristics and the opportunities for and barriers to real-time evaluations among the various clinical experience settings. Conclusions: No one primary barrier hindered real-time clinical proficiency evaluation. To determine athletic training students' clinical proficiency for entry-level employment, athletic training education programs must incorporate standardized patients or take a disciplined approach to using simulation for instruction and evaluation. PMID:18668172

  10. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM TECHNOLOGY PROFILES: SIXTH EDITION

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) Program evaluates new and promising treatment and monitoring and measurement technologies for cleanup of hazardous waste sites. The program was created to encourage the development and routine use of innovative treatment techn...

  11. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM - TECHNOLOGY PROFILES - SEVENTH EDITION

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) Program evaluates new and promising treatment and monitoring and measurement technologies for cleanup of hazardous waste sites. The program was created to encourage the development and routine use of innovative treatment techn...

  12. Symposium: Perspectives on Formative Evaluation of Children's Television Programs.

    ERIC Educational Resources Information Center

    1977

    Evaluators of television programing and representatives of funding agencies discussed the impact of the perceptions of funding agencies on the evaluation of children's television. Participants also examined the interplay between the objectives of the television series and the evaluation, the relationship between production and evaluation, and the…

  13. Institution Building and Evaluation.

    ERIC Educational Resources Information Center

    Wedemeyer, Charles A.

    Institutional modeling and program evaluation in relation to a correspondence program are discussed. The evaluation process is first considered from the viewpoint that it is an add-on activity, which is largely summative, and is the least desirable type of evaluation. Formative evaluation is next considered as a part of the process of institution…

  14. Assessing the Subsequent Effect of a Formative Evaluation on a Program.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    2001-01-01

    Conducted a formative evaluation of an osteoporosis prevention health education program using several methods, including questionnaires completed by 256 women, and then compared formative evaluation results to those of a summative evaluation focusing on the same target group. Results show the usefulness of formative evaluation for strengthening…

  15. Housing First and Photovoice: Transforming Lives, Communities, and Systems

    PubMed Central

    Barile, John P.; Ogawa, Terry Yasuko; Peralta, Nelson; Bugg, Reumell; Lau, John; Lamberton, Thomas; Hall, Corazon; Mori, Victor

    2018-01-01

    This article presents findings from a community-based participatory evaluation of a Housing First program on the Island of O’ahu. In this study, clients in a Housing First program used Photovoice to evaluate the program and to advocate for progressive housing policies. Written together by members of the Housing First Photovoice group, this collaborative article describes the outcomes from both the Housing First program and the Photovoice project and demonstrates the ways in which participatory program evaluations can interact with client-driven programs like Housing First to produce a cumulative, transformative impact. Findings suggest that community psychologists hoping to re-engage with community mental health systems through enacting transformative change should consider taking a community-based participatory approach to program evaluation because increased client voice in community mental health programs and their evaluations can have far-reaching, transformative impacts for research, practice, and policy. PMID:29323410

  16. Evaluative Thinking in Practice: The National Asthma Control Program

    PubMed Central

    Fierro, Leslie A.; Codd, Heather; Gill, Sarah; Pham, Phung K.; Grandjean Targos, Piper T.; Wilce, Maureen

    2018-01-01

    Although evaluative thinking lies at the heart of what we do as evaluators and what we hope to promote in others through our efforts to build evaluation capacity, researchers have given limited attention to measuring this concept. We undertook a research study to better understand how instances of evaluative thinking may present in practice-based settings, specifically within four state asthma control programs funded by the Centers for Disease Control and Prevention's National Asthma Control Program. Through content analyses of documents, as well as interviews and a subsequent focus group with evaluators and program managers from the four state asthma control programs, we identified and defined twenty-two indicators of evaluative thinking. Findings provide insights about what practitioners may wish to look for when they intend to build evaluative thinking and the types of data sources that may be more or less helpful in such efforts. PMID:29950803

  17. Evaluation of STD/AIDS prevention programs: a review of approaches and methodologies.

    PubMed

    da Cruz, Marly Marques; dos Santos, Elizabeth Moreira; Monteiro, Simone

    2007-05-01

    The article presents a review of approaches and methodologies in the evaluation of STD/AIDS prevention programs, searching for theoretical and methodological support for the institutionalization of evaluation and decision-making. The review included the MEDLINE, SciELO, and ISI Web of Science databases and other sources like textbooks and congress abstracts from 1990 to 2005, with the key words: "evaluation", "programs", "prevention", "STD/AIDS", and similar terms. The papers showed a predominance of quantitative outcome or impact evaluative studies with an experimental or quasi-experimental design. The main use of evaluation is accountability, although knowledge output and program improvement were also identified in the studies. Only a few evaluative studies contemplate process evaluation and its relationship to the contexts. The review aimed to contribute to the debate on STD/AIDS, which requires more effective, consistent, and sustainable decisions in the field of prevention.

  18. Monitoring and evaluating transition and sustainability of donor-funded programs: Reflections on the Avahan experience.

    PubMed

    Bennett, Sara; Ozawa, Sachiko; Rodriguez, Daniela; Paul, Amy; Singh, Kriti; Singh, Suneeta

    2015-10-01

    In low and middle-income countries, programs funded and implemented by international donors frequently transition to local funding and management, yet such processes are rarely evaluated. We reflect upon experience evaluating the transition of a large-scale HIV/AIDS prevention program in India, known as Avahan, in order to draw lessons about transition evaluation approaches and implementation challenges. In conceptualizing the transition theory, the evaluation team identified tensions between the idea of institutionalizing key features of the Avahan program and ensuring program flexibility to promote sustainability. The transition was planned in three rounds, allowing for adaptations to the transition intervention and program design during the transition period. The assessment team found it important to track these changes in order to understand which strategies and contextual features supported transition. A mixed-methods evaluation was employed, combining semi-structured surveys of transitioning entities (conducted pre and post transition) with longitudinal case studies. Qualitative data helped explain quantitative findings. Measures of transition readiness appeared robust, but we were uncertain of the robustness of institutionalization measures. Finally, challenges to the implementation of such an evaluation are discussed. Given the scarcity of transition evaluations, the lessons from this evaluation may have widespread relevance. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Measuring Success in Obesity Prevention: A Synthesis of Health Promotion Switzerland's Long-Term Monitoring and Evaluation Strategy

    PubMed Central

    Ackermann, Günter; Kirschner, Michael; Guggenbühl, Lisa; Abel, Bettina; Klohn, Axel; Mattig, Thomas

    2015-01-01

    Aims: Since 2007, Health Promotion Switzerland has implemented a national priority program for a healthy body weight. This article provides insight into the methodological challenges and results of the program evaluation. Methods: Evaluation of the long-term program required targeted monitoring and evaluation projects addressing different outcome levels. The evaluation was carried out according to the Swiss Model for Outcome Classification (SMOC), a model designed to classify the effects of health promotion and prevention efforts. Results: The results presented in this article emphasize both content and methods. The national program successfully achieved outcomes on many different levels within complex societal structures. The evaluation system built around the SMOC enabled assessment of program progress and the development of key indicators. However, it is not possible to determine definitively to what extent the national program helped stabilize the prevalence of obesity in Switzerland. Conclusion: The model has shown its utility in providing a basis for evaluation and monitoring of the national program. Continuous analysis of data from evaluation and monitoring has made it possible to check the plausibility of suspected causal relationships as well as to establish an overall perspective and assessment of effectiveness supported by a growing body of evidence. PMID:25765161

  20. Potential pros and cons of external healthcare performance evaluation systems: real-life perspectives on Iranian hospital evaluation and accreditation program

    PubMed Central

    Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance evaluation is essential to quality improvement in healthcare. The current study identified the potential pros and cons of external healthcare evaluation programs, utilizing them subsequently to look into the merits of a similar case in a developing country. Methods: A mixed-method study employing both qualitative and quantitative data collection and analysis techniques was adopted to achieve this end. Subject matter experts (SMEs) and professionals were approached in a two-stage process of data collection. Results: Potential advantages included greater attractiveness of highly ranked accredited healthcare organizations to their customers/purchasers and boosted morale of their personnel. Downsides comprised the programs' over-reliance on surveyors' value judgments, routinization, and the undue cost imposed on organizations. In addition, the professionals associated improved, standardized care processes (as a pro) and the judgmental nature of the program survey (as a con) with the program investigated. Conclusion: Besides rendering a tentative assessment of the Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens to scrutinize the virtues of their own evaluation systems by identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for performance assessment of similar evaluation programs. PMID:25279381

  1. Evaluating a Small-Group Counseling Program--A Model for Program Planning and Improvement in the Elementary Setting

    ERIC Educational Resources Information Center

    Bostick, Dee; Anderson, Ron

    2009-01-01

    School counselors are under increasing pressure to evaluate their programs in a manner consistent with teachers and other educators. A small-group counseling intervention was used by a school counselor as part of a three-level program planning initiative that illustrated best research practices to evaluate program outcomes. Forty-nine third-grade…

  2. Evaluation of a peer assessment approach for enhancing the organizational capacity of state injury prevention programs.

    PubMed

    Hunter, Wanda M; Schmidt, Ellen R; Zakocs, Ronda

    2005-01-01

    To conduct a formative and pilot impact evaluation of the State Technical Assessment Team (STAT) program, a visitation-based (visitatie) peer assessment program designed to enhance the organizational capacity of state health department injury prevention programs. The formative evaluation was based on observational, record review, and key informant interview data collected during the implementation of the first 7 STAT visits. Pilot impact data were derived from semi-structured interviews with state injury prevention personnel one year after the visit. Formative evaluation identified 6 significant implementation problems in the first visits that were addressed by the program planners, resulting in improvements to the STAT assessment protocol. Impact evaluation revealed that after one year, the 7 state injury prevention programs had acted on 81% of the recommendations received during their STAT visits. All programs reported gains in visibility and credibility within the state health department and increased collaboration and cooperation with other units and agencies. Other significant program advancements were also reported. Specific program standards and review procedures are important to the success of peer assessment programs such as STAT. Early impact evaluation suggests that peer assessment protocols using the visitatie model can lead to gains in organizational capacity.

  3. Evaluation of clinical practice guidelines.

    PubMed Central

    Basinski, A S

    1995-01-01

    Compared with the current focus on the development of clinical practice guidelines the effort devoted to their evaluation is meagre. Yet the ultimate success of guidelines depends on routine evaluation. Three types of evaluation are identified: evaluation of guidelines under development and before dissemination and implementation, evaluation of health care programs in which guidelines play a central role, and scientific evaluation, through studies that provide the scientific knowledge base for further evolution of guidelines. Identification of evaluation and program goals, evaluation design and a framework for evaluation planning are discussed. PMID:7489550

  4. Methodology for the evaluation of the Stephanie Alexander Kitchen Garden program.

    PubMed

    Gibbs, L; Staiger, P K; Townsend, M; Macfarlane, S; Gold, L; Block, K; Johnson, B; Kulas, J; Waters, E

    2013-04-01

    Community and school cooking and gardening programs have recently increased internationally. However, despite promising indications, there is limited evidence of their effectiveness. This paper presents the evaluation framework and methods negotiated and developed to meet the information needs of all stakeholders for the Stephanie Alexander Kitchen Garden (SAKG) program, a combined cooking and gardening program implemented in selectively funded primary schools across Australia. The evaluation used multiple aligned theoretical frameworks and models, including a public health ecological approach, principles of effective health promotion and models of experiential learning. The evaluation is a non-randomised comparison of six schools receiving the program (intervention) and six comparison schools (all government-funded primary schools) in urban and rural areas of Victoria, Australia. A mixed-methods approach was used, relying on qualitative measures to understand changes in school cultures and the experiential impacts on children, families, teachers, parents and volunteers, and quantitative measures at baseline and 1 year follow up to provide supporting information regarding patterns of change. The evaluation study design addressed the limitations of many existing evaluation studies of cooking or garden programs. The multistrand approach to the mixed methodology maintained the rigour of the respective methods and provided an opportunity to explore complexity in the findings. Limited sensitivity of some of the quantitative measures was identified, as well as the potential for bias in the coding of the open-ended questions. The SAKG evaluation methodology will address the need for appropriate evaluation approaches for school-based kitchen garden programs. It demonstrates the feasibility of a meaningful, comprehensive evaluation of school-based programs and also demonstrates the central role qualitative methods can have in a mixed-method evaluation. So what? 
This paper contributes to debate about appropriate evaluation approaches to meet the information needs of all stakeholders and will support the sharing of measures and potential comparisons between program outcomes for comparable population groups and settings.

  5. Using a Non-Equivalent Groups Quasi Experimental Design to Reduce Internal Validity Threats to Claims Made by Math and Science K-12 Teacher Recruitment Programs

    NASA Astrophysics Data System (ADS)

    Moin, Laura

    2009-10-01

    The American Recovery and Reinvestment Act national policy established in 2009 calls for "meaningful data" that demonstrate educational improvements, including the recruitment of high-quality teachers. The scant data available and the low credibility of many K-12 math/science teacher recruitment program evaluations remain the major barriers for the identification of effective recruitment strategies. Our study presents a methodology to better evaluate the impact of recruitment programs on increasing participants' interest in teaching careers. The research capitalizes on the use of several control groups and presents a non-equivalent groups quasi-experimental evaluation design that produces program effect claims with higher internal validity than claims generated by current program evaluations. With this method that compares responses to a teaching career interest question from undergraduates all along a continuum from just attending an information session to participating (or not) in the recruitment program, we were able to compare the effect of the program in increasing participants' interest in teaching careers versus the evolution of the same interest but in the absence of the program. We were also able to make suggestions for program improvement and further research. While our findings may not apply to other K-12 math/science teacher recruitment programs, we believe that our evaluation methodology does and will contribute to conduct stronger program evaluations. In so doing, our evaluation procedure may inform recruitment program designers and policy makers.
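The abstract does not reproduce the study's instrument or analysis, so the following is only a minimal sketch of the core comparison a non-equivalent groups design makes: change in a teaching-career interest rating among program participants versus a comparison group that only attended an information session, summarized with Welch's t statistic. All data and variable names are hypothetical.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Hypothetical pre/post interest-in-teaching ratings on a 1-5 scale.
participants = [(2, 4), (3, 4), (2, 3), (3, 5), (4, 5)]  # completed the program
comparison = [(2, 2), (3, 3), (2, 3), (3, 2), (4, 4)]    # information session only

# Non-equivalent groups design: compare change scores, not raw post scores,
# to partially control for baseline differences between the groups.
gain_p = [post - pre for pre, post in participants]
gain_c = [post - pre for pre, post in comparison]
t = welch_t(gain_p, gain_c)  # positive t favors the program group
```

Because the groups are not randomly assigned, a positive t only strengthens, rather than proves, the program-effect claim; the paper's point is that such comparison groups raise internal validity relative to single-group pre/post evaluations.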

  6. A Structure and Scheme for the Evaluation of Innovative Programs. The EPIC Brief, Issue No. 2.

    ERIC Educational Resources Information Center

    Objective evaluation of school programs is a process in which a school staff collects information used to provide feedback as to whether or not a given set of objectives has been met. The Evaluative Programs for Innovative Curriculums (EPIC) four-step scheme of objective evaluation is based on a three-dimensional structure of variables…

  7. The Design and Pilot Evaluation of an Interactive Learning Environment for Introductory Programming Influenced by Cognitive Load Theory and Constructivism

    ERIC Educational Resources Information Center

    Moons, Jan; De Backer, Carlos

    2013-01-01

    This article presents the architecture and evaluation of a novel environment for programming education. The design of this programming environment, and the way it is used in class, is based on the findings of constructivist and cognitivist learning paradigms. The environment is evaluated based on qualitative student and teacher evaluations and…

  8. A semi-automated tool for treatment plan-quality evaluation and clinical trial quality assurance

    NASA Astrophysics Data System (ADS)

    Wang, Jiazhou; Chen, Wenzhou; Studenski, Matthew; Cui, Yunfeng; Lee, Andrew J.; Xiao, Ying

    2013-07-01

    The goal of this work is to develop a plan-quality evaluation program for clinical routine and multi-institutional clinical trials so that the overall evaluation efficiency is improved. In multi-institutional clinical trials, evaluating plan quality is a time-consuming and labor-intensive process. In this note, we present a semi-automated plan-quality evaluation program which combines MIMVista, Java/MATLAB, and Extensible Markup Language (XML). More specifically, MIMVista is used for data visualization; Java and its powerful function library are implemented for calculating dosimetry parameters; and to improve the clarity of the index definitions, XML is applied. The accuracy and the efficiency of the program were evaluated by comparing the results of the program with the manually recorded results in two RTOG trials. A slight difference of about 0.2% in volume or 0.6 Gy in dose between the semi-automated program and manual recording was observed. According to the criteria of indices, there are minimal differences between the two methods. The evaluation time is reduced from 10-20 min to 2 min by applying the semi-automated plan-quality evaluation program.
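The note's actual XML schema and index definitions are not given in the abstract, so the following is only a hypothetical sketch of the pattern it describes: plan-quality limits encoded in XML for clarity, then checked automatically against computed dosimetry values. Structure names, metric names, and limits are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML index definitions in the spirit of the paper's approach.
CRITERIA_XML = """
<criteria>
  <index structure="PTV" metric="min_dose_gy" limit="70.0" rule="ge"/>
  <index structure="SpinalCord" metric="max_dose_gy" limit="45.0" rule="le"/>
</criteria>
"""

def evaluate_plan(plan_metrics, criteria_xml=CRITERIA_XML):
    """Check computed dosimetry values against XML-encoded plan-quality limits.

    plan_metrics maps (structure, metric) pairs to computed values;
    returns a dict mapping each pair to True (limit met) or False.
    """
    results = {}
    for idx in ET.fromstring(criteria_xml):
        key = (idx.get("structure"), idx.get("metric"))
        value = plan_metrics[key]
        limit = float(idx.get("limit"))
        # "ge": value must be at least the limit; otherwise it must not exceed it.
        results[key] = value >= limit if idx.get("rule") == "ge" else value <= limit
    return results
```

Keeping the limits in a declarative XML file, rather than hard-coding them, is what lets one evaluation program serve multiple trials with different protocol criteria.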

  9. An Evaluation of TCITY: The Twin City Institute for Talented Youth. Report #1 in Evaluation Report Series.

    ERIC Educational Resources Information Center

    Stake, Robert E.; Gjerde, Craig

    This evaluation of the Twin City Institute for Talented Youth, a summer program for gifted students in grades 9 through 12, consists of two parts: a description of the program; and the evaluators' assessments, including advocate and adversary reports. Achievement tests were not used for evaluation. Evaluative comments follow each segment of the…

  10. Social Work and Evaluation: Why You Might Be Interested in the American Evaluation Association Social Work Topical Interest Group

    ERIC Educational Resources Information Center

    Wharton, Tracy C.; Kazi, Mansoor A.

    2012-01-01

    With increased pressure on programs to evaluate outcomes, the issue of evaluation in social work has never been so topical. In response to these pressures, there has been a growing interest in evidence-based practice and strategies for the evaluation of social work programs. The American Evaluation Association (AEA) is an international…

  11. Program Evaluation in Gifted Education. Essential Readings in Gifted Education Series

    ERIC Educational Resources Information Center

    Callahan, Carolyn M., Ed.; Reis, Sally M., Ed.

    2004-01-01

    The readings in this ready-reference report on specific program evaluations, offer critical guidance in the development and utilization of instruments for assessing gifted and talented programs, and are designed to stimulate the discussion of issues surrounding the evaluation of gifted programs. Key features include: (1) Carolyn M. Callahan's…

  12. A Program Evaluation Tool for Dual Enrollment Transition Programs

    ERIC Educational Resources Information Center

    Grigal, Meg; Dwyre, Amy; Emmett, Joyce; Emmett, Richard

    2012-01-01

    This article describes the development and use of a program evaluation tool designed to support self-assessment of college-based dual enrollment transition programs serving students with intellectual disabilities between the ages of 18-21 in college settings. The authors describe the need for such an evaluation tool, outline the areas addressed by…

  13. EVALUATION OF THE WEIGHT-BASED COLLECTION PROJECT IN FARMINGTON, MINNESOTA: A MITE PROGRAM EVALUATION

    EPA Science Inventory

This project evaluates a test program of a totally automated weight-based refuse disposal rate system. This test program was conducted by the City of Farmington, Minnesota between 1991 and 1993. The intent of the program was to test a mechanism which would automatically assess a fe...

  14. Applying the Concept of Trustworthiness to the Evaluation of a Clinical Program.

    ERIC Educational Resources Information Center

    Barzansky, Barbara; And Others

    An attending tutor program designed to increase faculty-student contact within an Obstetrics and Gynecology clerkship was evaluated. Sessions were observed, written documents were reviewed, and faculty and students were interviewed in order to determine if the program was meeting its goals. Based on the evaluation data, the program was…

  15. Evaluation of a Community-Based Aging Intervention Program

    ERIC Educational Resources Information Center

    Hsu, Hui-Chuan; Wang, Chun-Hou; Chen, Yi-Chun; Chang, Ming-Chen; Wang, Jean

    2010-01-01

    This study evaluated the outcome and process of a community-based aging intervention program for the elderly in Taiwan. The program included education on nutrition and dietary behavior and on physical activities. Outcome and process evaluations were conducted. The program may have had some effects on decreasing some dietary behavioral problems and…

  16. Assessment and Evaluation of the Utah Master Naturalist Program: Implications for Targeting Audiences

    ERIC Educational Resources Information Center

    Larese-Casanova, Mark

    2011-01-01

    The Utah Master Naturalist Program trains citizens who provide education, outreach, and service to promote citizen stewardship of natural resources within their communities. In 2007-2008, the Watersheds module of the program was evaluated for program success, and participant knowledge was assessed. Assessment and evaluation results indicated that…

  17. Some Measures of Evaluation and Effectiveness in Social Work Practice.

    ERIC Educational Resources Information Center

    Kapoor, J. M.

    Measures of accountability and evaluation of social work program efforts are examined. Evaluation of program effort refers to an assessment of the amount and kinds of program activities considered necessary for the accomplishment of program goals within a particular stage of development. It refers not only to staff time, activity, and commitment,…

  18. Formative Evaluation of the Canada Student Loans Program. Final Report

    ERIC Educational Resources Information Center

    Human Resources and Skills Development Canada, 2004

    2004-01-01

    The "Formative Evaluation of the Canada Student Loans Program" was undertaken to assess issues of program relevance, design and delivery and for the purposes of examining the early impacts of changes made to the program since 1998. The evaluation also reviewed the Performance Measurement Strategy contained in the July 2002 Results-Based…

  19. Evaluation of a Research Mentorship Program in Community Care

    ERIC Educational Resources Information Center

    Ploeg, Jenny; de Witt, Lorna; Hutchison, Brian; Hayward, Lynda; Grayson, Kim

    2008-01-01

    This article describes the results of a qualitative case study evaluating a research mentorship program in community care settings in Ontario, Canada. The purpose of the program was to build evaluation and research capacity among staff of community care agencies through a mentorship program. Data were collected through in-depth, semi-structured…

  20. Evaluation of a federally funded workforce development program: The Centers for Public Health Preparedness

    PubMed Central

    Sobelson, Robyn K.; Young, Andrea C.

    2017-01-01

    The Centers for Public Health Preparedness (CPHP) program was a five-year cooperative agreement funded by the Centers for Disease Control and Prevention (CDC). The program was initiated in 2004 to strengthen preparedness for terrorism and other emergencies by linking academic expertise to state and local health agency needs. The purposes of the evaluation study were to identify the results achieved by the Centers and inform program planning for future programs. The evaluation was summative and retrospective in its design and focused on the aggregate outcomes of the CPHP program. The evaluation results indicated progress was achieved on program goals related to development of new training products, training members of the public health workforce, and expansion of partnerships between accredited schools of public health and state and local public health departments. Evaluation results, as well as methodological insights gleaned during the planning and conduct of the CPHP evaluation, were used to inform the design of the next iteration of the CPHP Program, the Preparedness and Emergency Response Learning Centers (PERLC). PMID:23380597

  1. Cancer Therapy Evaluation Program | Office of Cancer Genomics

    Cancer.gov

    The Cancer Therapy Evaluation Program (CTEP) seeks to improve the lives of cancer patients by finding better treatments, control mechanisms, and cures for cancer. CTEP funds a national program of cancer research, sponsoring clinical trials to evaluate new anti-cancer agents.

  2. 5 CFR 339.205 - Medical evaluation programs.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Medical evaluation programs. 339.205 Section 339.205 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS MEDICAL QUALIFICATION DETERMINATIONS Physical and Medical Qualifications § 339.205 Medical evaluation programs. Agencies...

  3. 5 CFR 339.205 - Medical evaluation programs.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Medical evaluation programs. 339.205 Section 339.205 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS MEDICAL QUALIFICATION DETERMINATIONS Physical and Medical Qualifications § 339.205 Medical evaluation programs. Agencies...

  4. The SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION program - Technology Profiles

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) program was created to evaluate new and promising treatment technologies for cleanup at hazardous waste sites. The mission of the SITE program is to encourage the development and routine use of innovative treatment technologie...

  5. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM - TECHNOLOGY PROFILES 4th Edition

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) Program evaluates new and promising treatment technologies for cleanup of hazardous waste sites. The program was created to encourage the development and routine use of innovative treatment technologies. As a result, the SI...

  6. 5 CFR 339.205 - Medical evaluation programs.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Medical evaluation programs. 339.205 Section 339.205 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS MEDICAL QUALIFICATION DETERMINATIONS Physical and Medical Qualifications § 339.205 Medical evaluation programs. Agencies...

  7. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM ANNUAL REPORT TO CONGRESS FY 2002

    EPA Science Inventory

    This report details the Fiscal Year 2002 activities of the Superfund Innovative Technology Evaluation (SITE) Program. The Program focused on the remediation needs of the hazardous waste remediation community through demonstration and evaluation of innovative technologies for reme...

  8. SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM ANNUAL REPORT TO CONGRESS FY 2001

    EPA Science Inventory

    This report details the fiscal year 2001 activities of the Superfund Innovative Technology Evaluation (SITE) Program. The Program focuses on the remediation needs of the hazardous waste remediation community through demonstration and evaluation of innovative technologies for re...

  9. Revisioning the Process: A Case Study in Feminist Program Evaluation.

    ERIC Educational Resources Information Center

    Beardsley, Rebecca M.; Miller, Michelle Hughes

    2002-01-01

    Conducted a case study of the evaluation of a women's substance abuse prevention program and identified three key aspects of negotiated evaluation. Discusses the processes involved in feminist evaluation, including collaborative agenda setting and cooperative teamwork. (SLD)

  10. Evaluation of a Postdischarge Call System Using the Logic Model.

    PubMed

    Frye, Timothy C; Poe, Terri L; Wilson, Marisa L; Milligan, Gary

    2018-02-01

    This mixed-method study was conducted to evaluate a postdischarge call program for congestive heart failure patients at a major teaching hospital in the southeastern United States. The program was implemented based on the premise that it would improve patient outcomes and overall quality of life, but it had never been evaluated for effectiveness. The Logic Model was used to evaluate the input of key staff members to determine whether the outputs and results of the program matched the expectations of the organization. Interviews, online surveys, reviews of existing patient outcome data, and reviews of publicly available program marketing materials were used to ascertain current program output. After analyzing both qualitative and quantitative data from the evaluation, recommendations were made to the organization to improve the effectiveness of the program.

  11. National Weatherization Assistance Program Characterization - Describing the Pre-ARRA Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensch, Ingo; Keene, Ashleigh; Cowan, Claire

    2014-09-01

    This report characterizes the Department of Energy's Weatherization Assistance Program (WAP) as it was administered in Program Year 2008. WAP has supported energy efficiency improvements to the homes of low-income households in the United States since 1976. The program provides grants, guidance, and other support to grantees: weatherization programs administered by each of the 50 states, the District of Columbia and some Native American tribes. Although there have been studies of some grantee-administered weatherization programs, the overall effectiveness of the national weatherization program has not been formally evaluated since Program Year 1989. Since that time, the program has evolved significantly, with an increased focus on baseload electric usage, continued evolution of diagnostic tools, new guidelines and best practices for heating-related measures, and adjustments in program rules. More recently, the program has also adjusted to large, temporary funding increases and changes in federal rules spurred by the American Recovery and Reinvestment Act (ARRA). Because the Weatherization Assistance Program of today is dramatically different from the one evaluated in 1989, DOE determined to undertake a new comprehensive evaluation of the national program. This new national evaluation is managed by Oak Ridge National Laboratory (ORNL). Under a competitive solicitation process, ORNL selected APPRISE, Inc., Blasnik & Associates, Dalhoff Associates and the Energy Center of Wisconsin to conduct the evaluation. The national evaluation comprises two independent evaluations. The first evaluation, of which this report is a part, focuses on Program Year 2008 (PY08). The second evaluation focuses on the ARRA-funded years of 2009 through 2011. This report, together with its companion the Eligible Population Study, addresses specific program characterization goals established for the greater evaluation.
    The Energy Center led grantee and subgrantee data collection efforts, administering surveys to 51 grantees and 851 of the approximately 900 subgrantees that were slated to receive DOE weatherization funds in PY08. In all, seven different data collection instruments were used to gather the needed data: two instruments for grantees and five for subgrantees. See Table 1 for a list of these survey instruments. These surveys were used to determine, among other things: the structure and funding of weatherization programs; training and staff development of service providers; how weatherization services are delivered; and the clients served.

  12. Expert opinions on good practice in evaluation of health promotion and primary prevention measures related to children and adolescents in Germany.

    PubMed

    Korber, Katharina; Becker, Christian

    2017-10-02

    Determining what constitutes "good practice" in the measurement of the costs and effects of health promotion and disease prevention measures is of particular importance. The aim of this paper was to gather expert knowledge on (economic) evaluations of health promotion and prevention measures for children and adolescents, especially on the practical importance, the determinants of project success, meaningful parameters for evaluations, and supporting factors, but also on problems in their implementation. This information is targeted at people responsible for the development of primary prevention or health promotion programs. Partially structured open interviews were conducted by two interviewers and transcribed, paraphrased, and summarized for further use. Eight experts took part in the interviews. The interviewed experts saw evaluation as a useful tool to establish the effects of prevention programs, to inform program improvement and further development, and to provide arguments for decision making. The respondents thought that the determinants of a program's success were effectiveness with evidence of causality, cost-benefit relation, target-group reach, and sustainability. It was considered important that both hard and soft factors were included in an evaluation; costs were mentioned by only one expert. According to the experts, obstacles to evaluation were a lack of resources, additional labor requirements, and the evaluators' unfamiliarity with a program's contents. It was recommended to consider evaluation design before a program is launched, to co-operate with people involved in a program, and to make use of existing structures. While this study represents only a partial view of expert knowledge, it highlights important points to consider when developing evaluations of prevention programs. By considering these points, researchers could further advance towards a more comprehensive approach to evaluation targeting measures in children and adolescents.

  13. Encyclopedia of Educational Evaluation: Concepts and Techniques for Evaluating Education and Training Programs.

    ERIC Educational Resources Information Center

    Anderson, Scarvia B.; And Others

    Arranged like an encyclopedia, this book, addressed to directors and sponsors of education/training programs, as well as evaluators and those studying to become evaluators, unifies and systematizes the field of evaluation by organizing its main concepts and techniques into one volume. Researched and documented articles, contributed by recognized…

  14. Training Evaluation as an Integral Component of Training for Performance.

    ERIC Educational Resources Information Center

    Lapp, H. J., Jr.

    A training evaluation system should address four major areas: reaction, learning, behavior, and results. The training evaluation system at GPU Nuclear Corporation addresses each of these areas through practical approaches such as course and program evaluation. GPU's program evaluation instrument uses a Likert-type scale to assess task development,…

  15. Program Evaluation of the Associate of Arts Degree. Revised.

    ERIC Educational Resources Information Center

    2003

    This document is the program evaluation of the Associate of Arts Degree at Holmes Community College (Mississippi) that was completed in 2001. The Southern Association of Colleges and Schools mandates the evaluation so that all colleges have the opportunity to evaluate themselves and use the results of the evaluation to improve instruction. The…

  16. Higher Education Trends (1997-1999): Program Evaluation. ERIC-HE Trends.

    ERIC Educational Resources Information Center

    Kezar, Adrianna J.

    The amount of literature on program evaluation decreased in 1996, continuing a trend begun in the late 1980s. One exception to this is the literature on assessment. Another frequent issue is the technique of evaluation. Many examples of research on evaluation are from international settings, where accountability and evaluation appear to be…

  17. Program Evaluation at HEW: Research versus Reality. Part 2: Education.

    ERIC Educational Resources Information Center

    Abert, James G., Ed.

    Intended for both the student and the practitioner of evaluation, this book describes the state of the practice of program evaluation. Its focus is mainly institutional. Results of evaluation studies are of secondary importance. An introductory chapter written by the editor discusses evaluation at the Office of Education from 1967 through 1973.…

  18. Taiwan Teacher Preparation Program Evaluation: Some Critical Perspectives

    ERIC Educational Resources Information Center

    Liu, Tze-Chang

    2015-01-01

    This paper focuses on the influences and changes of recent Taiwan teacher preparation program evaluation (TTPPE) as one of the national evaluation projects conducted by the Higher Education Evaluation and Accreditation Council of Taiwan. The main concerns are what kind of ideology is transformed through the policy by means of evaluation, and what…

  19. Making Evaluation Work for You: Ideas for Deriving Multiple Benefits from Evaluation

    ERIC Educational Resources Information Center

    Jayaratne, K. S. U.

    2016-01-01

    Increased demand for accountability has forced Extension educators to evaluate their programs and document program impacts. Due to this situation, some Extension educators may view evaluation simply as the task, imposed on them by administrators, of collecting outcome and impact data for accountability. They do not perceive evaluation as a useful…

  20. RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.

    ERIC Educational Resources Information Center

    Kershner, Keith M.

    Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…

  1. EVALUATION OF OXYGEN-ENRICHED MSW/SEWAGE SLUDGE CO-INCINERATION DEMONSTRATION PROGRAM

    EPA Science Inventory

    This report provides an evaluation of a two-phased demonstration program conducted for the U.S. Environmental Protection Agency's Municipal Solid Waste Innovative Technology Evaluation Program, and the results thereof, of a recently developed method of sewage sludge managemen...

  2. 7 CFR 3405.22 - Evaluation of program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Evaluation of program. 3405.22 Section 3405.22 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE HIGHER EDUCATION CHALLENGE GRANTS PROGRAM Supplementary Information § 3405.22 Evaluation of...

  3. 7 CFR 3405.22 - Evaluation of program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Evaluation of program. 3405.22 Section 3405.22 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE HIGHER EDUCATION CHALLENGE GRANTS PROGRAM Supplementary Information § 3405.22 Evaluation of...

  4. 7 CFR 3405.22 - Evaluation of program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Evaluation of program. 3405.22 Section 3405.22 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE HIGHER EDUCATION CHALLENGE GRANTS PROGRAM Supplementary Information § 3405.22 Evaluation of...

  5. 7 CFR 3405.22 - Evaluation of program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Evaluation of program. 3405.22 Section 3405.22 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE HIGHER EDUCATION CHALLENGE GRANTS PROGRAM Supplementary Information § 3405.22 Evaluation of...

  6. Evaluation of a PhD Program: Paving the Way.

    ERIC Educational Resources Information Center

    Germain, Carol P.; And Others

    1994-01-01

    During the evolution of an evaluation process for the University of Pennsylvania's doctoral program in nursing, a task force developed criteria and sources for program evaluation and surveyed students, alumni, and faculty to write a self-study report for external reviewers. (JOW)

  7. Leisure Today. Leisure Programming: The State of the Art.

    ERIC Educational Resources Information Center

    Busser, James A.; And Others

    1993-01-01

    Nine articles examine current topics in leisure programming, including program design and evaluation, program design through imagery, keys to quality leisure programming, programming with style, total quality program planning, evaluation of leisure programs, programming for older adults, and the intergenerational entrepreneurship demonstration…

  8. Who Is Afraid of Evaluation? Ethics in Evaluation Research as a Way to Cope with Excessive Evaluation Anxiety: Insights from a Case Study

    ERIC Educational Resources Information Center

    Bechar, Shlomit; Mero-Jaffe, Irit

    2014-01-01

    In this paper we share our reflections, as evaluators, on an evaluation where we encountered Excessive Evaluation Anxiety (XEA). The signs of XEA which we discerned were particularly evident amongst the program head and staff who were part of a new training program. We present our insights on the evaluation process and its difficulties, as well as…

  9. Alternative Aviation Jet Fuel Sustainability Evaluation Report Task 1 : Report Evaluating Existing Sustainability Evaluation Programs

    DOT National Transportation Integrated Search

    2011-10-25

    This report describes how existing biofuel sustainability evaluation programs meet requirements that are under consideration or are in early phases of adoption and implementation in various US and international contexts. Biofuel sustainability evalua...

  10. Evaluation and communication: using a communication audit to evaluate organizational communication.

    PubMed

    Hogard, Elaine; Ellis, Roger

    2006-04-01

    This article identifies a surprising dearth of studies that explicitly link communication and evaluation at substantive, theoretical, and methodological levels. A three-fold typology of evaluation studies referring to communication is proposed and examples given. The importance of organizational communication in program delivery is stressed and illustrative studies reviewed. It is proposed that organizational communication should be considered in all program evaluations and that this should be approached through communication audit. Communication audits are described with particular reference to established survey questionnaire instruments. Two case studies exemplify the use of such instruments in the evaluation of educational and social programs.

  11. Evaluation of a Community College's Nursing Faculty Advising Program Relative to Students' Satisfaction and Retention

    ERIC Educational Resources Information Center

    Harrell, Johnna C.; Reglin, Gary

    2018-01-01

    The problem was that the community college recognized a decline in student retention rates from 2009 to 2012 in the School of Nursing. The purpose of this program evaluation was to evaluate a faculty advising program (FAP) in the School of Nursing at a community college in regard to students' satisfaction and retention. The evaluation period was from Fall 2012 to…

  12. Title I of the Higher Education Act of 1965: Evaluation of the Present Program: Recommendations for the Future.

    ERIC Educational Resources Information Center

    Whipple, James B.

    In this document, which points out weaknesses in evaluation procedures and offers a new approach to the subject, it is suggested that in the area of the United States studied, the Title I program is drifting without direction, leadership, or system. This makes evaluation impossible. Evaluation is sometimes a description of a program and often…

  13. Is the Closet Door Still Closed in 2014? A CIPP Model Program Evaluation of Preservice Diversity Training Regarding LGBT Issues

    ERIC Educational Resources Information Center

    Woodruff, Joseph

    2014-01-01

    The purpose of this program evaluation was to examine the four components of the CIPP evaluation model (Context, Input, Process, and Product evaluations) in the diversity training program conceptualization and design delivered to College of Education K-12 preservice teachers at a large university in the southeastern United States (referred to in…

  14. Methods for evaluating a mature substance abuse prevention/early intervention program.

    PubMed

    Becker, L R; Hall, M; Fisher, D A; Miller, T R

    2000-05-01

    The authors describe methods for work in progress to evaluate four workplace prevention and/or early intervention programs designed to change occupational norms and reduce substance abuse at a major U.S. transportation company. The four programs are an employee assistance program, random drug testing, managed behavioral health care, and a peer-led intervention program. An elaborate mixed-methods evaluation combines data collection and analysis techniques from several traditions. A process-improvement evaluation focuses on the peer-led component to describe its evolution, document the implementation process for those interested in replicating it, and provide information for program improvement. An outcome-assessment evaluation examines impacts of the four programs on job performance measures (e.g., absenteeism, turnover, injury, and disability rates) and includes a cost-offset and employer cost-savings analysis. Issues related to using archival data, combining qualitative and quantitative designs, and working in a corporate environment are discussed.

  15. Using Art For Health Promotion: Evaluating an In-School Program Through Student Perspectives.

    PubMed

    McKay, Fiona H; McKenzie, Hayley

    2017-09-01

    The value of incorporating arts-based approaches into health promotion programs has long been recognized as useful in affecting change. Such approaches have been used in many schools across Australia and have been found to promote general well-being and mental health. Despite these positive findings, few programs have used or evaluated an integrated arts-based approach to achieve health and well-being goals. This article presents the findings of an evaluation of an integrated arts-based program focused on creativity and improving well-being in students. The findings of this evaluation suggest that students who took part in the program were more interested in art and music at the end of the program and had gained an overall increase in awareness and mindfulness and a positivity toward leisure activities. This evaluation provides some evidence to suggest that this type of program is a promising way to promote well-being in schools.

  16. Introduction of blended learning in a master program: Developing an integrative mixed method evaluation framework.

    PubMed

    Chmiel, Aviva S; Shaha, Maya; Schneider, Daniel K

    2017-01-01

    The aim of this research is to develop a comprehensive evaluation framework involving all actors in a higher education blended learning (BL) program. BL evaluation usually either focuses on students, faculty, technological or institutional aspects. Currently, no validated comprehensive monitoring tool exists that can support introduction and further implementation of BL in a higher education context. Starting from established evaluation principles and standards, concepts that were to be evaluated were firstly identified and grouped. In a second step, related BL evaluation tools referring to students, faculty and institutional level were selected. This allowed setting up and implementing an evaluation framework to monitor the introduction of BL during two succeeding recurrences of the program. The results of the evaluation allowed documenting strengths and weaknesses of the BL format in a comprehensive way, involving all actors. It has led to improvements at program, faculty and course level. The evaluation process and the reporting of the results proved to be demanding in time and personal resources. The evaluation framework allows measuring the most significant dimensions influencing the success of a BL implementation at program level. However, this comprehensive evaluation is resource intensive. Further steps will be to refine the framework towards a sustainable and transferable BL monitoring tool that finds a balance between comprehensiveness and efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Evaluating an English Language Teacher Education Program through Peacock's Model

    ERIC Educational Resources Information Center

    Coskun, Abdullah; Daloglu, Aysegul

    2010-01-01

    The main aim of this study is to draw attention to the importance of program evaluation for teacher education programs and to reveal the pre-service English teacher education program components that are in need of improvement or maintenance, from both teachers' and students' perspectives, by using Peacock's (2009) recent evaluation model in a…

  18. Using a Crystal Ball Instead of a Rear-View Mirror: Helping State Legislators Assess the Future Impacts of Major Federal Legislation

    ERIC Educational Resources Information Center

    Alter, Joel; Patterson, John

    2006-01-01

    Typically, program evaluation agencies in the legislative branch of state government examine programs that have already been implemented. These evaluations often consider whether a program achieved the legislature's original goals or complied with statutory requirements. Program evaluations frequently determine whether executive branch agencies…

  19. Participants' perceptions of the 1997-1998 Missouri State Parks Passport Program

    Treesearch

    Yi-Jin Ye; Jaclyn Card

    2002-01-01

    Service quality is increasingly important to park managers. Recreation and park evaluation measures the implementation and outcome of programs for decision-making. Decisions based on evaluations are often concerned with improving the quality of the program for participants. The purpose of the study was to evaluate the Missouri State Parks Passport Program (MSPPP) by...

  20. Evaluation of Turkish Education Programs

    ERIC Educational Resources Information Center

    Durmuscelebi, Mustafa

    2010-01-01

    The aim of this study is to evaluate, in light of teacher suggestions, the new Turkish education program that has been implemented gradually since 2005. The study used the survey (scanning) model. In this study, conducted with the purpose of evaluating the newly prepared Turkish education programs, the program has been tried to be…

  1. An Evaluation of On-Line, Interactive Tutorials Designed to Teach Practice Concepts

    ERIC Educational Resources Information Center

    Seabury, Brett A.

    2005-01-01

    This paper presents an evaluation of two on-line-based programs designed to teach practice skills. One program teaches crisis intervention and the other teaches suicide assessment. The evaluation of the use of these programs compares outcomes for two groups of students, one using the interactive program outside a class context and the other using…

  2. Evaluation of Selected New York City Umbrella Programs, 1974-1975 School Year.

    ERIC Educational Resources Information Center

    Fordham Univ., Bronx, NY. Inst. for Research and Evaluation.

    An evaluation of twelve different New York City Umbrella Programs coordinated in New York City public schools during the 1974-1975 school year is contained in this document. This report presents a description and evaluation of these programs, together with the major findings. The programs were implemented in the following areas: (1) tutoring in…

  3. Implementing and Evaluating a Rural Community-Based Sexual Abstinence Program: Challenges and Solutions

    ERIC Educational Resources Information Center

    Stauss, Kimberly; Boyas, Javier; Murphy-Erby, Yvette

    2012-01-01

    Informing both program evaluation and practice research, this paper describes lessons learned during the planning, implementation, and pilot phases of an abstinence education program based in a rural community in a southern state in the USA. Although a number of challenges can emerge in successfully implementing and evaluating such a program in a…

  4. 45 CFR 2516.840 - By what standards will the Corporation evaluate individual Learn and Serve America programs?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false By what standards will the Corporation evaluate individual Learn and Serve America programs? 2516.840 Section 2516.840 Public Welfare Regulations Relating to... Learn and Serve America programs? The Corporation will evaluate programs based on the following: (a) The...

  5. 45 CFR 2516.840 - By what standards will the Corporation evaluate individual Learn and Serve America programs?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false By what standards will the Corporation evaluate individual Learn and Serve America programs? 2516.840 Section 2516.840 Public Welfare Regulations Relating to... Learn and Serve America programs? The Corporation will evaluate programs based on the following: (a) The...

  6. 45 CFR 2516.840 - By what standards will the Corporation evaluate individual Learn and Serve America programs?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false By what standards will the Corporation evaluate individual Learn and Serve America programs? 2516.840 Section 2516.840 Public Welfare Regulations Relating to... Learn and Serve America programs? The Corporation will evaluate programs based on the following: (a) The...

  7. 45 CFR 2516.840 - By what standards will the Corporation evaluate individual Learn and Serve America programs?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false By what standards will the Corporation evaluate individual Learn and Serve America programs? 2516.840 Section 2516.840 Public Welfare Regulations Relating to... Learn and Serve America programs? The Corporation will evaluate programs based on the following: (a) The...

  8. ENVIRONMENTAL, ECONOMIC AND ENERGY IMPACTS OF MATERIAL RECOVERY FACILITIES - A MITE PROGRAM EVALUATION

    EPA Science Inventory

    This report documents an evaluation of the environmental, economic, and energy impacts of material recovery facilities (MRFs) conducted under the Municipal Solid Waste Innovative Technology Evaluation (MITE) Program. The MITE Program is sponsored by the U.S. Environmental Protecti...

  9. Evaluating Afterschool Programs

    ERIC Educational Resources Information Center

    Little, Priscilla M.

    2014-01-01

    Well-implemented afterschool programs can promote a range of positive learning and developmental outcomes. However, not all research and evaluation studies have shown the benefits of participation, in part because programs and their evaluation were out of sync. This chapter provides practical guidance on how to foster that alignment between…

  10. Evaluation of Career Development Programs from an Action Perspective.

    ERIC Educational Resources Information Center

    Young, Richard A.; Valach, Ladislav

    1994-01-01

    Presents action-theoretical approach to evaluation of career development programs based on constructionist epistemology. Propositions from action-theoretical perspective center around career and action as related, interpretative constructs. Propositions give rise to implications for evaluation of career programs that address ongoing nature of…

  11. Care management program evaluation: constituents, conflicts, and moves toward standardization.

    PubMed

    Long, D Adam; Perry, Theodore L; Pelletier, Kenneth R; Lehman, Gregg O

    2006-06-01

    Care management program evaluations bring together constituents from finance, medicine, and social sciences. The differing assumptions and scientific philosophies that these constituents bring to the task often lead to frustration and even contention. Given the forms and variations of care management programs, the difficulty associated with program outcomes measurement should not be surprising. It is no wonder then that methods for clinical and economic evaluations of program efficacy continue to be debated and have yet to be standardized. We describe these somewhat hidden processes, examine where the industry stands, and provide recommendations for steps to standardize evaluation methodology.

  12. Planning and Selecting Evaluation Designs for Leadership Training: A Toolkit for Nurse Managers and Educators.

    PubMed

    Dunne, Simon; Lunn, Cora; Kirwan, Marcia; Matthews, Anne; Condell, Sarah

    2015-01-01

    Leadership development training and education for nurses is a priority in modern health care systems. Consequently, effective evaluation of nurse leadership development programs is essential for managers and educators in health care organizations to determine the impact of such programs on staff behaviors and patient outcomes. Our team has identified a framework for the evaluation of the design and implementation of such programs. Following this, we provide practical tools for the selection of evaluation methodologies for leadership development programs for use by health care educators and program commissioners. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. The Practice of Health Program Evaluation.

    PubMed

    Lewis, Sarah R

    2017-11-01

    The Practice of Health Program Evaluation provides an overview of the evaluation process for public health programs while diving deeper to address select advanced concepts and techniques. The book unfolds evaluation as a three-phased process consisting of identification of evaluation questions, data collection and analysis, and dissemination of results and recommendations. The text covers research design and sampling methods, as well as quantitative and qualitative approaches. Types of evaluation are also discussed, including economic assessment and systems research as relative newcomers. Aspects critical to conducting a successful evaluation regardless of type or research design are emphasized, such as stakeholder engagement, validity and reliability, and adoption of sound recommendations. The book encourages evaluators to document their approach by developing an evaluation plan, a data analysis plan, and a dissemination plan, in order to help build consensus throughout the process. The text offers a good bird's-eye view of the evaluation process, while also guiding evaluation experts on how to navigate political waters and advocate for their findings to help effect change.

  14. Structuring an Internal Evaluation Process.

    ERIC Educational Resources Information Center

    Gordon, Sheila C.; Heinemann, Harry N.

    1980-01-01

    The design of an internal program evaluation system requires (1) formulation of program, operational, and institutional objectives; (2) establishment of evaluation criteria; (3) choice of data collection and evaluation techniques; (4) analysis of results; and (5) integration of the system into the mainstream of operations. (SK)

  15. Program Evaluation of Services for the Homeless: Challenges and Strategies.

    ERIC Educational Resources Information Center

    Mercier, Celine; And Others

    1992-01-01

    Research strategies, including types of evaluations, designs, and indicators, developed to assess programs for chronic alcoholics and mentally ill homeless people in Canada are reviewed. Findings from previous evaluations are summarized, and the implications for evaluation practice are considered. (SLD)

  16. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  17. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  18. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  19. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  20. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  1. Some Methods for Evaluating Program Implementation.

    ERIC Educational Resources Information Center

    Hardy, Roy A.

    An approach to evaluating program implementation is described. This approach involves developing a project description that includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…

  2. Program Evaluation: A Review and Synthesis.

    ERIC Educational Resources Information Center

    Webber, Charles F.

    This paper reviews models of program evaluation. Major topics and issues found in the evaluation literature include quantitative versus qualitative approaches, identification and involvement of stakeholders, formulation of research questions, collection of data, analysis and interpretation of data, reporting of results, evaluation utilization, and…

  3. Evaluating Cross-Cutting Approaches to Chronic Disease Prevention and Management: Developing a Comprehensive Evaluation

    PubMed Central

    Jernigan, Jan; Barnes, Seraphine Pitt; Shea, Pat; Davis, Rachel; Rutledge, Stephanie

    2017-01-01

    We provide an overview of the comprehensive evaluation of State Public Health Actions to Prevent and Control Diabetes, Heart Disease, Obesity and Associated Risk Factors and Promote School Health (State Public Health Actions). State Public Health Actions is a program funded by the Centers for Disease Control and Prevention to support the statewide implementation of cross-cutting approaches to promote health and prevent and control chronic diseases. The evaluation addresses the relevance, quality, and impact of the program by using 4 components: a national evaluation, performance measures, state evaluations, and evaluation technical assistance to states. Challenges of the evaluation included assessing the extent to which the program contributed to changes in the outcomes of interest and the variability in the states’ capacity to conduct evaluations and track performance measures. Given the investment in implementing collaborative approaches at both the state and national level, achieving meaningful findings from the evaluation is critical. PMID:29215974

  4. The effects of stakeholder involvement on perceptions of an evaluation's credibility.

    PubMed

    Jacobson, Miriam R; Azzam, Tarek

    2018-06-01

    This article presents a study of the effects of stakeholder involvement on perceptions of an evaluation's credibility. Crowdsourced members of the public and a group of educational administrators read a description of a hypothetical program and two evaluations of the program: one conducted by a researcher and one conducted by program staff (i.e. program stakeholders). Study participants were randomly assigned versions of the scenario with different levels of stakeholder credibility and types of findings. Results showed that both samples perceived the researcher's evaluation findings to be more credible than the program staff's, but that this difference was significantly reduced when the program staff were described to be highly credible. The article concludes with implications for theory and research on evaluation dissemination and stakeholder involvement. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Evaluation of a cross-sector community initiative partnership: delivering a local sport program.

    PubMed

    Kihl, Lisa A; Tainsky, Scott; Babiak, Kathy; Bang, Hyejin

    2014-06-01

    Corporate community initiatives (CCIs) are often established via cross-sector partnerships with nonprofit agencies to address critical social problems. While there is a growing body of literature exploring the effectiveness and social impact of these partnerships, there is limited evaluative research on the implementation and execution processes of CCIs. In this paper, we examined the implementation and operational processes in the delivery of a professional sport organization's CCI using program theory evaluation. The findings showed discrepancies between the associate organization and the implementers in understanding and fulfilling responsibilities for certain aspects of the service delivery protocol (maintaining accurate records and program marketing). Although program stakeholders were satisfied overall with program delivery, contradictions in their satisfaction with critical components of service delivery (marketing and communications) were found. We conclude that ongoing evaluations are necessary to pinpoint the source of such discrepancies, and that all partners should value process evaluation in addition to outcome evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Evaluation in the Context of the Government Market Place: Implications for the Evaluation of Research

    ERIC Educational Resources Information Center

    Della-Piana, Connie Kubo; Della-Piana, Gabriel M.

    2007-01-01

    While the current debate in the evaluation community has concentrated on examining and explicating implications of the choice of methods for evaluating federal programs, the authors of this paper address the challenges faced by the government in the selection of funding mechanisms for supporting program evaluation efforts. The choice of funding…

  7. How To Design a Program Evaluation. Program Evaluation Kit, 3.

    ERIC Educational Resources Information Center

    Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons

    The evaluation design, which prescribes when and from whom to gather data, is described as significant because it can lend credibility to evaluation results. The objectives of this booklet are to aid the evaluator in: choosing a design; putting it into operation; and analyzing and reporting the data. Examples include both formative and summative…

  8. The Role for an Evaluator: A Fundamental Issue for Evaluation of Education and Social Programs

    ERIC Educational Resources Information Center

    Luo, Heng

    2010-01-01

    This paper discusses one of the fundamental issues in education and social program evaluation: the proper role for an evaluator. Based on respective and comparative analysis of five theorists' positions on this fundamental issue, this paper reveals how different perspectives on other fundamental issues in evaluation such as value, methods, use and…

  9. The Impact of Evaluation: Lessons Drawn from the Evaluations of Five Early Childhood Education Programs.

    ERIC Educational Resources Information Center

    Granville, Arthur C.; And Others

    Five different program evaluations were described to indicate those qualities which make an evaluation effective or not effective. Evaluation effectiveness was defined as impact on decision making or long-term policy formation, and influence upon a variety of audiences. Robert D. Matz described the First Chance Project, and concluded that the…

  10. Evaluation of the Brownfields Program

    EPA Pesticide Factsheets

    The Evaluation of 2003-2008 Brownfields Assessment, Revolving Loan Fund, and Cleanup Grants is the first national program evaluation of the outcomes, efficiencies, and economic benefits produced by Brownfields grants.

  11. Improving Accreditor's Evaluation of Experiential Learning Programs.

    ERIC Educational Resources Information Center

    Keeton, Morris T.

    1980-01-01

    Principles of good practice in assessing experiential learning include better self-evaluation of the learning outcomes of experiential components, systematic program auditing, and training of external evaluators. (SK)

  12. Toward building a typology for the evaluation of services in family support programs.

    PubMed

    Manalo, V; Meezan, W

    2000-01-01

    This article briefly reviews the history, philosophy, practice principles, and foci of family support programs, examines the typologies currently in use to classify these programs, and discusses the difficulties these classifications pose for program evaluators. The authors introduce a new typology that deconstructs family support programs into their component services and discuss the potential of this typology for evaluation of family support services.

  13. Competency-based goals, objectives, and linked evaluations for rheumatology training programs: a standardized template of learning activities from the Carolinas Fellows Collaborative.

    PubMed

    Criscione-Schreiber, Lisa G; Bolster, Marcy B; Jonas, Beth L; O'Rourke, Kenneth S

    2013-06-01

    Accreditation Council for Graduate Medical Education program requirements mandate that rheumatology training programs have written goals, objectives, and performance evaluations for each learning activity. Since learning activities are similar across rheumatology programs, we aimed to create competency-based goals and objectives (CBGO) and evaluations that would be generalizable nationally. Through an established collaboration of the 4 training programs' directors in North Carolina and South Carolina, we collaboratively composed CBGO and evaluations for each learning activity for rheumatology training programs. CBGO and linked evaluations were written using appropriate verbs based on Bloom's taxonomy. Draft documents were peer reviewed by faculty at the 4 institutions and by members of the American College of Rheumatology (ACR) Clinician Scholar Educator Group. We completed templates of CBGO for core and elective rotations and conferences. Templates detail progressive fellow performance improvement appropriate to educational level. Specific CBGO are mirrored in learning activity evaluations. Templates are easily modified to fit individual program attributes, have been successfully implemented by our 4 programs, and have proven their value in 4 residency review committee reviews. We propose adoption of these template CBGO by the ACR, with access available to all rheumatology training programs. Evaluation forms that exactly reflect stated objectives ensure that trainees are assessed using standardized measures and that trainees are aware of the learning expectations. The objectives mirrored in the evaluations closely align with the proposed milestones for internal medicine training, and will therefore be a useful starting point for creating these milestones in rheumatology. Copyright © 2013 by the American College of Rheumatology.

  14. An Evaluation of the Right Choices Program to Determine Effectiveness in Delivering Constructive Interventions and Providing an Early Support Program in Order to Modify Behavior of First-Time Student Offenders Who Commit Drug and Violent Acts

    ERIC Educational Resources Information Center

    Barnes, Lisa B.

    2010-01-01

    The purpose of the study was to perform a program evaluation of the Right Choices Program to determine the program's effectiveness in delivering constructive interventions that modify student behavior once students have left the program and have returned to their regular learning environment. This mixed-method evaluation consisted of an…

  15. Improving the Impact and Implementation of Disaster Education Programs for Children Through Theory-Based Evaluation.

    PubMed

    Johnson, Victoria A; Ronan, Kevin R; Johnston, David M; Peace, Robin

    2016-11-01

    A main weakness in the evaluation of disaster education programs for children is evaluators' propensity to judge program effectiveness based on changes in children's knowledge. Few studies have articulated an explicit program theory of how children's education would achieve desired outcomes and impacts related to disaster risk reduction in households and communities. This article describes the advantages of constructing program theory models for the purpose of evaluating disaster education programs for children. Following a review of some potential frameworks for program theory development, including the logic model, the program theory matrix, and the stage step model, the article provides working examples of these frameworks. The first example is the development of a program theory matrix used in an evaluation of ShakeOut, an earthquake drill practiced in two Washington State school districts. The model illustrates a theory of action; specifically, the effectiveness of school earthquake drills in preventing injuries and deaths during disasters. The second example is the development of a stage step model used for a process evaluation of What's the Plan Stan?, a voluntary teaching resource distributed to all New Zealand primary schools for curricular integration of disaster education. The model illustrates a theory of use; specifically, expanding the reach of disaster education for children through increased promotion of the resource. The process of developing the program theory models for the purpose of evaluation planning is discussed, as well as the advantages and shortcomings of the theory-based approaches. © 2015 Society for Risk Analysis.

  16. Creating a Minnesota Statewide SNAP-Ed Program Evaluation

    ERIC Educational Resources Information Center

    Gold, Abby; Barno, Trina Adler; Sherman, Shelley; Lovett, Kathleen; Hurtado, G. Ali

    2013-01-01

    Systematic evaluation is an essential tool for understanding program effectiveness. This article describes the pilot test of a statewide evaluation tool for the Supplemental Nutrition Assistance Program-Education (SNAP-Ed). A computer algorithm helped Community Nutrition Educators (CNEs) build surveys specific to their varied educational settings…
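    The record above mentions a computer algorithm that helps educators assemble surveys matched to their educational settings. As a minimal sketch of how such a survey builder might work (all names and questions here are illustrative assumptions, not the actual SNAP-Ed tool), questions in a shared bank can be tagged with the settings they apply to and filtered per setting:

    ```python
    # Hypothetical sketch of a setting-specific survey builder.
    # QUESTION_BANK and build_survey are illustrative, not the real SNAP-Ed system.

    QUESTION_BANK = [
        {"text": "How often do you eat vegetables?", "settings": {"youth", "adult"}},
        {"text": "Do you compare unit prices when shopping?", "settings": {"adult"}},
        {"text": "Can you name three healthy snacks?", "settings": {"youth"}},
    ]

    def build_survey(setting, bank=QUESTION_BANK):
        """Return the question texts that apply to the given educational setting."""
        return [q["text"] for q in bank if setting in q["settings"]]

    adult_survey = build_survey("adult")  # two questions tagged for adults
    ```

    Tagging a common question bank this way keeps results comparable statewide while letting each educator field only the questions relevant to their audience.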

  17. Planning, Implementation, and Evaluation of AIDS Education Programs for Dentists.

    ERIC Educational Resources Information Center

    Gerbert, Barbara; And Others

    1991-01-01

    An office-based continuing education program on acquired immune deficiency syndrome (AIDS) for dentists is described, including needs assessment, model development, local piloting, national implementation with 119 dentists, and evaluation phases. Program evaluation indicated that improvements in risk perception, knowledge, and practice resulted, but…

  18. Evaluation Project of a Postvention Program.

    ERIC Educational Resources Information Center

    Simon, Robert; And Others

    A student suicide or parasuicide increases the risk that potentially suicidal teenagers see suicide as an enviable option. The "copycat effect" can be reduced by a postvention program. This proposed evaluative research project will provide an implementation and impact evaluation of a school's postvention program following a suicide or…

  19. Evaluation of Prevention Programs: A Basic Guide for Practitioners.

    ERIC Educational Resources Information Center

    Moberg, D. Paul

    This guide is intended for professionals, laypersons, funding agents and others involved in planning and delivering local prevention services. Chapter 1 defines prevention, and differentiates between prevention strategies and programs targeted toward individuals or to general populations. Program evaluation and evaluation research are defined and…

  20. Critical Measurement Issues in Translational Research

    ERIC Educational Resources Information Center

    Glasgow, Russell E.

    2009-01-01

    This article summarizes critical evaluation needs, challenges, and lessons learned in translational research. Evaluation can play a key role in enhancing successful application of research-based programs and tools as well as informing program refinement and future research. Discussion centers on what is unique about evaluating programs and…

  1. Children's Programs: A Comparative Evaluation Framework and Five Illustrations. Briefing Report to the Ranking Minority Member, Select Committee on Children, Youth, and Families, House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Program Evaluation and Methodology Div.

    This general program evaluation framework provides a wide range of criteria that can be applied in the evaluation of diverse federal programs. The framework was developed from a literature search on program evaluation methods and their use, the experiences of the United States Government Accounting Office (GAO), and consideration of the types of…

  2. A Self-Instructional Course in Student Financial Aid Administration. Module 17--Evaluation of Student Aid Management: Self-Evaluation, Audit, and Program Review. Second Edition.

    ERIC Educational Resources Information Center

    Washington Consulting Group, Inc., Washington, DC.

    The 17th module in the 17-module self-instructional course on student financial aid administration discusses the evaluation of student aid management in terms of self-evaluation, audit, and program review. The full course offers a systematic introduction to the management of federal financial aid programs authorized by Title IV of the Higher…

  3. Empowerment evaluation with programs designed to prevent first-time male perpetration of sexual violence.

    PubMed

    Noonan, Rita K; Gibbs, Deborah

    2009-01-01

    This special issue captures several threads in the ongoing evolution of sexual violence prevention. The articles that follow examine an empowerment evaluation process with four promising programs dedicated to preventing first-time male perpetration of sexual violence, as well as evaluation findings. Both the evaluation approach and the programs examined shed light on how sexual violence prevention can continue to be improved in the future.

  4. Work Group on American Indian Research and Program Evaluation Methodology, Symposium on Research and Evaluation Methodology: Lifespan Issues Related to American Indians/Alaska Natives with Disabilities (Washington, DC, April 26-27, 2002).

    ERIC Educational Resources Information Center

    Davis, Jamie D., Ed.; Erickson, Jill Shepard, Ed.; Johnson, Sharon R., Ed.; Marshall, Catherine A., Ed.; Running Wolf, Paulette, Ed.; Santiago, Rolando L., Ed.

    This first symposium of the Work Group on American Indian Research and Program Evaluation Methodology (AIRPEM) explored American Indian and Alaska Native cultural considerations in relation to "best practices" in research and program evaluation. These cultural considerations include the importance of tribal consultation on research…

  5. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  6. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  7. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  8. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  9. Handbook in Evaluating with Photography.

    ERIC Educational Resources Information Center

    Templin, Patricia A.

    This handbook is intended to help educational evaluators use still photography in designing, conducting, and reporting evaluations of educational programs. It describes techniques for using a visual documentary approach to program evaluation that features data collected with a camera. The emphasis is on the aspects of educational evaluation…

  10. Camp's "Disneyland" Effect.

    ERIC Educational Resources Information Center

    Renville, Gary

    1999-01-01

    Describes the positive mental, physical, and social growth impacts that the camping experience had on the author, and urges camp program evaluation to plan and implement such changes. Sidebar lists steps of effective evaluation: program goals and objectives, goals of evaluation, implementation of evaluation, data analysis, and findings and…

  11. Computer Aided Instruction in Teaching Program Evaluation.

    ERIC Educational Resources Information Center

    Dowell, David A.; Binette, Holly A. Lizotte

    This paper reports the results of two semesters of experience using computer-assisted instruction (CAI) to teach topics in program evaluation to undergraduate and graduate psychology students at California State University, Long Beach. (The topics addressed are models of evaluation, evaluability assessment, needs assessment, experimental and…

  12. Evaluating AIDS Prevention: Contributions of Multiple Disciplines.

    ERIC Educational Resources Information Center

    Leviton, Laura C., Ed.; And Others

    1990-01-01

Seven essays on efforts to evaluate prevention programs aimed at acquired immune deficiency syndrome (AIDS) are presented. Topics include public health psychology, mathematical models of epidemiology, estimates of incubation periods, ethnographic evaluations of AIDS prevention programs, an AIDS education model, theory-based evaluation, and…

  13. Iterative evaluation in a mobile counseling and testing program to reach people of color at risk for HIV--new strategies improve program acceptability, effectiveness, and evaluation capabilities.

    PubMed

    Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles

    2011-06-01

    This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention's HIV testing recommendations.

  14. Program Implementers' Evaluation of the Project P.A.T.H.S.: Findings Based on Different Datasets over Time

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2012-01-01

    This paper integrates the evaluation findings based on program implementers in nine datasets collected from 2005 to 2009 (244 schools and 7,926 implementers). Using consolidated data with schools as the unit of analysis, results showed that program implementers generally had positive perceptions of the program, themselves, and benefits of the program, with more than four-fifths of the implementers regarding the program as beneficial to the program participants. The subjective outcome evaluation instrument was found to be internally consistent. Multiple regression analyses revealed that perceived qualities of the program and program implementers predicted perceived effectiveness of the program. In conjunction with evaluation findings based on other sources, the present study provides support for the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong. PMID:22629224
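The multiple regression step described in this abstract, predicting perceived program effectiveness from perceived qualities of the program and of the implementers, can be sketched as follows. The data and variable names here are synthetic and illustrative, not the authors' dataset; only the sample size (244 schools as the unit of analysis) is taken from the abstract.

```python
# Illustrative sketch (synthetic data): ordinary least squares regression
# of perceived effectiveness on perceived program quality and perceived
# implementer quality, mirroring the analysis described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 244  # schools as the unit of analysis, per the abstract

program_quality = rng.normal(4.0, 0.5, n)      # hypothetical rating-scale scores
implementer_quality = rng.normal(4.2, 0.4, n)  # hypothetical rating-scale scores
effectiveness = (0.6 * program_quality + 0.3 * implementer_quality
                 + rng.normal(0, 0.3, n))      # simulated outcome

# Design matrix with an intercept column; least-squares fit
X = np.column_stack([np.ones(n), program_quality, implementer_quality])
coef, *_ = np.linalg.lstsq(X, effectiveness, rcond=None)
print(coef)  # intercept followed by the two slope estimates
```

With both predictors built into the simulated outcome, both slope estimates come out positive, the qualitative pattern the abstract reports.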

  15. Reading/Writing and Mathematics Instruction. CIPP Planning/Evaluation Report 95-018. Focus on Program Evaluation.

    ERIC Educational Resources Information Center

    Des Moines Public Schools, IA. Dept. of Information Management.

    The Chapter 1 reading, writing, and mathematics instruction programs of the Des Moines (Iowa) public schools were evaluated for the 1993-94 school year. These programs provided supplemental instruction for about 2,968 students in 1993-94 through six components: (1) schoolwide projects; (2) the Reading Recovery Program; (3) the Reading/Writing Lab…

  16. Qualitative Evaluation of Emotional Intelligence In-Service Program for Secondary School Teachers

    ERIC Educational Resources Information Center

    Fer, Seval

    2004-01-01

    This paper is an attempt to evaluate the Emotional Intelligence (EQ) In-Service Program on the basis of experiences of 20 secondary school teachers who attended the program in a private school in Turkey. A phenomenological approach, with a focus group method was used. The first objective of this study was to evaluate EQ program on the basis of…

  17. Impact Evaluation of the U.S. Department of Education's Student Mentoring Program. Final Report. NCEE 2009-4047

    ERIC Educational Resources Information Center

    Bernstein, Lawrence; Rappaport, Catherine Dun; Olsho, Lauren; Hunt, Dana; Levin, Marjorie

    2009-01-01

    This report summarizes the findings from a national evaluation of mentoring programs funded under the U.S. Department of Education's Student Mentoring Program. The impact evaluation used an experimental design in which students were randomly assigned to a treatment or control group. Thirty-two purposively selected School Mentoring Programs and…

  18. Evaluation of the Cosmetology Program at Caldwell Community College and Technical Institute--Fall, 1981.

    ERIC Educational Resources Information Center

    Pipes, V. David

    In fall 1981, the cosmetology program at Caldwell Community College and Technical Institute (CCC&TI) was evaluated as part of a process to create a model for the periodic evaluation of all occupational programs at the school. In addition to collecting information for planning and program improvement, the study sought to assess the achievement of…

  19. Process Evaluation of a School-Based Weight Gain Prevention Program: The Dutch Obesity Intervention in Teenagers (DOiT)

    ERIC Educational Resources Information Center

    Singh, A. S.; Chinapaw, M. J. M.; Brug, J.; van Mechelen, W.

    2009-01-01

    Health promotion programs benefit from an accompanying process evaluation since it can provide more insight in the strengths and weaknesses of a program. A process evaluation was conducted to assess the reach, implementation, satisfaction and maintenance of a school-based program aimed at the prevention of excessive weight gain among Dutch…

  20. Assessment of the Teaching Behavior of the Instructors of an Out-of-School Program

    ERIC Educational Resources Information Center

    Rodríguez-Naveiras, Elena; Borges, África

    2015-01-01

    Out-of-school programs for students with high abilities are especially relevant when their needs are not covered in formal education. The evaluation of these programs is essential and it can be carried out from different evaluative approaches. The evaluation of the behavior of the people who implement the programs is an important aspect in the…

  1. A Program Evaluation of a Special Education Day School for Students with Emotional and Behavioral Disabilities

    ERIC Educational Resources Information Center

    Klein-Lombardo, Lucinda

    2012-01-01

    The purpose of this study was the program evaluation of a special education day school compared to a set of best practice standards for school programs for students with emotional and behavioral disorders. This evaluation will enable the organization to make decisions about which aspects of the program to continue, strengthen, or discontinue. …

  2. Governor's Educator Excellence Grant (GEEG) Program: Year One Evaluation Report. Policy Evaluation Report

    ERIC Educational Resources Information Center

    Springer, Matthew G.; Podgursky, Michael J.; Lewis, Jessica L.; Ehlert, Mark W.; Gardner, Catherine G.; Ghoshdastidar, Bonnie; Lopez, Omar S.; Patterson, Christine H.; Taylor, Lori L.

    2007-01-01

    This report presents findings stemming from the first-year evaluation of the Governor's Educator Excellence Grant (GEEG) program, one of several statewide performance incentive programs in Texas. In the fall of 2006, the GEEG program made available non-competitive, three-year grants to 99 schools ranging from $60,000 to $220,000 per year. Grants…

  3. Assessment and Evaluation of Primary Prevention in Spinal Cord Injury

    PubMed Central

    2013-01-01

    Although the incidence of spinal cord injury (SCI) is low, the consequences of this disabling condition are extremely significant for the individual, family, and the community. Sequelae occur in the physical, psychosocial, sexual, and financial arenas, making global prevention of SCI crucial. Understanding how to assess and evaluate primary prevention programs is an important competency for SCI professionals. Assessing a program’s success requires measuring processes, outcomes, and impact. Effective evaluation can lead future efforts for program design while ensuring accountability for the program itself. The intended impact of primary prevention programs for SCI is to decrease the number of individuals who sustain traumatic injury; many programs have process and outcome goals as well. An understanding of the basic types of evaluation, evaluation design, and the overall process of program evaluation is essential for ensuring that these programs are efficacious. All health care professionals have the opportunity to put prevention at the forefront of their practice. With the current paucity of available data, it is important that clinicians share their program design, their successes, and their failures so that all can benefit and future injury can be prevented. PMID:23678281

  4. Development and application of course-embedded assessment system for program outcome evaluation in the Korean nursing education: A pilot study.

    PubMed

    Park, Jee Won; Seo, Eun Ji; You, Mi-Ae; Song, Ju-Eun

    2016-03-01

Program outcome evaluation is important because it is an indicator of educational quality. Course-embedded assessment is one program outcome evaluation method; however, it is rarely used in Korean nursing education. The study purpose was to develop a course-embedded assessment system, apply it preliminarily to evaluate one program outcome, and share our experiences. This was a methodological study to develop and apply the course-embedded assessment system, based on a theoretical framework, in one nursing program in South Korea. Scores for 77 students generated from the three practicum courses were used. The course-embedded assessment system was developed following the six steps suggested by Han's model: 1) one program outcome of the undergraduate program, "nursing process application ability", was selected; 2) the three clinical practicum courses related to the selected program outcome were identified; 3) evaluation tools, including a rubric and items, were selected for outcome measurement; 4) a performance criterion, the educational goal level for the program, was established; 5) the program outcome was evaluated using the rubric and evaluation items in the three practicum courses; and 6) the obtained scores were analyzed to identify the achievement rate, which was compared with the performance criterion. Achievement rates for the selected program outcome in the adult, maternity, and pediatric nursing practicums were 98.7%, 100%, and 66.2% for the case report; 100% for all three in the clinical practice; and 100%, 100%, and 87%, respectively, for the conference. These are considered satisfactory levels when compared with the performance criterion of "at least 60%". Course-embedded assessment can be used as an effective and economical method to evaluate program outcomes without additionally running an integrative course. Further studies to develop course-embedded assessment systems for other program outcomes in nursing education are needed. Copyright © 2016 Elsevier Ltd. All rights reserved.
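The final step of the course-embedded assessment described in this abstract, comparing each course's achievement rate against the performance criterion of "at least 60%", is simple enough to sketch directly. The rates below are those the abstract reports for the case-report component; the dictionary layout is illustrative.

```python
# Minimal sketch of step 6 of the course-embedded assessment: compare each
# course's achievement rate (percent) against the performance criterion.
CRITERION = 60.0  # percent, per the abstract's "at least 60% or more"

# Case-report achievement rates reported in the abstract
case_report_rates = {"adult": 98.7, "maternity": 100.0, "pediatric": 66.2}

# A course outcome is satisfactory when its rate meets the criterion
results = {course: rate >= CRITERION
           for course, rate in case_report_rates.items()}

for course, met in results.items():
    print(f"{course}: {'met' if met else 'not met'}")
```

All three courses clear the 60% criterion, which is why the abstract judges the levels satisfactory even though the pediatric rate (66.2%) is well below the other two.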

  5. Lessons learned using a values-engaged approach to attend to culture, diversity, and equity in a STEM program evaluation.

    PubMed

    Boyce, Ayesha S

    2017-10-01

    Evaluation must attend meaningfully and respectfully to issues of culture, race, diversity, power, and equity. This attention is especially critical within the evaluation of science, technology, engineering, and mathematics (STEM) educational programming, which has an explicit agenda of broadening participation. The purpose of this article is to report lessons learned from the implementation of a values-engaged, educative (Greene et al., 2006) evaluation within a multi-year STEM education program setting. This meta-evaluation employed a case study design using data from evaluator weekly systematic reflections, review of evaluation and program artifacts, stakeholder interviews, and peer review and assessment. The main findings from this study are (a) explicit attention to culture, diversity, and equity was initially challenged by organizational culture and under-developed evaluator-stakeholder professional relationship and (b) evidence of successful engagement of culture, diversity, and equity emerged in formal evaluation criteria and documents, and informal dialogue and discussion with stakeholders. The paper concludes with lessons learned and implications for practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Promoting Excellence in Nursing Education (PENE): Pross evaluation model.

    PubMed

    Pross, Elizabeth A

    2010-08-01

    The purpose of this article is to examine the Promoting Excellence in Nursing Education (PENE) Pross evaluation model. A conceptual evaluation model, such as the one described here, may be useful to nurse academicians in the ongoing evaluation of educational programs, especially those with goals of excellence. Frameworks for evaluating nursing programs are necessary because they offer a way to systematically assess the educational effectiveness of complex nursing programs. This article describes the conceptual framework and its tenets of excellence. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Employer evaluations of nurse graduates: a critical program assessment element.

    PubMed

    Ryan, M E; Hodson, K E

    1992-05-01

    Accountability in higher education dictates implementation of a comprehensive evaluation plan. Employer evaluation of graduates is an important component of program evaluation and contributes a different view that is rarely reported in the literature. The purpose of this study was to establish a database by surveying employers of baccalaureate-prepared nurses, postgraduation, over a five-year period. Employer surveys measured perceptions of graduates' functioning. Findings indicated that graduates function above expected levels for leadership skills, nursing skills, communication skills, and professionalism. Systematic program evaluation by employers is recommended at one and five years after graduation. A tool for employer evaluation of baccalaureate graduates is discussed.

  8. 78 FR 66905 - Agency Information Collection Activities; Comment Request; Evaluating the Retired Mentors for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-07

    ...; Comment Request; Evaluating the Retired Mentors for Teachers Program AGENCY: Institute of Education... considered public records. Title of Collection: Evaluating the Retired Mentors for Teachers Program. OMB... study of the Retired Mentors for New Teachers program for probationary teachers developed by the Aurora...

  9. Guidance and Counseling Program K-12. Report of Evaluation.

    ERIC Educational Resources Information Center

    Roberts, Joan; Weslander, Darrell L.

    This evaluation of the Des Moines school guidance program provides an introductory history of guidance and counseling services in the district. General results of the evaluation focus on changing counselor roles, and problems caused by lack of specific program guidelines. Counselor profiles and interviews are presented, with an analysis of…

  10. Interpreting Outcomes: Using Focus Groups in Evaluation Research

    ERIC Educational Resources Information Center

    Ansay, Sylvia J.; Perkins, Daniel F.; Nelson, John

    2004-01-01

    Although focus groups continue to gain popularity in marketing and social science research, their use in program evaluation has been limited. Here we demonstrate how focus groups can benefit evaluators, program staff, policy makers and administrators by providing an in-depth understanding of program effectiveness from the perspective of…

  11. Planning Educational Programs: An Evaluation Report for the Occupational Exploration Program.

    ERIC Educational Resources Information Center

    Altschuld, James W.; Pritz, Sandra

    The evaluation report is one of seven produced for the Occupational Exploration Program (OEP), a series of simulated occupational experiences designed for junior high school students. Describing the pilot testing of the simulation dealing with education, the report contains sections describing the simulation context, evaluation procedures,…

  12. E-Basics: Online Basic Training in Program Evaluation

    ERIC Educational Resources Information Center

    Silliman, Ben

    2016-01-01

    E-Basics is an online training in program evaluation concepts and skills designed for youth development professionals, especially those working in nonformal science education. Ten hours of online training in seven modules is designed to prepare participants for mentoring and applied practice, mastery, and/or team leadership in program evaluation.…

  13. Afterschool Alliance Backgrounder: Formal Evaluations of Afterschool Programs.

    ERIC Educational Resources Information Center

    Afterschool Alliance, Washington, DC.

    Noting that various types of evaluations of after-school programming conducted over the last several years have provided useful information to providers and to policymakers, this report summarizes the lessons learned from independent evaluations of after-school programs. The following overall findings are supported with a delineation of findings…

  14. Program Evaluation: Where Instruction and School Business Meet

    ERIC Educational Resources Information Center

    Ayers, Steven V.

    2011-01-01

    In this article, the author talks about program evaluation, a strategy commonly used by instructional leaders that can help school business officials improve their budget process. As districts struggle to develop budgets in these challenging economic times, school business officials might consider turning to program evaluation for help. Program…

  15. Manipulator system man-machine interface evaluation program. [technology assessment

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Kirkpatrick, M.; Shields, N. L.

    1974-01-01

    Application and requirements for remote manipulator systems for future space missions were investigated. A manipulator evaluation program was established to study the effects of various systems parameters on operator performance of tasks necessary for remotely manned missions. The program and laboratory facilities are described. Evaluation criteria and philosophy are discussed.

  16. Evaluation of a Bullying Prevention Program

    ERIC Educational Resources Information Center

    Hallford, Abby; Borntrager, Cameo; Davis, Joanne L.

    2006-01-01

    In order to address the federal "No Child Left Behind Act", the state of Oklahoma required that all public schools address the problem of bullying. Although numerous anti-bullying programs exist, few have been evaluated to determine their effectiveness. The present study evaluated the effectiveness of one such program,…

  17. Interrupted Time Series Analysis: A Research Technique for Evaluating Social Programs for the Elderly

    ERIC Educational Resources Information Center

    Calsyn, Robert J.; And Others

    1977-01-01

    After arguing that treatment programs for the elderly need to be evaluated with better research designs, the authors illustrate how interrupted time series analysis can be used to evaluate programs for the elderly when random assignment to experimental and control groups is not possible. (Author)
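Interrupted time series designs of the kind this article advocates are commonly analyzed with segmented regression: a level-change term and a slope-change term are added at the intervention point. The sketch below uses synthetic monthly data; the variable names and effect sizes are illustrative assumptions, not taken from the article.

```python
# Hedged sketch of interrupted time series analysis via segmented
# regression on synthetic data. `post` captures the level change and
# `time_since` the slope change at the intervention point.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24)                          # e.g. 24 monthly observations
intervention = 12                          # program introduced at month 12
post = (t >= intervention).astype(float)   # 0 before, 1 after intervention
time_since = np.where(t >= intervention, t - intervention, 0.0)

# Simulated outcome: baseline trend plus a level jump and a slope change
y = 10 + 0.2 * t + 3.0 * post + 0.5 * time_since + rng.normal(0, 0.4, 24)

# OLS fit of the segmented model: y ~ 1 + t + post + time_since
X = np.column_stack([np.ones_like(t, dtype=float), t, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
print(level_change, slope_change)
```

Because the pre-intervention segment serves as its own comparison, this design can detect program effects even when, as the authors note, random assignment to experimental and control groups is not possible.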

  18. The Logic of Evaluation.

    ERIC Educational Resources Information Center

    Welty, Gordon A.

    The logic of the evaluation of educational and other action programs is discussed from a methodological viewpoint. However, no attempt is made to develop methods of evaluating programs. In Part I, the structure of an educational program is viewed as a system with three components--inputs, transformation of inputs into outputs, and outputs. Part II…

  19. Foreign Language K-12. Program Evaluation 1991-92.

    ERIC Educational Resources Information Center

    Wadden, Jerry M.

    The Des Moines (Iowa) Public Schools foreign language program for K-12 is described and evaluated. The evaluation report focuses on six areas, including: (1) school district mission and philosophy of foreign language instruction; (2) context (state policies and standards, foreign language program overview and enrollment, fiber-optic communication…

  20. 10 CFR 26.419 - Suitability and fitness evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Suitability and fitness evaluations. 26.419 Section 26.419 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS FFD Program for Construction § 26.419 Suitability and fitness evaluations. Licensees and other entities who implement FFD programs under this...

  1. 10 CFR 26.419 - Suitability and fitness evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Suitability and fitness evaluations. 26.419 Section 26.419 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS FFD Program for Construction § 26.419 Suitability and fitness evaluations. Licensees and other entities who implement FFD programs under this...

  2. 10 CFR 26.419 - Suitability and fitness evaluations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Suitability and fitness evaluations. 26.419 Section 26.419 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS FFD Program for Construction § 26.419 Suitability and fitness evaluations. Licensees and other entities who implement FFD programs under this...

  3. Evaluating Injury Prevention Programs: The Oklahoma City Smoke Alarm Project.

    ERIC Educational Resources Information Center

    Mallonee, Sue

    2000-01-01

    Illustrates how evaluating the Oklahoma City Smoke Alarm Project increased its success in reducing residential fire-related injuries and deaths. The program distributed and tested smoke alarms in residential dwellings and offered educational materials on fire prevention and safety. Evaluation provided sound data on program processes and outcomes,…

  4. 10 CFR 26.419 - Suitability and fitness evaluations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Suitability and fitness evaluations. 26.419 Section 26.419 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS FFD Program for Construction § 26.419 Suitability and fitness evaluations. Licensees and other entities who implement FFD programs under this...

  5. 10 CFR 26.419 - Suitability and fitness evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Suitability and fitness evaluations. 26.419 Section 26.419 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS FFD Program for Construction § 26.419 Suitability and fitness evaluations. Licensees and other entities who implement FFD programs under this...

  6. Student Assistance Program in Pennsylvania. Evaluation Final Report.

    ERIC Educational Resources Information Center

    Fertman, Carl I.; Schlesinger, Jo; Fichter, Cele; Tarasevich, Susan; Zhang, Xiaoyan; Wald, Holly

    This report contains the second year evaluation of the Student Assistance Program (SAP) in Pennsylvania. These school-based and school-linked programs address barriers to learning for youth at risk for social and emotional problems, drug and alcohol use and abuse, and depression. Second year evaluation focused on identifying essential components…

  7. Evaluating Teacher Preparation Using Graduates' Observational Ratings

    ERIC Educational Resources Information Center

    Ronfeldt, Matthew; Campbell, Shanyce L.

    2016-01-01

    Despite growing calls for more accountability of teacher education programs (TEPs), there is little consensus about how to evaluate them. This study investigates the potential for using observational ratings of program completers to evaluate TEPs. Drawing on statewide data on almost 9,500 program completers, representing 44 providers (183…

  8. Cost Analysis for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Haller, Emil J.

    Concerned with the problem of determining program costs as part of the evaluation process, this article helps the practitioner arrive at useful conceptions of the term "cost" and procedures for assessing the costs of an educational program. Its purpose is to help design costing procedures for evaluation situations that are commonly encountered. An…

  9. 77 FR 30045 - 30-Day Notice of Proposed Information Collection: English Language Evaluation Surveys

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-21

    ...] 30-Day Notice of Proposed Information Collection: English Language Evaluation Surveys ACTION: Notice... clearance will allow ECA/P/V, as part of the English Language Evaluation, to conduct surveys of participants in the ETA Program, E-Teacher Scholarship program, and the English Language Specialist Program...

  10. Prevalence of Evaluation Method Courses in Education Leader Doctoral Preparation

    ERIC Educational Resources Information Center

    Shepperson, Tara L.

    2013-01-01

    This exploratory study investigated the prevalence of single evaluation methods courses in doctoral education leadership programs. Analysis of websites of 132 leading U.S. university programs found 62 evaluation methods courses in 54 programs. Content analysis of 49 course catalog descriptions resulted in five categories: survey, planning and…

  11. Evaluating Extension Based Leadership Development Programs in the Southern United States

    ERIC Educational Resources Information Center

    Lamm, Kevan W.; Carter, Hannah S.; Lamm, Alexa J.

    2016-01-01

    The ability to evaluate and accurately articulate the outcomes associated with leadership development programs is critical to their continued financial and administrative support. With calls for outcome-based accountability, the need for rigorous evaluation is particularly relevant for those programs administered through the Cooperative Extension…

  12. A Model for Evaluating Development Programs. Miscellaneous Report.

    ERIC Educational Resources Information Center

    Burton, John E., Jr.; Rogers, David L.

Taking the position that the Classical Experimental Evaluation (CEE) Model does not do justice to the process of acquiring information necessary for decision making regarding planning, programming, implementing, and recycling program activities, this paper presents the Inductive, System-Process (ISP) evaluation model as an alternative to be used in…

  13. Theory-Based Stakeholder Evaluation

    ERIC Educational Resources Information Center

    Hansen, Morten Balle; Vedung, Evert

    2010-01-01

    This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…

  14. Student Services Program Planning and Evaluation: Responsibility, Procedures, Instrument, and Guidelines.

    ERIC Educational Resources Information Center

    Repp, Charles A.; Brach, Ronald C.

    The manual provides a rationale, procedural guidelines, time-schedules, instruments, and supporting documentation for student services program evaluation at SUNY Agricultural and Technical College, Delhi. Six procedural guidelines include: (1) all programs and services should be evaluated at least once every four years, with provision for annual…

  15. A Modified Importance-Performance Framework for Evaluating Recreation-Based Experiential Learning Programs

    ERIC Educational Resources Information Center

    Pitas, Nicholas; Murray, Alison; Olsen, Max; Graefe, Alan

    2017-01-01

    This article describes a modified importance-performance framework for use in evaluation of recreation-based experiential learning programs. Importance-performance analysis (IPA) provides an effective and readily applicable means of evaluating many programs, but the near universal satisfaction associated with recreation inhibits the use of IPA in…

  16. A Program Evaluation of a Leadership Academy for School Principals

    ERIC Educational Resources Information Center

    Wagner, Kristi E.

    2014-01-01

    This program evaluation focused on mid-range outcomes of a leadership academy for school principals. The mixed-methods evaluation included interviews, principals' instructional observation database, and teacher surveys. The Principal Academy program was designed to build principals' knowledge of high-yield instructional strategies (Hattie, 2009),…

  17. 7 CFR 1485.20 - Financial management, reports, evaluations and appeals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... (ii) A brand promotion evaluation is a review of the U.S. and foreign commercial entities' export... of the program to determine the effectiveness of the participant's strategy in meeting specified... program evaluation's findings and recommendations and proposed changes in program strategy or design as a...

  18. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  19. Framework for a National Testing and Evaluation Program ...

    EPA Pesticide Factsheets

    Abstract: The National STEPP Program seeks to improve water quality by accelerating the effective implementation and adoption of innovative stormwater management technologies. It will attempt to accomplish this by establishing practices through highly reliable and cost-effective stormwater control measure (SCM) testing, evaluation, and verification services. The program will aim to remove barriers to innovation, minimize duplicative performance evaluation needs, increase confidence that regulatory requirements are met by creating consistency among testing and evaluation protocols, and establish equity between public domain and proprietary SCM evaluation approaches. The Environmental Technology Verification Program, established by the U.S. Environmental Protection Agency (EPA) 18 years ago, was the only national program of its kind in the stormwater sector, but is now defunct, leaving a national leadership void. The STEPP initiative was triggered in part by regulatory demands in the government and private sectors to fill this vacuum. A concerted focus and study of this matter led to the release of a Water Environment Federation (WEF) white paper entitled “Investigation into the Feasibility of a National Testing and Evaluation Program for Stormwater Products and Practices” in February 2014. During this second phase of the STEPP initiative, and with EPA support, five analogous technology evaluation programs related to both stormwater and non-stormwater were an…

  20. Guidelines for Creating, Implementing, and Evaluating Mind-Body Programs in a Military Healthcare Setting.

    PubMed

    Smith, Katherine; Firth, Kimberly; Smeeding, Sandra; Wolever, Ruth; Kaufman, Joanna; Delgado, Roxana; Bellanti, Dawn; Xenakis, Lea

    2016-01-01

    Research suggests that the development of mind-body skills can improve individual and family resilience, particularly related to the stresses of illness, trauma, and caregiving. To operationalize the research evidence that mind-body skills help with health and recovery, Samueli Institute, in partnership with experts in mind-body programming, created a set of guidelines for developing and evaluating mind-body programs for service members, veterans, and their families. The Guidelines for Creating, Implementing, and Evaluating Mind-Body Programs in a Military Healthcare Setting outline key strategies and issues to consider when developing, implementing, and evaluating a mind-body focused family empowerment approach in a military healthcare setting. Although these guidelines were developed specifically for a military setting, most of the same principles can be applied to the development of programs in the civilian setting as well. The guidelines particularly address issues unique to mind-body programs, such as choosing evidence-based modalities, licensure and credentialing, safety and contraindications, and choosing evaluation measures that capture the holistic nature of these types of programs. The guidelines are practical, practice-based guidelines, developed by experts in the fields of program development and evaluation, mind-body therapies, and patient- and family-centered care, as well as experts in military and veterans' health systems. They provide a flexible framework to create mind-body family empowerment programs and describe important issues that program developers and evaluators are encouraged to address to ensure the development of the most impactful, successful, evidence-supported programs possible. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Evaluation Capacity in K-12 School Counseling Programs.

    ERIC Educational Resources Information Center

    Trevisan, Michael S.

    2002-01-01

    Presents results of a literature review, organized by the Milstein and Cotton evaluation capacity framework (B. Milstein and D. Cotton, 2000), detailing and analyzing the contextual factors and system features that impact the evaluation capacity of school counseling programs. Discusses implications for evaluation in other settings. (SLD)

  2. A review of elementary school-based substance use prevention programs: identifying program attributes.

    PubMed

    Hopfer, S; Davis, D; Kam, J A; Shin, Y; Elek, E; Hecht, M L

    2010-01-01

    This article takes a systematic approach to reviewing substance use prevention programs introduced in elementary school (K-6th grade). Previous studies evaluating such programs among elementary school students showed mixed effects on subsequent substance use and related psychosocial factors. Thirty published evaluation studies of 24 elementary school-based substance use prevention programs were reviewed. The study selection criteria included searching for program evaluations from 1980 to 2008. Among 27 evaluation studies that examined program effects on substance use, 56% (n = 15) found significant decreases. In addition, programs most often demonstrated effects on increasing negative substance use attitudes, increasing knowledge, decreasing perceptions of prevalence rates (i.e., descriptive norms), and improving resistance skills. These results have implications for the appropriateness and value of introducing substance use prevention programs to youth in elementary school.

  3. Building a patient-centered and interprofessional training program with patients, students and care professionals: study protocol of a participatory design and evaluation study.

    PubMed

    Vijn, Thomas W; Wollersheim, Hub; Faber, Marjan J; Fluit, Cornelia R M G; Kremer, Jan A M

    2018-05-30

    A common approach to enhance patient-centered care is training care professionals. Additional training of patients has been shown to significantly improve patient-centeredness of care. In this participatory design and evaluation study, patient education and medical education will be combined by co-creating a patient-centered and interprofessional training program, wherein patients, students and care professionals learn together to improve patient-centeredness of care. In the design phase, scientific literature regarding interventions and effects of student-run patient education will be synthesized in a scoping review. In addition, focus group studies will be performed on the preferences of patients, students, care professionals and education professionals regarding the structure and content of the training program. Subsequently, an intervention plan of the training program will be constructed by combining these building blocks. In the evaluation phase, patients with a chronic disease (that is, rheumatoid arthritis, diabetes, and hypertension) and patients with an oncologic condition (that is, colonic cancer and breast cancer) will learn together with medical students, nursing students and care professionals in training program cycles of three months. Process and effect evaluation will be performed using the plan-do-study-act (PDSA) method to evaluate and optimize the training program in care practice and medical education. A modified control design will be used in PDSA-cycles to ensure that students who act as control will also benefit from participating in the program. Our participatory design and evaluation study provides an innovative approach in designing and evaluating an intervention by involving participants in all stages of the design and evaluation process. The approach is expected to enhance the effectiveness of the training program by assessing and meeting participants' needs and preferences.
Moreover, by using fast PDSA cycles and a modified control design in evaluating the training program, the training program is expected to be efficiently and rapidly implemented into and adjusted to care practice and medical education.

  4. Monitoring and evaluation of mental health and psychosocial support programs in humanitarian settings: a scoping review of terminology and focus.

    PubMed

    Augustinavicius, Jura L; Greene, M Claire; Lakin, Daniel P; Tol, Wietse A

    2018-01-01

    Monitoring and evaluation of mental health and psychosocial support (MHPSS) programs is critical to facilitating learning and providing accountability to stakeholders. As part of an inter-agency effort to develop recommendations on MHPSS monitoring and evaluation, this scoping review aimed to identify the terminology and focus of monitoring and evaluation frameworks in this field. We collected program documents (logical frameworks (logframes) and theories of change) from members of the Inter-Agency Standing Committee Reference Group on MHPSS, and systematically searched the peer-reviewed literature across five databases. We included program documents and academic articles that reported on monitoring and evaluation of MHPSS in low- and middle-income countries describing original data. Inclusion and data extraction were conducted in parallel by independent reviewers. Thematic analysis was used to identify common language in the description of practices and the focus of each monitoring and evaluation framework. Logframe outcomes were mapped to MHPSS activity categories. We identified 38 program documents and 89 peer-reviewed articles, describing monitoring and evaluation of a wide range of MHPSS activities. In both program documents and peer-reviewed literature there was a lack of specificity and overlap in language used for goals and outcomes. Well-validated, reliable instruments were reported in the academic literature, but rarely used in monitoring and evaluation practices. We identified six themes in the terminology used to describe goals and outcomes. Logframe outcomes were more commonly mapped to generic program implementation activities (e.g. "capacity building") and those related to family and community support, while outcomes from academic articles were most frequently mapped to specialized psychological treatments. 
Inconsistencies between the language used in research and practice and discrepancies in measurement have broader implications for monitoring and evaluation in MHPSS programs in humanitarian settings within low- and middle-income countries. This scoping review of the terminology commonly used to describe monitoring and evaluation practices and their focus within MHPSS programming highlights areas of importance for the development of a more standardized approach to monitoring and evaluation.

  5. A Lesson in Carefully Managing Resources: A Case Study from an Evaluation of a Music Education Program

    ERIC Educational Resources Information Center

    Hobson, Kristin A.; Burkhardt, Jason T.

    2012-01-01

    Background: A music education program with a goal of enhancing cognitive development of preschool-aged children enrolled in local preschools is evaluated by The Evaluation Center at Western Michigan University. The budget for the evaluation was small, and therefore presented several challenges to the evaluation team. Purpose: Through a case study…

  6. Evaluating Human Resources, Programs, and Organizations. Professional Practices in Adult Education and Human Resource Development Series.

    ERIC Educational Resources Information Center

    Burnham, Byron R.

    This book is intended for the practitioner of evaluation or for the student about to do his or her first formal evaluation. Chapter 1 sets the role of evaluating within the context of an organization and discusses a critical role of evaluation: changing people, programs, and organizations. Chapter 2 discusses personnel appraisals from an…

  7. Report of the Evaluation of the Race/Human Relations Program. Student and Staff Program and Long Range Goals. Baseline Year 1982-83. Evaluation Services Department Report No. 348.

    ERIC Educational Resources Information Center

    Tomblin, Elizabeth A.; And Others

    In response to a 1982 Superior Court order, a centrally developed, sequential program for improving race/human relations in the San Diego City Schools was developed and field tested or implemented during the 1982-83 school year. This systematic evaluation reports on the student program, "Conflict"; the staff program; and baseline data…

  8. 7 CFR 1944.427 - Grantee self-evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) PROGRAM REGULATIONS (CONTINUED) HOUSING Self-Help Technical Assistance Grants § 1944.427 Grantee self-evaluation. Annually or more often, the board of directors will evaluate their own self-help program. Exhibit...

  9. Increasing the use of evaluation data collection in an EPO program

    NASA Astrophysics Data System (ADS)

    Taber, J. J.; Bohon, W.; Bravo, T. K.; Dordevic, M.; Dorr, P. M.; Hubenthal, M.; Johnson, J. A.; Sumy, D.; Welti, R.; Davis, H. B.

    2017-12-01

    Over the past two years, the Incorporated Research Institutions for Seismology Education and Public Outreach (EPO) program has sought to increase the evaluation rigor of its programs and products. Specifically we sought to make evaluation an integral part of our work; enabling staff to demonstrate why we do the activities we do, enhancing the impact of our products/programs, and empowering staff to make evidence-based claims. The Collaborative Impact Analysis Method (Davis and Scalice, 2015) was selected as it allowed us to combine staff's knowledge of programs, audiences and content with the expertise of an outside evaluation expert, through consultations and a qualitative rubric assessing the initial state of each product/program's evaluation. Staff then developed action plans to make improvements to the programs over time. A key part of the initial action plans has been the collection and analysis of new evaluation data. The most frequently used tools were surveys as they were relatively straightforward to implement and analyze, and could be adapted for different situations. Examples include: brand awareness, value of booth interactions, assessing community interest in a data app, and user surveys of social media and specific web pages. Other evaluation activities included beta testing of new software, and interviews with students and faculty involved in summer field experiences. The surveys have allowed us to document increased impact in some areas, to improve the usability of products and activities, and to provide baseline impact data. The direct involvement of staff in the process has helped staff appreciate the value of evaluation, but there are also challenges to this approach. Since many of the surveys are developed and conducted by EPO staff, rather than being primarily handled by the evaluator, the process takes considerably more staff time to implement.
We are still determining how to best manage and present the data and analysis; our current approach is to post evaluation reports on our EPO website so that other groups may be able to benefit from our evaluation results. Davis, H. & Scalice, D. (2015). Evaluate the Impact of your Education and Outreach Program Using the Quantitative Collaborative Impact Analysis Method. Abstract ED53D-0871, 2015 Fall Meeting, AGU.

  10. Milestone-specific, Observed data points for evaluating levels of performance (MODEL) assessment strategy for anesthesiology residency programs.

    PubMed

    Nagy, Christopher J; Fitzgerald, Brian M; Kraus, Gregory P

    2014-01-01

    Anesthesiology residency programs will be expected to have Milestones-based evaluation systems in place by July 2014 as part of the Next Accreditation System. The San Antonio Uniformed Services Health Education Consortium (SAUSHEC) anesthesiology residency program developed and implemented a Milestones-based feedback and evaluation system a year ahead of schedule. It has been named the Milestone-specific, Observed Data points for Evaluating Levels of performance (MODEL) assessment strategy. The "MODEL Menu" and the "MODEL Blueprint" are tools that other anesthesiology residency programs can use in developing their own Milestones-based feedback and evaluation systems prior to ACGME-required implementation. Data from our early experience with the streamlined MODEL blueprint assessment strategy showed substantially improved faculty compliance with reporting requirements. The MODEL assessment strategy provides programs with a workable assessment method for residents, and important Milestones data points to programs for ACGME reporting.

  11. A participatory evaluation model for Healthier Communities: developing indicators for New Mexico.

    PubMed Central

    Wallerstein, N

    2000-01-01

    Participatory evaluation models that invite community coalitions to take an active role in developing evaluations of their programs are a natural fit with Healthy Communities initiatives. The author describes the development of a participatory evaluation model for New Mexico's Healthier Communities program. She describes evaluation principles, research questions, and baseline findings. The evaluation model shows the links between process, community-level system impacts, and population health changes. PMID:10968754

  12. A Comparison of Participant and Practitioner Beliefs About Evaluation

    PubMed Central

    Whitehall, Anna K.; Hill, Laura G.; Koehler, Christian R.

    2014-01-01

    The move to build capacity for internal evaluation is a common organizational theme in social service delivery, and in many settings, the evaluator is also the practitioner who delivers the service. The goal of the present study was to extend our limited knowledge of practitioner evaluation. Specifically, the authors examined practitioner concerns about administering pretest and posttest evaluations within the context of a multisite 7-week family strengthening program and compared those concerns with self-reported attitudes of the parents who completed evaluations. The authors found that program participants (n = 105) were significantly less likely to find the evaluation process intrusive, and more likely to hold positive beliefs about the evaluation process, than practitioners (n = 140) expected. Results of the study may address a potential barrier to effective practitioner evaluation—the belief that having to administer evaluations interferes with establishing a good relationship with program participants. PMID:25328379

  13. 42 CFR 485.729 - Condition of participation: Program evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... participation: Program evaluation. The organization has procedures that provide for a systematic evaluation of... with others. (a) Standard: Clinical-record review. A sample of active and closed clinical records is...

  14. 42 CFR 485.729 - Condition of participation: Program evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... participation: Program evaluation. The organization has procedures that provide for a systematic evaluation of... with others. (a) Standard: Clinical-record review. A sample of active and closed clinical records is...

  15. Toward Useful Program Evaluation in College Foreign Language Education

    ERIC Educational Resources Information Center

    Norris, John M., Ed.; Davis, John McE., Ed.; Sinicrope, Castle, Ed.; Watanabe, Yukiko, Ed.

    2009-01-01

    This volume reports on innovative, useful evaluation work conducted within U.S. college foreign language programs. An introductory chapter scopes out the territory, reporting key findings from research into the concerns, impetuses, and uses for evaluation that FL educators identify. Seven chapters then highlight examples of evaluations conducted…

  16. Outcome Evaluation: Student Development Program, Special Studies Division, Cleveland State University.

    ERIC Educational Resources Information Center

    Pasch, Marvin

    Techniques and procedures used to evaluate the outcomes of the student development program, and to use the evaluation results, are presented. Specific evaluation questions are posed that address overall outcomes, not individual student outcomes, and quantitative measures are suggested to accompany the questions. The measures include statistical…

  17. Evaluating Federal Education Programs.

    ERIC Educational Resources Information Center

    Baker, Eva L., Ed.

    A series of papers was developed by the authors as part of their deliberations as members of the National Research Council's Committee on Program Evaluation in Education. The papers provide a broad range of present evaluative thinking. The conflict between preferences in evaluation methodology comes through in these papers. The selections include:…

  18. Evaluation Options for Family Resource Centers.

    ERIC Educational Resources Information Center

    Horsch, Karen, Ed.; Weiss, Heather B., Ed.

    Family resource centers (FRC) are emerging as a promising program approach to solving urgent social problems. Evaluation plays an important role in learning how these programs work, what their impact is, and whether they should be expanded. However, FRCs pose unique challenges to evaluation. This report considers the challenges to evaluating FRCs,…

  19. Evaluating Community College Personnel: A Research Report.

    ERIC Educational Resources Information Center

    Deegan, William L.; And Others

    A statewide survey was conducted of local evaluation policies, procedures, and problems of implementing evaluation programs on the campuses of California community colleges. The following areas were studied: (1) the process of development of the evaluation program; (2) procedures utilized in the first year of implementing Senate Bill 696…

  20. Mapping New Approaches in Program Evaluation: A Cross-Cultural Perspective.

    ERIC Educational Resources Information Center

    Gorostiaga, Jorge M.; Paulston, Rolland G.

    This paper examines new approaches to program evaluation and explores their possible utility in Latin American educational settings. Part 1 briefly discusses why new ideas for evaluating educational studies are needed. Part 2 examines seven new evaluative approaches as follows: (1) "Concept Mapping," a type of structural…

  1. Inside the Black Box--An Implementation Evaluation Case Study

    ERIC Educational Resources Information Center

    Rector, Patricia; Bakacs, Michele; Rowe, Amy; Barbour, Bruce

    2016-01-01

    The case study presented in this article is an example of an implementation evaluation. The evaluation investigated significant components of the implementation of a long-term environmental educational program. Direct observation, evaluation-specific survey data, and historical data were used to determine program integrity as identified by…

  2. 75 FR 9608 - National Protection and Programs Directorate; Technical Assistance Request and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-03

    ... Directorate; Technical Assistance Request and Evaluation AGENCY: National Protection and Programs Directorate... assistance requests from each State and territory. OEC will use the Technical Assistance Evaluation Form to... electronically. Evaluation forms may be submitted electronically or in paper form. The Office of Management and...

  3. 45 CFR 2519.800 - What are the evaluation requirements for Higher Education programs?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE HIGHER EDUCATION INNOVATIVE PROGRAMS FOR COMMUNITY SERVICE Evaluation Requirements § 2519.800 What are the evaluation requirements for Higher Education... 45 Public Welfare 4 2010-10-01 2010-10-01 false What are the evaluation requirements for Higher...

  4. Evaluation: An Imperative for Public Education and Training Programs. Professorial Inaugural Lecture Series.

    ERIC Educational Resources Information Center

    McCaslin, N. L.

    Evaluation is important to public education and training programs if the United States is to be competitive in a global economy. Six major factors have had an impact on evaluation efforts: decreased financial resources, increased public dissatisfaction, changed management approaches, enhanced evaluation methods and procedures, redesigned…

  5. Designing an evaluation for a multiple-strategy community intervention: the North Coast Stay on Your Feet program.

    PubMed

    van Beurden, E; Kempton, A; Sladden, T; Garner, E

    1998-02-01

    Evaluation of the North Coast Stay on Your Feet falls prevention program is described as a case study of a comprehensive evaluation design for multi-strategic community interventions. Qualitative and quantitative methods were used to evaluate the program at formative, process and outcome levels. Formative evaluation used literature review, focus groups, mail-out and telephone survey methods to gather evidence from publications, older people, health workers, local business, media and government bodies. It included an analysis of demographic and hospital databases and identified incidence, causal pathways, knowledge, attitudes, behaviour, consequences and effectiveness of potential strategies. Process evaluation employed auditing, monitoring and telephone surveys to maintain an inventory of intervention activities and to track the reach of the program. Outcome evaluation involved a longitudinal study of intervention and control cohorts, surveyed before, during and after the intervention by telephone to monitor changes in knowledge, attitudes, risk and falls incidence. The survey instrument was designed for both formative and outcome evaluation, and analysis reflected the research design by incorporating repeat measures and adjusting for bias and confounding. Outcome validity was cross-checked via hospital admission rates. A novel, integrated framework for presenting inputs, activities and outcomes from all stages of the program is described. This framework facilitated feedback to stakeholders and enabled subsequent rapid adjustment of the intervention. Rigorous evaluation combined with clear presentation of findings helped to engender intersectoral support and obtain funding grants for extended implementation and evaluation. It also helped Stay on Your Feet to become a model for other falls prevention programs within Australia and internationally.

  6. Forest Fire Prevention Programs and Their Evaluation In U.S. Forest Service Region 8

    Treesearch

    G. Richard Wetherill

    1982-01-01

    A telephone survey of all national forest ranger districts in Region 8 obtained data describing the status of forest fire prevention program evaluation. Out of the 396 programs being conducted on the 105 districts in the South, only one program had undergone any sort of systematic evaluation. Survey data indicate that ranger district prevention personnel are aware of...

  7. A Search for and Evaluation of Computer-Assisted Instructional Programs in Music for Use at Liberty Baptist College: Societal Factors Affecting Education.

    ERIC Educational Resources Information Center

    Reitenour, Steve

    The current use of high technology in music instruction was surveyed, and music-related computer programs appropriate for Liberty Baptist College were recommended. Information on 59 programs was gathered and submitted to a panel of three music instructors, who selected 10 programs for further evaluation. A Courseware Evaluation Sheet was used to…

  8. Evaluations of reproductive health programs in humanitarian settings: a systematic review

    PubMed Central

    2015-01-01

    Provision of reproductive health (RH) services is a minimum standard of health care in humanitarian settings; however access to these services is often limited. This systematic review, one component of a global evaluation of RH in humanitarian settings, sought to explore the evidence regarding RH services provided in humanitarian settings and to determine if programs are being evaluated. In addition, the review explored which RH services receive more attention based on program evaluations and descriptive data. Peer-reviewed papers published between 2004 and 2013 were identified via the Ovid MEDLINE database, followed by a PubMed search. Papers on quantitative evaluations of RH programs, including experimental and non-experimental designs that reported outcome data, implemented in conflict and natural disaster settings, were included. Of 5,669 papers identified in the initial search, 36 papers describing 30 programs met inclusion criteria. Twenty-five papers described programs in sub-Saharan Africa, six in Asia, two in Haiti and three reported data from multiple countries. Some RH technical areas were better represented than others: seven papers reported on maternal and newborn health (including two that also covered family planning), six on family planning, three on sexual violence, 20 on HIV and other sexually transmitted infections and two on general RH topics. In comparison to the program evaluation papers identified, three times as many papers were found that reported RH descriptive or prevalence data in humanitarian settings. While data demonstrating the magnitude of the problem are crucial and were previously lacking, the need for RH services and for evaluations to measure their effectiveness is clear. Program evaluation and implementation science should be incorporated into more programs to determine the best ways to serve the RH needs of people affected by conflict or natural disaster. 
Standard program design should include rigorous program evaluation, and the results must be shared. The papers demonstrated both that RH programs can be implemented in these challenging settings, and that women and men will use RH services when they are of reasonable quality. PMID:25685183

  9. Are You Making an Impact? Evaluating the Population Health Impact of Community Benefit Programs.

    PubMed

    Rains, Catherine M; Todd, Greta; Kozma, Nicole; Goodman, Melody S

    The Patient Protection and Affordable Care Act includes a change to the IRS 990 Schedule H, requiring nonprofit hospitals to submit a community health needs assessment every 3 years. Such health care entities are challenged to evaluate the effectiveness of community benefit programs addressing the health needs identified. In an effort to determine the population health impact of community benefit programs in 1 hospital outreach department, researchers and staff conducted an impact evaluation to develop priority areas and overarching goals along with program- and department-level objectives. The longitudinal impact evaluation study design consists of retrospective and prospective secondary data analyses. As an urban pediatric hospital, St Louis Children's Hospital provides an array of community benefit programs to the surrounding community. Hospital staff and researchers came together to form an evaluation team. Program evaluation data and administrative data for analysis were provided by hospital staff. Impact scores were calculated by scoring objectives as met or unmet and averaging across goals; these scores measure how closely programs meet the overarching departmental mission and goals. Over the 4-year period, there is an increasing trend in program-specific impact scores across all programs except one, Healthy Kids Express Asthma, which had a slight decrease in year 4 only. Current work in measuring and assessing the population health impact of community benefit programs is mostly focused on quantifying dollars invested into community benefit work rather than measuring the quality and impact of services. This article provides a methodology for measuring population health impact of community benefit programs that can be used to evaluate the effort of hospitals in providing community benefit.
This is particularly relevant in our changing health care climate, as hospitals are being asked to justify community benefit and make meaningful contributions to population health. Although the Act requires evaluation of program effectiveness, it does not require any quantification of the impact of community benefit programs. The IRS Schedule H 990 policies could be strengthened by requiring an impact evaluation such as the one outlined in this article. Impact evaluations can be utilized to demonstrate the cumulative community benefit of programs and assess their population health impact.
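The met/unmet scoring described in this record can be sketched as a small calculation. This is a hypothetical illustration only; the goal names, objective counts, and numbers below are invented, not taken from the hospital's data:

```python
# Hypothetical sketch of the impact-score calculation described above:
# each objective is scored 1 (met) or 0 (unmet), objective scores are
# averaged within each goal, and goal-level scores are averaged into a
# program-level impact score. All names and data are invented.

def impact_score(goals):
    """goals maps goal name -> list of booleans (was each objective met?)."""
    goal_scores = [sum(objectives) / len(objectives) for objectives in goals.values()]
    return sum(goal_scores) / len(goal_scores)

program_year4 = {
    "increase_asthma_screening": [True, True, False],  # 2 of 3 objectives met
    "community_outreach":        [True, True],         # 2 of 2 objectives met
}
print(round(impact_score(program_year4), 2))  # → 0.83
```

Averaging within goals before averaging across them keeps a goal with many objectives from dominating the program-level score.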

  10. Evaluating efforts to diversify the biomedical workforce: the role and function of the Coordination and Evaluation Center of the Diversity Program Consortium.

    PubMed

    McCreath, Heather E; Norris, Keith C; Calderón, Nancy E; Purnell, Dawn L; Maccalla, Nicole M G; Seeman, Teresa E

    2017-01-01

    The National Institutes of Health (NIH)-funded Diversity Program Consortium (DPC) includes a Coordination and Evaluation Center (CEC) to conduct a longitudinal evaluation of the two signature, national NIH initiatives - the Building Infrastructure Leading to Diversity (BUILD) and the National Research Mentoring Network (NRMN) programs - designed to promote diversity in the NIH-funded biomedical, behavioral, clinical, and social sciences research workforce. Evaluation is central to understanding the impact of the consortium activities. This article reviews the role and function of the CEC and the collaborative processes and achievements critical to establishing empirical evidence regarding the efficacy of federally-funded, quasi-experimental interventions across multiple sites. The integrated DPC evaluation is particularly significant because it is a collaboratively developed Consortium Wide Evaluation Plan and the first hypothesis-driven, large-scale systemic national longitudinal evaluation of training programs in the history of NIH/National Institute of General Medical Sciences. To guide the longitudinal evaluation, the CEC-led literature review defined key indicators at critical training and career transition points - or Hallmarks of Success. The multidimensional, comprehensive evaluation of the impact of the DPC framed by these Hallmarks is described. This evaluation uses both established and newly developed common measures across sites, and rigorous quasi-experimental designs within novel multi-methods (qualitative and quantitative). The CEC also promotes shared learning among Consortium partners through working groups and provides technical assistance to support high-quality internal process and outcome evaluation of each program. Finally, the CEC is responsible for developing high-impact dissemination channels for best practices to inform peer institutions, NIH, and other key national and international stakeholders.
A strong longitudinal evaluation across programs allows the summative assessment of outcomes, provides an understanding of factors common to interventions that do and do not lead to success, and elucidates the processes developed for data collection and management. This will provide a framework for the assessment of other training programs and have national implications in transforming biomedical research training.

  11. ITERATIVE EVALUATION IN A MOBILE COUNSELING AND TESTING PROGRAM TO REACH PEOPLE OF COLOR AT RISK FOR HIV—NEW STRATEGIES IMPROVE PROGRAM ACCEPTABILITY, EFFECTIVENESS, AND EVALUATION CAPABILITIES

    PubMed Central

    Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles

    2016-01-01

    This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program’s results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention’s HIV testing recommendations. PMID:21689041

  12. ["Active in rehab": development and formative evaluation of a patient education program to increase health literacy of patients with chronic illness].

    PubMed

    Ullrich, A; Schöpf, A C; Nagl, M; Farin, E

    2015-04-01

    The aim of the article is to describe the development, the process of manualisation and results from the formative evaluation of a patient-oriented patient education program to increase health literacy of patients with chronic illness ("Active in rehab"). Themes of the patient education program were extracted from 17 focus groups. An expert meeting was conducted to validate the content of the patient education program. The formative evaluation was based on a questionnaire (Nmax = 295 patients and Nmax = 39 trainers). The patient education program includes 4 modules with 3 themes (bio-psycho-social model, rehabilitation goals, communication competencies). The evaluation of the modules was good to very good. An analysis of free texts and a follow-up survey among trainers helped us to infer important improvements to the patient education program. Results from the formative evaluation show that the patient education program meets patients' and trainers' needs and is accepted. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Using EPAS[TM] to Evaluate School-Based Intervention Programs: GEAR UP. Case Study

    ERIC Educational Resources Information Center

    ACT, Inc., 2007

    2007-01-01

    This brief examines how the ACT's EPAS[TM] (Educational Planning and Assessment System) can be used to evaluate school-based intervention programs. Specific evaluation considered is that of the federal government's Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), an initiative designed to increase the college awareness…

  14. MiBLSi Program Evaluation of Participatory Elementary Schools from 2003-2009

    ERIC Educational Resources Information Center

    Gibbs, Marvin

    2012-01-01

    This dissertation details the use of the Program Evaluations Metaevaluation Checklist (PEMC; Stufflebeam, 1999), which is designed for performing final, summative metaevaluations and is organized according to the Joint Committee Program Evaluation Standards. For each of the 30 standards the checklist includes 6 checkpoints drawn from the substance of the…

  15. Evaluation of Supplemental Nutrition Assistance Program Education: Application of Behavioral Theory and Survey Validation

    ERIC Educational Resources Information Center

    Wyker, Brett A.; Jordan, Patricia; Quigley, Danielle L.

    2012-01-01

    Objective: Application of the Transtheoretical Model (TTM) to Supplemental Nutrition Assistance Program Education (SNAP-Ed) evaluation and development and validation of an evaluation tool used to measure TTM constructs is described. Methods: Surveys were collected from parents of children receiving food at Summer Food Service Program sites prior…

  16. How To: Evaluate Education Programs

    ERIC Educational Resources Information Center

    Fink, Arlene; Kosecoff, Jacqueline

    This book presents a compilation of 28 issues of the newsletter, "How To Evaluate Education Programs" from the first one published in September 1977 through the issue of December 1979 on the topic of evaluating educational programs. The subject is covered in the following chapters: (1) How to Choose a Test; (2) How to Rate and Compare…

  17. A Service-Based Program Evaluation Platform for Enhancing Student Engagement in Assignments

    ERIC Educational Resources Information Center

    Wu, Ye-Chi; Ma, Lee Wei; Jiau, Hewijin Christine

    2013-01-01

    Programming assignments are commonly used in computer science education to encourage students to practice target concepts and evaluate their learning status. Ensuring students are engaged in such assignments is critical in attracting and retaining students. To this end, WebHat, a service-based program evaluation platform, is introduced in this…

  18. Federally Mandated Evaluation of Vocational Programs: One College's Response.

    ERIC Educational Resources Information Center

    Stevenson, Mike; Walleri, R. Dan

    The process and findings of the evaluation of seven of the 52 vocational preparatory programs at Mount Hood Community College are described with a focus on demonstrating the feasibility, utility, and problems associated with making program evaluations meet both internal needs and federal requirements. Summaries of data collected are provided in…

  19. 77 FR 3008 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-20

    ... for OMB Review; Comment Request; Evaluation of the Reintegration of Ex- Offenders--Adult Program... Reintegration of Ex- Offenders--Adult Program,'' to the Office of Management and Budget (OMB) for review and... participants and staff in the random assignment evaluation of the Reintegration of Ex-Offenders-Adult Program...

  20. Evaluating Inclusive Educational Practices for Students with Severe Disabilities Using the Program Quality Measurement Tool

    ERIC Educational Resources Information Center

    Cushing, Lisa S.; Carter, Erik W.; Clark, Nitasha; Wallis, Terry; Kennedy, Craig H.

    2009-01-01

    Recent legislative and school reform efforts require schools to evaluate and improve educational practices for students with severe disabilities. The authors developed the "Program Quality Measurement Tool" (PQMT) to enable administrators and educators to evaluate the educational programming provided to students with severe disabilities against…

  1. 76 FR 74076 - Notice of Random Assignment Study To Evaluate the YouthBuild Program; Final Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-30

    ... Evaluate the YouthBuild Program; Final Notice AGENCY: Employment and Training Administration (ETA), Labor... rigorous, nationally-representative estimates of the net impacts of the YouthBuild program. The Department... study. In the sites randomly selected to participate in this evaluation, all applicants for YouthBuild...

  2. Assessing the Quality of the Business and Management Education in Higher Education

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng; Ko, Hui-Min; Wang, Mei-Tyng; Pan, Ying-Ju

    2014-01-01

    As the third-party planner and implementer of higher education institutional and program evaluations, the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) completed program evaluations for all 145 undergraduate business and management (B&M) programs in 43 universities/colleges from 2006 to 2010. In the 145 programs…

  3. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or applied to the post-hoc assessment of program activities…

  4. Leveraging Volunteers: An Experimental Evaluation of a Tutoring Program for Struggling Readers

    ERIC Educational Resources Information Center

    Jacob, Robin; Armstrong, Catherine; Bowden, A. Brooks; Pan, Yilin

    2016-01-01

    This study evaluates the impacts and costs of the Reading Partners program, which uses community volunteers to provide one-on-one tutoring to struggling readers in under-resourced elementary schools. The evaluation uses an experimental design. Students were randomly assigned within 19 different Reading Partners sites to a program or control…

  5. Evaluating Voucher Programs: The Milwaukee Parental Choice Program

    ERIC Educational Resources Information Center

    Witte, John F.

    2016-01-01

    This paper is the first summary of two studies and 10 years of evaluating the Milwaukee Parental Choice (voucher) Program (MPCP). This paper discusses school voucher evaluations in general terms and how these studies are carried out. The paper outlines the types of studies completed in "Study I" and "Study II" and the results…

  6. Delphi-Discrepancy Evaluation: A Model for the Quality Control of Federal, State, and Locally Mandated Programs.

    ERIC Educational Resources Information Center

    Sirois, Herman A.; Iwanick, Edward F.

    Legally mandated educational programs often lack specificity and guidelines for such programs are often vague and subject to considerable variability in interpretation. This situation presents perennial problems for evaluators. Few evaluation models have the flexibility for dealing with this ambiguity and variability while at the same time…

  7. Using System Dynamics as an Evaluation Tool: Experience from a Demonstration Program

    ERIC Educational Resources Information Center

    Fredericks, Kimberly A.; Deegan, Michael; Carman, Joanne G.

    2008-01-01

    Evaluators are often faced with many challenges in the design and implementation of a program's evaluation. Because programs are entangled in complex networks of structures and stakeholders, they can be challenging to understand, and they often pose issues of competing and conflicting goals. However, by using a systems mapping approach to…

  8. 76 FR 12978 - Advisory Committee on the Maternal, Infant and Early Childhood Home Visiting Program Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... Administration for Children and Families Advisory Committee on the Maternal, Infant and Early Childhood Home...: Advisory Committee on the Maternal, Infant and Early Childhood Home Visiting Program Evaluation. Date and... and Early Childhood Home Visiting Program Evaluation will meet for its first session on Wednesday...

  9. Evaluation of Mental Health Programs for the Aged.

    ERIC Educational Resources Information Center

    Kahn, Robert L.; Zarit, Steven H.

    This paper highlights what the authors believe are the important issues and directions of change in the evaluation of mental health programs. The rationale for such evaluation is twofold. First, it provides a scientifically rigorous method of determining the therapeutic efficacy of the treatment or program, and secondly, these results can exercise…

  10. 76 FR 68219 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Impact...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... for OMB Review; Comment Request; Impact Evaluation of the YouthBuild Program ACTION: Notice. SUMMARY... proposal for a new information collection titled, ``Impact Evaluation of the YouthBuild Program,'' to the... support of an impact evaluation of the YouthBuild Program. Specifically, the DOL seeks to conduct a census...

  11. Implementation of the Career Education Incentive Act. First Interim Report on the Evaluability Assessment.

    ERIC Educational Resources Information Center

    Jung, Steven M.; And Others

    Survey activities are reported which were designed to provide the foundation for a national evaluation of the effectiveness of programs assisted under the Career Education Incentive Act of 1977 (PL 95-207). The methodology described, called "program evaluability assessment," focuses on detailed analysis of program assumptions in order to…

  12. Analyzing for Individual Differences in Evaluating Compensatory Education Programs. Occasional Paper No. 27.

    ERIC Educational Resources Information Center

    Krus, Patricia H.

    Trait-Treatment Interaction (TTI), a research method for observing experimental effects of treatments on subjects of different aptitudes and learning characteristics, is suggested as an effective evaluation tool to provide evaluators and educators in compensatory education programs with information about which program is best for different kinds…

  13. Evaluation of the Tennessee Nutrition Education and Training Program. 1981 Final Report.

    ERIC Educational Resources Information Center

    Banta, Trudy W.; And Others

    The Tennessee Nutrition Education and Training (NET) program is part of a U.S. Department of Agriculture effort to develop a coordinated nutrition education program for children from preschool through grade 12. For this second-year evaluation, researchers associated with the University of Tennessee collected data for the evaluation of program…

  14. Evaluation of the Integrated Services Pilot Program from Western Australia

    ERIC Educational Resources Information Center

    Hancock, Peter; Cooper, Trudi; Bahn, Susanne

    2009-01-01

    Independent evaluation of refugee-focused programs in developed nations is increasingly a mandatory requirement of funding bodies and government agencies. This paper presents an evaluation of the Integrated Services Centre (ISC) Pilot Project that was conducted in Australia in 2007 and early 2008. The purpose of the ISC program was to provide…

  15. GLOBE in the Czech Republic: A Program Evaluation

    ERIC Educational Resources Information Center

    Cincera, Jan; Maskova, Veronika

    2011-01-01

    The article presents results of the evaluation of the GLOBE program (Global Learning and Observations to Benefit the Environment) in the Czech Republic. The evaluation explores the implementation of the program in schools and its impact on research skills. Four hundred and sixty six pupils, aged 13, from 28 different schools participated in the…

  16. Evaluation of National Institute for Learning Development and Discovery Educational Therapy Program

    ERIC Educational Resources Information Center

    Frimpong, Prince Christopher

    2014-01-01

    In Maryland, some Christian schools have enrolled students with learning disabilities (LDs) but do not have any interventional programs at the school to help them succeed academically. The purpose of this qualitative program evaluation was to evaluate the National Institute for Learning Development (NILD) and Discovery Therapy Educational Program…

  17. An Evaluation of the Nutrition Education and Training Program: Project Summary.

    ERIC Educational Resources Information Center

    St. Pierre, Robert G.

    This project summary reviews and extends the findings of prior reports made by Abt Associates, Inc. (Cambridge, Massachusetts) on the Nutrition Education and Training (NET) program, synthesizes evaluation efforts in nutrition education, and presents a set of conclusions based on the evaluations of nutrition programs. Chapter 1 presents background…

  18. An Evaluation of Newgate and Other Prisoner Education Programs.

    ERIC Educational Resources Information Center

    Kaplan (Marshall), Gans and Kahn, San Francisco, CA.

    In a study to determine what impact prison college programs have had and to provide information useful to policy decisions, an evaluation, findings, and conclusions are presented for the Newgate and four other programs. An evaluation is made of post prison careers utilizing recidivism, "making it", and "doing good" as a…

  19. Summative Evaluation of the Foreign Credential Recognition Program. Final Report

    ERIC Educational Resources Information Center

    Human Resources and Skills Development Canada, 2010

    2010-01-01

    A summative evaluation of the Foreign Credential Recognition Program (FCRP) funded by Human Resources and Skills Development Canada (HRSDC) was conducted during the spring, summer and fall of 2008. The main objective of the evaluation was to measure the relevance, impacts, and cost-effectiveness of the program. Given the timing of the evaluation…

  20. Healing by Creating: Patient Evaluations of Art-Making Program

    ERIC Educational Resources Information Center

    Heiney, Sue P.; Darr-Hope, Heidi; Meriwether, Marian P.; Adams, Swann Arp

    2017-01-01

    The benefits of using art in health care, especially with cancer patients, have been described anecdotally. However, few manuscripts include a conceptual framework to describe the evaluation of patient programs. This paper describes patients' evaluation of a healing arts program developed within a hospital for cancer patients that used art-making,…

  1. A Collaborative Approach to Family Literacy Evaluation Strategies.

    ERIC Educational Resources Information Center

    Landerholm, Elizabeth; Karr, Jo Ann; Mushi, Selina

    A collaborative approach to program evaluation combined with the use of a variety of evaluation methods using currently available technology can yield valuable information about the effectiveness of family literacy programs. Such an approach was used for McCosh Even Start, a federally-funded family literacy program located at McCosh School in an…

  2. Program Evaluation of Alternative Schools in North Carolina: A Companion Dissertation

    ERIC Educational Resources Information Center

    Jones, Michael O.

    2013-01-01

    The purpose of the evaluation was to evaluate two alternative programs in North Carolina (NC) and South Carolina (SC) public school districts to determine if they are effective in delivering constructive interventions that modify student behavior once students have left the programs and have returned to their regular learning environments. This…

  3. Learning and Leadership: Evaluation of an Australian Rural Leadership Program

    ERIC Educational Resources Information Center

    Madsen, Wendy; O'Mullan, Cathy; Keen-Dyer, Helen

    2014-01-01

    Leadership programs have been extensively promoted in rural communities in Australia. However, few have been evaluated. The results of the evaluation of a rural leadership program provided in this paper highlight the need for adult learning theories to be more overtly identified and utilised as the basis of planning and implementing leadership…

  4. Evaluation of Illinois Benedictine's Freshman Advising Program via the New Benedictine Advising Survey.

    ERIC Educational Resources Information Center

    Iaccino, James F.

    Many colleges and universities lack a formal evaluation system to assess their advising programs. Illinois Benedictine College, however, has consistently evaluated its Freshman Advising Program (FAP) on an annual basis ever since its inception. In 1986, the college decided to implement the new Benedictine Advising Survey in the exit interview…

  5. Masonry Program Standards.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Dept. of Vocational Education.

    This publication contains statewide standards for the masonry program in Georgia. The standards are divided into 12 categories: foundations (philosophy, purpose, goals, program objectives, availability, evaluation); admissions (admission requirements, provisional admission requirements, recruitment, evaluation and planning); program structure…

  6. [Evaluation of the implementation of reproductive health services in Maringá, Paraná State, Brazil].

    PubMed

    Nagahama, Elizabeth Eriko Ishida

    2009-01-01

    The aim of this study was to develop a tool to evaluate the implementation of a contraceptive program in health services and apply it to the 23 public health services in Maringá, Paraná State, Brazil. A theoretical-logical model was developed, corresponding to a 'target image' for the family planning program. Using the Delphi technique and consensus conference, six experts validated the program's target image, which included three dimensions and 60 evaluation criteria. A data collection instrument was prepared, in addition to a spreadsheet to evaluate the degree of the family planning program's implementation, constituting the Questionnaire for the Evaluation of Reproductive Health Services. The vast majority of the primary health units (91.3%) received an 'intermediate' score on implementation of the family planning program, while 8.7% were classified as 'incipient' and none were scored as 'advanced'. The 'advanced' degree of implementation in the structural dimension contrasted with the organizational and patient care dimensions. The instrument can be useful for evaluating reproductive health programs and is applicable to the health services planning and management processes.

  7. Evaluating an employee wellness program.

    PubMed

    Mukhopadhyay, Sankar; Wendel, Jeanne

    2013-12-01

    What criteria should be used to evaluate the impact of a new employee wellness program when the initial vendor contract expires? Published academic literature focuses on return-on-investment as the gold standard for wellness program evaluation, and a recent meta-analysis concludes that wellness programs can generate net savings after one or two years. In contrast, surveys indicate that fewer than half of these programs report net savings, and actuarial analysts argue that return-on-investment is an unrealistic metric for evaluating new programs. These analysts argue that evaluation of new programs should focus on contract management issues, such as the vendor's ability to: (i) recruit employees to participate and (ii) induce behavior change. We compute difference-in-difference propensity score matching estimates of the impact of a wellness program implemented by a mid-sized employer. The analysis includes one year of pre-implementation data and three years of post-implementation data. We find that the program successfully recruited a broad spectrum of employees to participate, and it successfully induced short-term behavior change, as manifested by increased preventive screening. However, the effects on health care expenditures are positive (but insignificant). If it is unrealistic to expect new programs to significantly reduce healthcare costs in a few years, then focusing on return-on-investment as the gold standard metric may lead to early termination of potentially useful wellness programs. Focusing short-term analysis of new programs on short-term measures may provide a more realistic evaluation strategy.
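The difference-in-difference estimate this record computes can be sketched in miniature. The propensity-score matching step is omitted for brevity, and all numbers below are invented, not the study's data; the core idea is that the program's effect is the change among participants minus the change among matched non-participants:

```python
# Minimal sketch of a difference-in-differences estimate: the pre-to-post
# change among participants minus the pre-to-post change among matched
# non-participants. Matching is omitted and the numbers are invented.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))

# e.g., preventive screenings per employee per year, before/after the program
effect = did_estimate([1.0, 1.2], [1.8, 2.0],   # participants: 1.1 -> 1.9
                      [1.1, 1.3], [1.2, 1.4])   # non-participants: 1.2 -> 1.3
print(round(effect, 2))  # → 0.7
```

Subtracting the control group's change nets out trends (such as seasonal screening campaigns) that would have affected participants even without the wellness program.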

  8. Teaching and evaluation of ethics and professionalism

    PubMed Central

    Pauls, Merril A.

    2012-01-01

    Abstract Objective: To document the scope of the teaching and evaluation of ethics and professionalism in Canadian family medicine postgraduate training programs, and to identify barriers to the teaching and evaluation of ethics and professionalism. Design: A survey was developed in collaboration with the Committee on Ethics of the College of Family Physicians of Canada. The data are reported descriptively and in aggregate. Setting: Canadian postgraduate family medicine training programs. Participants: Between June and December of 2008, all 17 Canadian postgraduate family medicine training programs were invited to participate. Main outcome measures: The first part of the survey explored the structure, resources, methods, scheduled hours, and barriers to teaching ethics and professionalism. The second section focused on end-of-rotation evaluations, other evaluation strategies, and barriers related to the evaluation of ethics and professionalism. Results: Eighty-eight percent of programs completed the survey. Most respondents (87%) had learning objectives specifically for ethics and professionalism, and 87% had family doctors with training or interest in the area leading their efforts. Two-thirds of responding programs had less than 10 hours of scheduled instruction per year, and the most common barriers to effective teaching were the need for faculty development, competing learning needs, and lack of resident interest. Ninety-three percent of respondents assessed ethics and professionalism on their end-of-rotation evaluations, with 86% assessing specific domains. The most common barriers to evaluation were a lack of suitable tools and a lack of faculty comfort and interest. Conclusion: The vast majority of Canadian family medicine postgraduate training programs had learning objectives and designated faculty leads in ethics and professionalism, yet there was little curricular time dedicated to these areas and a perceived lack of resident interest and faculty expertise.
Most programs evaluated ethics and professionalism as part of their end-of-rotation evaluations, but only a small number used novel means of evaluation, and most cited a lack of suitable assessment tools as an important barrier. PMID:23242906

  9. An Introduction to "My Environmental Education Evaluation Resource Assistant" (MEERA), a Web-Based Resource for Self-Directed Learning about Environmental Education Program Evaluation

    ERIC Educational Resources Information Center

    Zint, Michaela

    2010-01-01

    My Environmental Education Evaluation Resource Assistant or "MEERA" is a web-site designed to support environmental educators' program evaluation activities. MEERA has several characteristics that set it apart from other self-directed learning evaluation resources. Readers are encouraged to explore the site and to reflect on the role that…

  10. Can a workbook work? Examining whether a practitioner evaluation toolkit can promote instrumental use.

    PubMed

    Campbell, Rebecca; Townsend, Stephanie M; Shaw, Jessica; Karim, Nidal; Markowitz, Jenifer

    2015-10-01

    In large-scale, multi-site contexts, developing and disseminating practitioner-oriented evaluation toolkits is an increasingly common strategy for building evaluation capacity. Toolkits explain the evaluation process, present evaluation design choices, and offer step-by-step guidance to practitioners. To date, there has been limited research on whether such resources truly foster the successful design, implementation, and use of evaluation findings. In this paper, we describe a multi-site project in which we developed a practitioner evaluation toolkit and then studied the extent to which the toolkit and accompanying technical assistance were effective in promoting successful completion of local-level evaluations and fostering instrumental use of the findings (i.e., whether programs directly used their findings to improve practice; see Patton, 2008). Forensic nurse practitioners from six geographically dispersed service programs completed methodologically rigorous evaluations; furthermore, all six programs used the findings to create programmatic and community-level changes to improve local practice. Implications for evaluation capacity building are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Algunos Criterios para Evaluar Programas de Educacion Superior a Nivel de Posgrado: El Caso Particular de la Administracion Publica (Some Criteria to Evaluate Higher Education Programs at the Graduate Level: The Special Case of Public Administration).

    ERIC Educational Resources Information Center

    Valle, Victor M.

    Intended as a contribution to a workshop discussion on program evaluation in higher education, the paper covers five major evaluation issues. First, it deals with evaluation concepts, explaining the purposes of evaluation; pertinent terms; and the sources of evaluation in public health procedures, the scientific method, the systems approach, and…

  12. Empowerment evaluation: building communities of practice and a culture of learning.

    PubMed

    Fetterman, David M

    2002-02-01

    Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. Program participants--including clients--conduct their own evaluations; an outside evaluator often serves as a coach or additional facilitator, depending on internal program capabilities. Empowerment evaluation has three steps: 1) establishing a mission; 2) taking stock; and 3) planning for the future. These three steps build capacity. They also build a sense of community, often referred to as communities of practice. Empowerment evaluation also helps to create a culture of learning and evaluation within an organization or community.

  13. Traffic control device evaluation program : FY 2016.

    DOT National Transportation Integrated Search

    2017-03-01

    This report presents findings on three different activities conducted in the Traffic Control Device Evaluation Program during the 2016 fiscal year. The first two activities are evaluations of full-matrix color light-emitting diode changeable message ...

  14. The Superfund Innovative Technology Evaluation Program SUMMARY AND CLOSURE REPORT

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) Program promoted the development, commercialization, and implementation of innovative hazardous waste treatment technologies for 20 years. SITE offered a mechanism for conducting joint technology demonstration and evaluation ...

  15. Evaluation as a critical factor of success in local public health accreditation programs.

    PubMed

    Tremain, Beverly; Davis, Mary; Joly, Brenda; Edgar, Mark; Kushion, Mary L; Schmidt, Rita

    2007-01-01

    This article presents the variety of approaches used to conduct evaluations of performance improvement or accreditation systems, while illustrating the complexity of conducting evaluations to inform local public health practice. In addition, we hope to inform the Exploring Accreditation Program about relevant experiences involving accreditation and performance assessment processes, specifically evaluation, as it debates and discusses a national voluntary model. A background of each state is given. To further explore these issues, interviews were conducted with each state's evaluator to gain more in-depth information on the many different evaluation strategies and approaches used. On the basis of the interviews, the authors provide several overall themes, which suggest that evaluation is a critical tool and success factor for performance assessment or accreditation programs.

  16. Planning an integrated agriculture and health program and designing its evaluation: Experience from Western Kenya.

    PubMed

    Cole, Donald C; Levin, Carol; Loechl, Cornelia; Thiele, Graham; Grant, Frederick; Girard, Aimee Webb; Sindi, Kirimi; Low, Jan

    2016-06-01

    Multi-sectoral programs that involve stakeholders in agriculture, nutrition and health care are essential for responding to nutrition problems such as vitamin A deficiency among pregnant and lactating women and their infants in many poor areas of lower income countries. Yet planning such multi-sectoral programs and designing appropriate evaluations, to respond to different disciplinary cultures of evidence, remain a challenge. We describe the context, program development process, and evaluation design of the Mama SASHA project (Sweetpotato Action for Security and Health in Africa) which promoted production and consumption of a bio-fortified, orange-fleshed sweetpotato (OFSP). In planning the program we drew upon information from needs assessments, stakeholder consultations, and a first round of the implementation evaluation of a pilot project. The multi-disciplinary team worked with partner organizations to develop a program theory of change and an impact pathway which identified aspects of the program that would be monitored and established evaluation methods. Responding to the growing demand for greater rigour in impact evaluations, we carried out quasi-experimental allocation by health facility catchment area, repeat village surveys for assessment of change in intervention and control areas, and longitudinal tracking of individual mother-child pairs. Mid-course corrections in program implementation were informed by program monitoring, regular feedback from implementers and partners' meetings. To assess economic efficiency and provide evidence for scaling we collected data on resources used and project expenses. Managing the multi-sectoral program and the mixed methods evaluation involved bargaining and trade-offs that were deemed essential to respond to the array of stakeholders, program funders and disciplines involved. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Overview of a pharmacist anticoagulation certificate program.

    PubMed

    Kirk, Julienne K; Edwards, Rebecca; Brewer, Andrew; Miller, Cathey; Bray, Bryan; Groce, James B

    2017-07-01

    To describe the design of an ongoing anticoagulation certificate program and annual renewal update for pharmacists. Components of the anticoagulation certificate program include home study, pre- and posttest, live sessions, case discussions with evaluation and presentation, an implementation plan, and survey information (program evaluation and use in practice). Clinical reasoning skills were assessed through case work-up and evaluation prior to live presentation. An annual renewal program requires pharmacists to complete home study and case evaluations. A total of 361 pharmacists completed the anticoagulation certificate program between 2002 and 2015. Most (62%) practiced in ambulatory care and 38% in inpatient care settings (8% in both). In the past four years, 71% were working in or starting anticoagulation clinics in ambulatory and inpatient settings. In their evaluations of the program, an average of 90% of participants agreed or strongly agreed the lecture material was relevant and objectives were met. Pharmacists are able to apply knowledge and skills in management of anticoagulation. This structured practice-based continuing education program was intended to enhance pharmacy practice and has achieved that goal. The certificate program in anticoagulation was relevant to pharmacists who attended the program. Copyright © 2017. Published by Elsevier Inc.

  19. Connected Classroom: A Program Evaluation of the Professional Development Program of a One-to-One Educational Technology Initiative in South Carolina

    ERIC Educational Resources Information Center

    Grant, Kelly J.

    2016-01-01

    The purpose of this study was to evaluate the impact of the first year of a multi-year, district-wide professional development program for teachers that accompanied a one-to-one Apple device rollout for all students. A mixed-method research design was used to perform a logic model of program evaluation. Teacher self-reported proficiency in basic…

  20. Use of Bennett's Hierarchical Model in the Evaluation of the Extension Education Program for Cacao Farmers in the Northeast Region of the Dominican Republic. Summary of Research 54.

    ERIC Educational Resources Information Center

    De los Santos, Saturnino; Norland, Emmalou Van Tilburg

    A study evaluated the cacao farmer training program in the Dominican Republic by testing hypothesized relationships among reactions, knowledge and skills, attitudes, aspirations, and some selected demographic characteristics of farmers who attended programs. Bennett's hierarchical model of program evaluation was used as the framework of the study.…

  1. Evaluation in New Jersey Education: A Survey of Present Practices and Recommendations for Future Action.

    ERIC Educational Resources Information Center

    Pinkowski, Francis; And Others

    Current evaluation activities in the New Jersey school system are surveyed, and recommendations for future evaluation efforts are made. The current activities and future developments of school (or school district), statewide, and project (or program) evaluation are discussed individually. The following program objectives are suggested: to raise…

  2. The State of the Empirical Research Literature on Stakeholder Involvement in Program Evaluation

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Fukunaga, Landry L.

    2014-01-01

    Evaluators widely agree that stakeholder involvement is a central aspect of effective program evaluation. With the exception of articles on collaborative evaluation approaches, however, a systematic review of the breadth and depth of the literature on stakeholder involvement has not been published. In this study, we examine peer-reviewed empirical…

  3. Workplace Literacy Program (WPL) at Chinatown Manpower Project, Inc. Final Evaluation.

    ERIC Educational Resources Information Center

    Friedenberg, Joan E.

    This document describes the procedures for and results of the external evaluation of the workplace literacy program for underemployed garment industry workers with low English skills at Chinatown Manpower Project, Inc. in Chinatown in New York City. The document describes the evaluation design and methodology as well as the evaluation results,…

  4. Building Evaluation Capacity in Local Programs for Multisite Nutrition Education Interventions

    ERIC Educational Resources Information Center

    Fourney, Andrew; Gregson, Jennifer; Sugerman, Sharon; Bellow, Andrew

    2011-01-01

    From 2004-2008, capacity to conduct program evaluation was built among the "Network for a Healthy California's" 48 largest local partners. Capacity building was done within a framework of Empowerment Evaluation and Utility-Focused evaluation. Tools included: a Scope of Work template, a handbook, a compendium of surveys, an evaluation…

  5. Dig into Learning: A Program Evaluation of an Agricultural Literacy Innovation

    ERIC Educational Resources Information Center

    Edwards, Erica Brown

    2016-01-01

    This study is a mixed-methods program evaluation of an agricultural literacy innovation in a local school district in rural eastern North Carolina. This evaluation describes the use of a theory-based framework, the Concerns-Based Adoption Model (CBAM), in accordance with Stufflebeam's Context, Input, Process, Product (CIPP) model by evaluating the…

  6. Conducting Program Evaluation with Hispanics in Rural Settings: Ethical Issues and Evaluation Challenges

    ERIC Educational Resources Information Center

    Loi, Claudia X. Aguado; McDermott, Robert J.

    2010-01-01

    Conducting evaluations that are both valid and ethical is imperative for the support and sustainability of programs that address underserved and vulnerable populations. A key component is to have evaluators who are knowledgeable about relevant cultural issues and sensitive to population needs. Hispanics in rural settings are vulnerable for many…

  7. Program Impact Evaluations: An Introduction for Managers of Title VII Projects. A Draft Guidebook.

    ERIC Educational Resources Information Center

    Bissell, Joan S.

    Intended to assist administrators in the planning, management, and utilization of evaluation, this guidebook is designed as an introduction and supplement to other evaluation materials for bilingual education programs being developed under federal sponsorship, including evaluation models for Title VII projects. General information is provided on…

  8. Participatory Evaluation and Learning: A Case Example Involving Ripple Effects Mapping of a Tourism Assessment Program

    ERIC Educational Resources Information Center

    Bhattacharyya, Rani; Templin, Elizabeth; Messer, Cynthia; Chazdon, Scott

    2017-01-01

    Engaging communities through research-based participatory evaluation and learning methods can be rewarding for both a community and Extension. A case study of a community tourism development program evaluation shows how participatory evaluation and learning can be mutually reinforcing activities. Many communities value the opportunity to reflect…

  9. Arts Integration and Students' Reading Achievement: A Formative Evaluation Study

    ERIC Educational Resources Information Center

    Hosfelt, Patricia D.

    2017-01-01

    The purpose of this dissertation was to evaluate essential components of an arts-integration program that may contribute to improved student achievement in elementary reading at the school of study through a formative evaluation. Stufflebeam's CIPP model of program evaluation served as the conceptual framework for the study's findings. Creative…

  10. Planning for the Evaluation of Special Educational Programs: A Resource Guide.

    ERIC Educational Resources Information Center

    Meierhenry, Wesley C.

    Developed along with a tape-slide package, the guide covers evaluation of special educational programs. Robert McIntyre discusses evaluation for decision making; Victor Baldwin considers sources of help and how to use them; and Helmut Hofmann treats objectives as guidelines for action, data collection, and budget planning and evaluation. Wesley…

  11. Public-Interest Values and Program Sustainability: Some Implications for Evaluation Practice

    ERIC Educational Resources Information Center

    Chelimsky, Eleanor

    2014-01-01

    Evaluating the longer-term sustainability of government programs and policies seems in many ways to go beyond the boundaries of typical evaluation practice. Not only have intervention failures over time been difficult to predict, but the question of sustainability itself tends to fall outside current evaluation thinking, timing and functions. This…

  12. 42 CFR 484.52 - Condition of participation: Evaluation of the agency's program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    § 484.52 Condition of participation: Evaluation of the agency's program (42 CFR, Public Health, vol. 5, revised 2010-10-01; subpart: Furnishing of Services). The HHA has... an evaluation conducted by professional people outside the agency working in conjunction with consumers. The evaluation consists of an...

  13. LANDFILL RECLAMATION - POTENTIAL FOR RECYCLING/REUSE AND RESULTS OF THE EVALUATION OF THE COLLIER COUNTY, FLORIDA MITE DEMONSTRATION

    EPA Science Inventory

    In October 1993, the US Environmental Protection Agency (EPA) issued the report "Evaluation of the Collier County Solid Waste Department..."; the Collier County landfill reclamation project was evaluated as a part of EPA's Municipal Solid Waste Innovative Technology Evaluation (MITE) Program. The purpose of the MITE program is t...

  14. Students Training for Academic Readiness (STAR): Year Three Evaluation Report

    ERIC Educational Resources Information Center

    Rainey, Katharine; Sheehan, Daniel; Maloney, Catherine

    2010-01-01

    This report presents findings from the Year 3 evaluation of Texas' state-level Gaining Early Awareness and Readiness for Undergraduate Programs, or GEAR UP, grant. GEAR UP grant requirements include an evaluation component designed to assess program effectiveness and to measure progress toward project goals. To this end, the evaluation considers…

  15. A Product Evaluation of the Selective Abandonment Process for School Budgeting

    ERIC Educational Resources Information Center

    Loofe, Christopher M.

    2016-01-01

    The purpose of this study is to evaluate the degree to which the Selective Abandonment budget process objectives were achieved by analyzing stakeholder perceptions. Use of this evaluation may enable the district to become more effective, efficient, and more fiscally responsible when developing future program budgeting plans. Program evaluation was…

  16. Evaluation of Nosocomial Infection Control Programs in health services

    PubMed Central

    Menegueti, Mayra Gonçalves; Canini, Silvia Rita Marin da Silva; Bellissimo-Rodrigues, Fernando; Laus, Ana Maria

    2015-01-01

    OBJECTIVES: to evaluate the Nosocomial Infection Control Programs in hospital institutions regarding structure and process indicators. METHOD: this is a descriptive, exploratory and quantitative study conducted in 2013. The study population comprised 13 Nosocomial Infection Control Programs of health services in a Brazilian city of the state of São Paulo. Public domain instruments available in the Manual of Evaluation Indicators of Nosocomial Infection Control Practices were used. RESULTS: The indicators with the highest average compliance were "Evaluation of the Structure of the Nosocomial Infection Control Programs" (75%) and "Evaluation of the Epidemiological Surveillance System of Nosocomial Infection" (82%) and those with the lowest mean compliance scores were "Evaluation of Operational Guidelines" (58.97%) and "Evaluation of Activities of Control and Prevention of Nosocomial Infection" (60.29%). CONCLUSION: The use of indicators identified that, despite having produced knowledge about prevention and control of nosocomial infections, there is still a large gap between the practice and the recommendations. PMID:25806637
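The compliance percentages reported above can be reproduced from item-level checklist data. The sketch below is a minimal illustration, assuming a simplified structure in which an indicator is scored as the mean percentage of met checklist items across programs; the function name and the sample data are hypothetical, not drawn from the published instrument.

```python
def percent_compliance(program_checklists):
    """Mean percentage of checklist items met across programs for one indicator.

    program_checklists: one list per program, each item True (met) or False
    (unmet). Hypothetical structure for illustration only.
    """
    per_program = [100.0 * sum(items) / len(items) for items in program_checklists]
    return sum(per_program) / len(per_program)

# Hypothetical data: three programs scored on a four-item structure indicator.
structure = [
    [True, True, False, True],   # 75% compliant
    [True, False, False, True],  # 50% compliant
    [True, True, True, True],    # 100% compliant
]
print(round(percent_compliance(structure), 2))  # prints 75.0
```

Averaging per-program percentages (rather than pooling all items) keeps each program equally weighted regardless of how many checklist items it answered.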

  17. The Relationship of Explicit-Implicit Evaluative Discrepancy to Exercise Dropout in Middle-Aged Adults.

    PubMed

    Berry, Tanya R; Rodgers, Wendy M; Divine, Alison; Hall, Craig

    2018-06-19

    Discrepancies between automatically activated associations (i.e., implicit evaluations) and explicit evaluations of motives (measured with a questionnaire) could lead to greater information processing to resolve discrepancies or self-regulatory failures that may affect behavior. This research examined the relationship of health and appearance exercise-related explicit-implicit evaluative discrepancies, the interaction between implicit and explicit evaluations, and the combined value of explicit and implicit evaluations (i.e., the summed scores) to dropout from a yearlong exercise program. Participants (N = 253) completed implicit health and appearance measures and explicit health and appearance motives at baseline, prior to starting the exercise program. The sum of implicit and explicit appearance measures was positively related to weeks in the program, and discrepancy between the implicit and explicit health measures was negatively related to length of time in the program. Implicit exercise evaluations and their relationships to oft-cited motives such as appearance and health may inform exercise dropout.
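The summed and discrepancy scores described above can be sketched as follows. This is a minimal illustration under the assumption that both measures are standardized before combining (the study's exact scoring procedure is not given here); all names and data are hypothetical.

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a list of scores to mean 0, SD 1 (sample SD)."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def combined_and_discrepancy(implicit, explicit):
    """Return (summed, discrepancy) score lists.

    summed      = z(implicit) + z(explicit)   combined evaluative strength
    discrepancy = |z(implicit) - z(explicit)| explicit-implicit mismatch
    """
    zi, ze = zscores(implicit), zscores(explicit)
    summed = [a + b for a, b in zip(zi, ze)]
    discrepancy = [abs(a - b) for a, b in zip(zi, ze)]
    return summed, discrepancy

implicit = [0.4, -0.2, 0.9, 0.1, -0.5]  # hypothetical implicit evaluation scores
explicit = [5.0, 3.0, 6.0, 2.0, 4.0]    # hypothetical questionnaire motive scores
s, d = combined_and_discrepancy(implicit, explicit)
```

The summed score could then be entered as a (positive) predictor of weeks retained, and the discrepancy score as a (negative) predictor, mirroring the reported pattern.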

  18. A participatory approach to evaluating a national training and institutional change initiative: the BUILD longitudinal evaluation.

    PubMed

    Davidson, Pamela L; Maccalla, Nicole M G; Afifi, Abdelmonem A; Guerrero, Lourdes; Nakazono, Terry T; Zhong, Shujin; Wallace, Steven P

    2017-01-01

    The National Institutes of Health (NIH) funds training programs to increase the numbers and skills of scientists who obtain NIH research grants, but few programs have been rigorously evaluated. The sizeable recent NIH investment in developing programs to increase the diversity of the NIH-funded workforce, implemented through the Diversity Program Consortium (DPC), is unusual in that it also funds a Consortium-wide evaluation plan, which spans the activities of the 10 BUilding Infrastructure Leading to Diversity (BUILD) awardees and the National Research Mentoring Network (NRMN). The purpose of this article is to describe the design and innovations of the evaluation of the BUILD Program's effects on students, faculty, and institutions at the 10 primarily undergraduate BUILD sites. Our approach to this multi-methods quasi-experimental longitudinal evaluation emphasizes stakeholder participation and collaboration. The evaluation plan specifies the major evaluation questions and key short- to long-term outcome measures (or Hallmarks of Success). The Coordination and Evaluation Center (CEC) embarked on a comprehensive evaluation strategy by developing a set of logic models that incorporate the Hallmarks of Success and other outcomes that were collaboratively identified by the DPC. Data were collected from each BUILD site through national surveys from the Higher Education Research Institute at UCLA (HERI), annual follow-up surveys that align with the HERI instruments, site visits and case studies, program encounter data ("tracker" data), and institutional data. The analytic approach involves comparing changes in Hallmarks (key outcomes) within institutions for biomedical students who participated versus those who did not participate in the BUILD program at each institution, as well as between-institution patterns of biomedical students at the BUILD sites and matched institutions that were not BUILD grantees. Case studies provide insights into the institutionalization of these new programs and help to explain the processes that lead to the observed outcomes. Ultimately, the results of the consortium-wide evaluation will be used to inform national policy in higher education and will provide relevant examples of institutional and educational programmatic changes required to diversify the biomedical workforce in the USA.
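The within- and between-institution comparison strategy described above resembles a difference-in-differences contrast: the change among BUILD participants minus the change among non-participants or matched institutions. A minimal sketch, assuming simple pre/post outcome means; the data and the outcome (a science self-efficacy scale) are hypothetical, and the actual BUILD analysis is considerably more elaborate.

```python
def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate:
    (change in treated group) - (change in comparison group)."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical outcome scores at a BUILD site vs. a matched comparison site.
est = diff_in_diff(
    treat_pre=[3.0, 3.2, 2.8],  # BUILD participants, baseline
    treat_post=[3.8, 4.0, 3.9], # BUILD participants, follow-up
    ctrl_pre=[3.1, 2.9, 3.0],   # comparison students, baseline
    ctrl_post=[3.3, 3.1, 3.2],  # comparison students, follow-up
)
```

Here the treated group improved by 0.9 while the comparison group improved by 0.2, so the estimate nets out the shared secular trend and attributes the remaining 0.7 to the program, under the usual parallel-trends assumption.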

  19. Evaluation of a cross-cultural training program for Pakistani educators: Lessons learned and implications for program planning.

    PubMed

    Mazur, Rebecca; Woodland, Rebecca H

    2017-06-01

    In this paper, we share the results of a summative evaluation of PEILI, a US-based adult professional development/training program for secondary school Pakistani teachers. The evaluation was guided by the theories of cultural competence (American Psychological Association, 2003; Bamberger, 1999; Wadsworth, 2001) and established frameworks for the evaluation of professional development/training and instructional design (Bennett, 1975; Guskey, 2002; King, 2014; Kirkpatrick, 1967). The explicit and implicit stakeholder assumptions about the connections between program resources, activities, outputs, and outcomes are described. Participant knowledge and skills were measured via scores on a pre/posttest of professional knowledge, and a standards-based performance assessment rubric. In addition to measuring short-term program outcomes, we also sought to incorporate theory-driven thinking into the evaluation design. Hence, we examined participant self-efficacy and access to social capital, two evidenced-based determinants or "levers" that theoretically explain the transformative space between an intervention and its outcomes (Chen, 2012). Data about program determinants were collected and analyzed through a pre/posttest of self-efficacy and social network analysis. Key evaluation findings include participant acquisition of new instructional skills, increased self-efficacy, and the formation of a nascent professional support network. Lessons learned and implications for the design and evaluation of cross-cultural teacher professional development programs are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. [Process evaluation in relation to effectiveness assessment: experiences with school-based programs].

    PubMed

    Ariza, Carles; Villalbí, Joan R; Sánchez-Martínez, Francesca; Nebot, Manel

    2011-06-01

    Evaluation of public health interventions usually focuses on the quality of design and research methods, and less on the quality of the intervention or process evaluation. In process evaluation of school-based interventions, key issues are how completely the intervention is carried out and adherence to the protocol. In addition, exploration of intermediate variables, such as those that influence (and often predict) preventable behavior, is highly useful. This article describes the basic concepts in this topic, using examples of the effectiveness of some preventive interventions carried out in schools. The interventions discussed were mainly quasi-experimental studies, based on data from programs promoted by public health teams in the city of Barcelona. Data from process evaluation of preventive programs in secondary schools that underwent formal assessment of their effectiveness are provided. The examples are drawn from programs for the prevention of HIV infection and unprotected sexual intercourse (the PRESSEC program) and of drug consumption (the PASE, PASE.bcn and x kpts programs). These examples show why the intervention process influences the impact of the programs and their results. Thorough planning of process evaluation is essential to obtain valid indicators that will identify, in the effectiveness evaluation of the intervention, the most efficacious strategies to obtain positive outcomes. Copyright © 2011 Sociedad Española de Salud Pública y Administración Sanitaria. Published by Elsevier Espana. All rights reserved.
