NASA Technical Reports Server (NTRS)
Rey, Charles A.
1991-01-01
The development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements are discussed. Efforts were directed towards the following task areas: design and development of a High Temperature Acoustic Levitator (HAL) for containerless processing and property measurements at high temperatures; testing of the HAL module to establish this technology for use as a positioning device for microgravity uses; construction and evaluation of a brassboard hot wall Acoustic Levitation Furnace; construction and evaluation of a noncontact temperature measurement (NCTM) system based on AGEMA thermal imaging camera; construction of a prototype Division of Amplitude Polarimetric Pyrometer for NCTM of levitated specimens; evaluation of and recommendations for techniques to control contamination in containerless materials processing chambers; and evaluation of techniques for heating specimens to high temperatures for containerless materials experimentation.
A method to evaluate process performance by integrating time and resources
NASA Astrophysics Data System (ADS)
Wang, Yu; Wei, Qingjie; Jin, Shuang
2017-06-01
The purpose of process mining is to improve an enterprise's existing processes, so measuring process performance is particularly important. However, current research on performance evaluation methods remains insufficient: existing approaches rely mainly on either time or resource statistics, and such basic statistics alone cannot evaluate process performance well. In this paper, a method for evaluating process performance that integrates the time dimension and the resource dimension is proposed. The method can be used to measure the utilization and redundancy of resources in a process. The paper introduces the design principles and formulas of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, the method is applied to analyse the event log of a telephone maintenance process and an optimization plan is proposed.
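The abstract gives no formulas, so the following is only a minimal sketch, under assumed field names and an invented event log, of one way to combine the time and resource dimensions: per-resource utilization over an observation window, with its idle complement as a redundancy indicator.

```python
from collections import defaultdict

# Illustrative sketch only: the paper's actual formulas are not given in the
# abstract. Field names (case_id, resource, start, end) and the example log
# are assumptions, not taken from the paper.
def resource_utilization(event_log, observation_window):
    """Return busy-time utilization and redundancy (idle share) per resource."""
    busy = defaultdict(float)
    for event in event_log:
        busy[event["resource"]] += event["end"] - event["start"]
    window_start, window_end = observation_window
    available = window_end - window_start
    metrics = {}
    for resource, busy_time in busy.items():
        utilization = busy_time / available
        metrics[resource] = {
            "utilization": utilization,        # share of the window spent on work
            "redundancy": 1.0 - utilization,   # share of the window spent idle
        }
    return metrics

# Example: two maintenance cases handled by one technician in an 8-hour window.
log = [
    {"case_id": "c1", "resource": "tech_1", "start": 0.0, "end": 2.5},
    {"case_id": "c2", "resource": "tech_1", "start": 3.0, "end": 4.0},
]
print(resource_utilization(log, (0.0, 8.0)))
```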
Vagos, Paula; Rijo, Daniel; Santos, Isabel M
2016-04-01
Relatively little is known about measures used to investigate the validity and applications of social information processing theory. The Scenes for Social Information Processing in Adolescence includes items built using a participatory approach to evaluate the attribution of intent, emotion intensity, response evaluation, and response decision steps of social information processing. We evaluated a sample of 802 Portuguese adolescents (61.5% female; mean age = 16.44 years old) using this instrument. Item analysis and exploratory and confirmatory factor analytic procedures were used for psychometric examination. Two measures for attribution of intent were produced, including hostile and neutral; along with 3 emotion measures, focused on negative emotional states; 8 response evaluation measures; and 4 response decision measures, including prosocial and impaired social behavior. All of these measures achieved good internal consistency values and fit indicators. Boys seemed to favor and choose overt and relational aggression behaviors more often; girls conveyed higher levels of neutral attribution, sadness, and assertiveness and passiveness. The Scenes for Social Information Processing in Adolescence achieved adequate psychometric results and seems a valuable alternative for evaluating social information processing, even if it is essential to continue investigation into its internal and external validity. (c) 2016 APA, all rights reserved.
Newton, R. L.; Thomson, J. L.; Rau, K.; Duhe’, S.; Sample, A.; Singleton, N.; Anton, S. D.; Webber, L. S.; Williamson, D. A.
2011-01-01
Purpose To evaluate the implementation of intervention components of the Louisiana Health study, which was a multi-component childhood obesity prevention program conducted in rural schools. Design Content analysis. Setting Process evaluation assessed implementation in the classrooms, gym classes, and cafeterias. Subjects Classroom teachers (n = 232), physical education teachers (n = 53), food service managers (n = 33), and trained observers (n = 9). Measures Five process evaluation measures were created: Physical Education Questionnaire (PEQ), Intervention Questionnaire (IQ), Food Service Manager Questionnaire (FSMQ), Classroom Observation (CO) and School Nutrition Environment Observation (SNEO). Analysis Inter-rater reliability and internal consistency were conducted on all measures. ANOVA and Chi-square were used to compare differences across study groups on questionnaires and observations. Results The PEQ and one sub-scale from the FSMQ were eliminated because their reliability coefficients fell below acceptable standards. The sub-scale internal consistencies for the IQ, FSMQ, CO, and SNEO (all Cronbach’s α > .60) were acceptable. Conclusions After the initial 4 months of intervention, there was evidence that the Louisiana Health intervention was being implemented as it was designed. In summary, four process evaluation measures were found to be sufficiently reliable and valid for assessing the delivery of various aspects of a school-based obesity prevention program. These process measures could be modified to evaluate the delivery of other similar school-based interventions. PMID:21721969
ERIC Educational Resources Information Center
Guerra-Lopez, Ingrid; Toker, Sacip
2012-01-01
This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
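As a rough, hedged illustration of the kind of analysis described (the authors' actual estimators are not reproduced in the abstract), the sketch below treats the per-dilution coefficient of variation as a precision indicator and a zero-intercept fit to the dilution series as a proportionality indicator; all counts are invented.

```python
import numpy as np

# Hedged sketch: a dilution-series design with replicate counts, evaluated for
# precision (per-dilution CV) and proportionality (zero-intercept fit).
dilution_fractions = np.array([1.0, 0.75, 0.5, 0.25])
# Replicate counts (cells/mL) at each dilution -- made-up numbers for illustration.
counts = np.array([
    [1.02e6, 0.98e6, 1.01e6],
    [0.77e6, 0.74e6, 0.76e6],
    [0.52e6, 0.49e6, 0.51e6],
    [0.26e6, 0.24e6, 0.25e6],
])

mean_counts = counts.mean(axis=1)
cv_per_dilution = counts.std(axis=1, ddof=1) / mean_counts   # precision indicator

# Proportional (zero-intercept) model: expected count = slope * dilution fraction.
slope = (dilution_fractions @ mean_counts) / (dilution_fractions @ dilution_fractions)
residuals = mean_counts - slope * dilution_fractions
r2 = 1 - (residuals ** 2).sum() / ((mean_counts - mean_counts.mean()) ** 2).sum()

print("CV per dilution:", np.round(cv_per_dilution, 3))
print(f"proportional fit slope = {slope:.3e}, R^2 = {r2:.4f}")
```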
A nationwide survey of state-mandated evaluation practices for domestic violence agencies.
Riger, Stephanie; Staggs, Susan L
2011-01-01
Many agencies serving survivors of domestic violence are required to evaluate their services. Three possible evaluation strategies include: a) process measurement, which typically involves a frequency count of agency activities, such as the number of counseling hours given; b) outcome evaluation, which measures the impact of agency activities on clients, such as increased understanding of the dynamics of abuse; or c) performance measurement, which assesses the extent to which agencies achieve their stated goals. Findings of a telephone survey of state funders of domestic violence agencies in the United States revealed that most states (67%) require only process measurement, while fewer than 10% require performance measurement. Most (69%) funders reported satisfaction with their evaluation strategy and emphasized the need for involvement of all stakeholders, especially grantees, in developing an evaluation.
Metrology: Calibration and measurement processes guidelines
NASA Technical Reports Server (NTRS)
Castrup, Howard T.; Eicke, Woodward G.; Hayes, Jerry L.; Mark, Alexander; Martin, Robert E.; Taylor, James L.
1994-01-01
The guide is intended as a resource to aid engineers and systems contractors in the design, implementation, and operation of metrology, calibration, and measurement systems, and to assist NASA personnel in the uniform evaluation of such systems supplied or operated by contractors. Methodologies and techniques acceptable in fulfilling metrology quality requirements for NASA programs are outlined. The measurement process is covered from a high level through more detailed discussions of key elements within the process. Emphasis is given to the flowdown of project requirements to measurement system requirements, then through the activities that will provide measurements with defined quality. In addition, innovations and techniques for error analysis, development of statistical measurement process control, optimization of calibration recall systems, and evaluation of measurement uncertainty are presented.
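For orientation only, and not taken from the guide itself, the sketch below shows a minimal measurement-uncertainty evaluation: a Type A component estimated from repeat readings is combined with an assumed Type B component from a calibration certificate and expanded with a coverage factor k = 2.

```python
import math

# Minimal illustration (assumed values, not the NASA guide's procedures).
readings = [10.012, 10.009, 10.011, 10.010, 10.013]   # hypothetical repeat measurements
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
u_type_a = s / math.sqrt(n)            # standard uncertainty of the mean
u_type_b = 0.004                       # e.g. from a calibration certificate (assumed)

u_combined = math.sqrt(u_type_a ** 2 + u_type_b ** 2)
U_expanded = 2 * u_combined            # coverage factor k = 2 (~95 % level)
print(f"mean = {mean:.4f}, u_c = {u_combined:.4f}, U (k=2) = {U_expanded:.4f}")
```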
Library Programs. Evaluating Federally Funded Public Library Programs.
ERIC Educational Resources Information Center
Office of Educational Research and Improvement (ED), Washington, DC.
Following an introduction by Betty J. Turock, nine reports examine key issues in library evaluation: (1) "Output Measures and the Evaluation Process" (Nancy A. Van House) describes measurement as a concept to be understood in the larger context of planning and evaluation; (2) "Adapting Output Measures to Program Evaluation"…
Newton, Robert L; Thomson, Jessica L; Rau, Kristi K; Ragusa, Shelly A; Sample, Alicia D; Singleton, Nakisha N; Anton, Stephen D; Webber, Larry S; Williamson, Donald A
2011-01-01
To evaluate the implementation of intervention components of the Louisiana Health study, which was a multicomponent childhood obesity prevention program conducted in rural schools. Content analysis. Process evaluation assessed implementation in classrooms, gym classes, and cafeterias. Classroom teachers (n = 232), physical education teachers (n = 53), food service managers (n = 33), and trained observers (n = 9). Five process evaluation measures were created: Physical Education Questionnaire (PEQ), Intervention Questionnaire (IQ), Food Service Manager Questionnaire (FSMQ), Classroom Observation (CO), and School Nutrition Environment Observation (SNEO). Interrater reliability and internal consistency were assessed on all measures. Analysis of variance and χ(2) were used to compare differences across study groups on questionnaires and observations. The PEQ and one subscale from the FSMQ were eliminated because their reliability coefficients fell below acceptable standards. The subscale internal consistencies for the IQ, FSMQ, CO, and SNEO (all Cronbach α > .60) were acceptable. After the initial 4 months of intervention, there was evidence that the Louisiana Health intervention was being implemented as it was designed. In summary, four process evaluation measures were found to be sufficiently reliable and valid for assessing the delivery of various aspects of a school-based obesity prevention program. These process measures could be modified to evaluate the delivery of other similar school-based interventions.
Functional outcomes assessment in shoulder surgery
Wylie, James D; Beckmann, James T; Granger, Erin; Tashjian, Robert Z
2014-01-01
The effective evaluation and management of orthopaedic conditions, including shoulder disorders, relies upon understanding the level of disability created by the disease process. Validated outcome measures are critical to the evaluation process. Traditionally, outcome measures have been physician-derived objective evaluations, including range of motion and radiologic evaluations. However, these measures can marginalize a patient's perception of their disability or outcome. As a result of these limitations, patient self-reported outcome measures have become popular over the last quarter century and are currently primary tools for evaluating outcomes of treatment. Patient-reported outcome measures can be general health-related quality-of-life measures, health utility measures, region-specific health-related quality-of-life measures, or condition-specific measures. Several patient self-reported outcome measures have been developed and validated for evaluating patients with shoulder disorders. Computer adaptive testing will likely play an important role in the arsenal of measures used to evaluate shoulder patients in the future. The purpose of this article is to review the general health-related quality-of-life measures as well as the joint-specific and condition-specific measures utilized in evaluating patients with shoulder conditions. Advances in computer adaptive testing as it relates to assessing dysfunction in shoulder conditions will also be reviewed. PMID:25405091
Teacher Evaluations: Use or Misuse?
ERIC Educational Resources Information Center
Warring, Douglas F.
2015-01-01
This manuscript examines value added measures used in teacher evaluations. The evaluations are often based on limited observations and use student growth as measured by standardized tests. These measures typically do not use multiple measures or consider other factors in the teaching and learning process. This manuscript identifies some of the…
Does daily nurse staffing match ward workload variability? Three hospitals' experiences.
Gabbay, Uri; Bukchin, Michael
2009-01-01
Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nursing staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures by dividing observed by expected. Two charts, acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take into account patient severity since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance of, and the need for, process evaluations and monitoring. The methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation by using nurse workload capacity statistical process control charts or for inter-ward evaluation using a standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for evaluating daily staffing appropriateness, which is easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. The method is not limited to crude measures; rather, it can use weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).
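The abstract does not specify how the control charts were constructed, so the following is only a sketch of the general idea under assumed data and limits: each day's staffing is standardized as observed/expected, and the share of days inside two- and three-sigma bands stands in for the acceptable and tolerable criteria.

```python
import numpy as np

# Hedged sketch (invented numbers; the paper's exact chart construction and
# limits are not given in the abstract).
observed = np.array([11, 9, 12, 10, 8, 13, 10, 9, 11, 12], dtype=float)    # nurses on duty
expected = np.array([10, 10, 11, 10, 10, 12, 10, 10, 11, 11], dtype=float)  # workload-derived need

ratio = observed / expected                 # standardized daily measure (observed/expected)
center = ratio.mean()
sigma = ratio.std(ddof=1)

acceptable = np.abs(ratio - center) <= 2 * sigma   # assumed "acceptable" band
tolerable = np.abs(ratio - center) <= 3 * sigma    # assumed "tolerable" band

print("share of days acceptable:", acceptable.mean())   # compare against the 50 % criterion
print("share of days tolerable:", tolerable.mean())     # compare against the 80 % criterion
```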
Rigor + Results = Impact: Measuring Impact with Integrity (Invited)
NASA Astrophysics Data System (ADS)
Davis, H. B.; Scalice, D.
2013-12-01
Are you struggling to measure and explain the impact of your EPO efforts? The NASA Astrobiology Institute (NAI) is using an evaluation process to determine the impact of its 15 EPO projects with over 200 activities. What is the current impact? How can it be improved in the future? We have developed a process that preserves autonomy at the project implementation level while still painting a picture of the entire portfolio. The impact evaluation process looks at an education/public outreach activity through its entire project cycle. Working with an external evaluator, education leads: 1) rate the quality/health of an activity in each stage of its cycle, and 2) determine the impact based on the results of the evaluation and the rigor of the methods used. The process has created a way to systematically codify a project's health and its impact, while offering support for improving both impact and how it is measured.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
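The sketch below is not the AIPS model itself; it illustrates the performability idea with an assumed three-state Markov reliability model (two processors up, one up, failed) whose state probabilities at a mission time induce a probability law over performance levels, from which the mean and variance are computed as measures of merit.

```python
import math

# Illustrative performability sketch (assumed numbers, not the AIPS models).
lam = 1e-3            # failure rate per processor per hour (assumed)
t = 100.0             # mission time in hours (assumed)

# Closed-form state probabilities for the pure-death chain 2-up -> 1-up -> failed.
p_two_up = math.exp(-2 * lam * t)
p_one_up = 2 * math.exp(-lam * t) - 2 * math.exp(-2 * lam * t)
p_failed = 1.0 - p_two_up - p_one_up
probs = [p_two_up, p_one_up, p_failed]

performance = [1.0, 0.6, 0.0]   # assumed relative throughput in each state

mean_perf = sum(p * g for p, g in zip(probs, performance))
var_perf = sum(p * (g - mean_perf) ** 2 for p, g in zip(probs, performance))
print(f"state probabilities at t={t:.0f} h: {[round(p, 4) for p in probs]}")
print(f"performability: mean = {mean_perf:.4f}, variance = {var_perf:.6f}")
```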
EVALUATION OF A TEST METHOD FOR MEASURING INDOOR AIR EMISSIONS FROM DRY-PROCESS PHOTOCOPIERS
A large chamber test method for measuring indoor air emissions from office equipment was developed, evaluated, and revised based on the initial testing of four dry-process photocopiers. Because all chambers may not necessarily produce similar results (e.g., due to differences in ...
Evaluation Methods of The Text Entities
ERIC Educational Resources Information Center
Popa, Marius
2006-01-01
The paper highlights some evaluation methods to assess the quality characteristics of the text entities. The main concepts used in building and evaluation processes of the text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The evaluation process for automatic evaluation of the text entities is…
Measuring Down: Evaluating Digital Storytelling as a Process for Narrative Health Promotion.
Gubrium, Aline C; Fiddian-Green, Alice; Lowe, Sarah; DiFulvio, Gloria; Del Toro-Mejías, Lizbeth
2016-05-15
Digital storytelling (DST) engages participants in a group-based process to create and share narrative accounts of life events. We present key evaluation findings of a 2-year, mixed-methods study that focused on effects of participating in the DST process on young Puerto Rican Latinas' self-esteem, social support, empowerment, and sexual attitudes and behaviors. Quantitative results did not show significant changes in the expected outcomes. However, in our qualitative findings we identified several ways in which the DST process had positive, health-bearing effects. We argue for the importance of "measuring down" to reflect the locally grounded, felt experiences of participants who engage in the process, as current quantitative scales do not "measure up" to accurately capture these effects. We end by suggesting the need to develop mixed-methods, culturally relevant, and sensitive evaluation tools that prioritize process effects as they inform intervention and health promotion. © The Author(s) 2016.
2011-09-01
[Report documentation page fragment; the abstract itself is not recoverable.] Title fragment: "... Evaluation Process through Capabilities-Based Analysis". Author: Eric J. Lednicky. Performing organization: Naval Postgraduate School, Monterey, CA 93943-5000. Recoverable section heading: Measures of Effectiveness / Measures of Performance.
Rosas, Scott R; Ridings, John W
2017-02-01
The past decade has seen an increase in measurement development research in the social and health sciences that featured the use of concept mapping as a core technique. The purpose, application, and utility of concept mapping have varied across this emerging literature. Despite the variety of uses and range of outputs, little has been done to critically review how researchers have approached the application of concept mapping in the measurement development and evaluation process. This article focuses on a review of the current state of practice regarding the use of concept mapping as a methodological tool in this process. We systematically reviewed 23 scale or measure development and evaluation studies, and detail the application of concept mapping in the context of traditional measurement development and psychometric testing processes. Although several limitations surfaced, we found several strengths in the contemporary application of the method. We determined that concept mapping (a) provides a solid method for establishing content validity, (b) facilitates researcher decision-making, (c) offers insight into target population perspectives that are integrated a priori, and (d) establishes a foundation for analytical and interpretative choices. Based on these results, we outline how concept mapping can be situated in the measurement development and evaluation processes for new instrumentation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd
2016-07-01
We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.
Note: Evaluation of slurry particle size analyzers for chemical mechanical planarization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Sunjae; Kulkarni, Atul; Qin, Hongyi
In the chemical mechanical planarization (CMP) process, slurry particle size is important because large particles can cause defects. Hence, selection of an appropriate particle measuring system is necessary in the CMP process. In this study, a scanning mobility particle sizer (SMPS) and dynamic light scattering (DLS) were compared for particle size distribution (PSD) measurements. In addition, the actual particle size and shape were confirmed by transmission electron microscope (TEM) results. SMPS classifies the particle size according to the electrical mobility and measures the particle concentration (single particle measurement). On the other hand, DLS measures the particle size distribution by analyzing scattered light from multiple particles (multiple particle measurement). For the slurry particles selected for evaluation, it is observed that SMPS shows bi-modal particle sizes of 30 nm and 80 nm, which closely match the TEM measurements, whereas DLS shows only a single-mode distribution in the range of 90 nm to 100 nm and is unable to measure the small particles. Hence, SMPS can be a better choice for the evaluation of CMP slurry particle size and concentration measurements.
Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J
2015-01-01
Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions. The engine was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991. Accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find application in software applications that process medical data.
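The authors' actual regular expressions are not given in the abstract; the simplified pattern below merely illustrates the kind of size-measurement extraction involved, matching one- to three-axis measurements reported in millimetres or centimetres.

```python
import re

# Simplified stand-in for the engine's patterns (not the authors' actual regexes):
# matches sizes such as "5 mm", "3.2 x 2.1 cm" or "1.5 x 1.2 x 0.8 cm".
MEASUREMENT = re.compile(
    r"(?P<values>\d+(?:\.\d+)?(?:\s*[x×]\s*\d+(?:\.\d+)?){0,2})\s*(?P<unit>mm|cm)\b",
    re.IGNORECASE,
)

report = ("Stable 1.5 x 1.2 cm nodule in the right upper lobe, "
          "previously 1.7 x 1.3 cm; new 4 mm ground-glass focus.")

for match in MEASUREMENT.finditer(report):
    print(match.group("values"), match.group("unit"))
# -> 1.5 x 1.2 cm / 1.7 x 1.3 cm / 4 mm
```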
Performance measurement: integrating quality management and activity-based cost management.
McKeon, T
1996-04-01
The development of an activity-based management system provides a framework for developing performance measures integral to quality and cost management. Performance measures that cross operational boundaries and embrace core processes provide a mechanism to evaluate operational results related to strategic intention and internal and external customers. The author discusses this measurement process that allows managers to evaluate where they are and where they want to be, and to set a course of action that closes the gap between the two.
Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias
2017-01-31
Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, and thus the evaluation of rehabilitation is complex. The use of time-series measurements is susceptible to statistical change due to process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
Process-Oriented Measurement Using Electronic Tangibles
ERIC Educational Resources Information Center
Veerbeek, Jochanan; Verhaegh, Janneke; Elliott, Julian G.; Resing, Wilma C. M.
2017-01-01
This study evaluated a new measure for analyzing the process of children's problem solving in a series completion task. This measure focused on a process that we entitled the "Grouping of Answer Pieces" (GAP) that was employed to provide information on problem representation and restructuring. The task was conducted using an electronic…
Effects of image processing on the detective quantum efficiency
NASA Astrophysics Data System (ADS)
Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na
2010-04-01
Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, because the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of the hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and flat-field (white) images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results showed that the modifications had a considerable influence on the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in the same way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
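For reference, evaluations of this kind conventionally relate the three measured quantities through the standard IEC-style expression (a general formula, not this study's specific computation):

$$\mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)}, \qquad \mathrm{NNPS}(f) \;=\; \frac{\mathrm{NPS}(f)}{(\text{large-area signal})^{2}},$$

where q is the incident photon fluence per unit area of the RQA5 beam.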
Evaluation 101: How One Department Embraced the Process
ERIC Educational Resources Information Center
Gothard, Katina; Gorham, Jayne
2011-01-01
An evaluation system was designed and implemented by a faculty support department to measure the value, identify gaps, and improve the quality of their training at a midsize community college. The Targeted Evaluation Process (Combs & Falletta, 2000) was selected as the evaluation framework because of its applicability to training and nontraining…
ERIC Educational Resources Information Center
Labin, Susan N.
2014-01-01
A fundamental reason for doing evaluation capacity building (ECB) is to improve program outcomes. Developing common measures of outcomes and the activities, processes, and factors that lead to these outcomes is an important step in moving the science and the practice of ECB forward. This article identifies a number of existing ECB measurement…
ERIC Educational Resources Information Center
Walsh, Maura
2013-01-01
Although there is wide consensus that teacher evaluation processes should be used to identify and measure effective teaching, this has always been an elusive goal. How teachers perceive the evaluative process is a crucial determiner of how the results of the evaluations are utilized. In 2010, the Massachusetts Department of Elementary and…
The "Process" of Process Use: Methods for Longitudinal Assessment in a Multisite Evaluation
ERIC Educational Resources Information Center
Shaw, Jessica; Campbell, Rebecca
2014-01-01
Process use refers to the ways in which stakeholders and/or evaluands change as a function of participating in evaluation activities. Although the concept of process use has been well discussed in the literature, exploration of methodological strategies for the measurement and assessment of process use has been limited. Typically, empirical…
New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise
NASA Astrophysics Data System (ADS)
Pal'a, Jozef; Ušák, Elemír
2016-03-01
A new method of magnetic Barkhausen noise (MBN) measurement, together with optimization of the measured data processing, was tested for the non-destructive evaluation of ferromagnetic materials. Using this method we investigated whether the sensitivity and stability of measurement results can be enhanced by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a comprehensive set of MBN data from minor hysteresis loops is measured. The MBN data are then collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measurement method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude and, at the same time, is noticeably more sensitive to the evaluated variable.
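For context, the traditional MBN parameter referred to above is the root mean square of the sampled Barkhausen noise voltage burst, conventionally defined (standard definition, not the paper's proposed parameter) as

$$\mathrm{RMS} \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N} v_i^{2}},$$

where v_i are the N samples of the MBN signal within the analysed field interval.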
Ground robotic measurement of aeolian processes
USDA-ARS?s Scientific Manuscript database
Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These d...
Robotic tool positioning process using a multi-line off-axis laser triangulation sensor
NASA Astrophysics Data System (ADS)
Pinto, T. C.; Matos, G.
2018-03-01
Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.
Abildgaard, Johan S.; Saksvik, Per Ø.; Nielsen, Karina
2016-01-01
Organizational interventions aiming at improving employee health and wellbeing have proven to be challenging to evaluate. To analyze intervention processes two methodological approaches have widely been used: quantitative (often questionnaire data), or qualitative (often interviews). Both methods are established tools, but their distinct epistemological properties enable them to illuminate different aspects of organizational interventions. In this paper, we use the quantitative and qualitative process data from an organizational intervention conducted in a national postal service, where the Intervention Process Measure questionnaire (N = 285) as well as an extensive interview study (N = 50) were used. We analyze what type of knowledge about intervention processes these two methodologies provide and discuss strengths and weaknesses as well as potentials for mixed methods evaluation methodologies. PMID:27713707
An Approach to the Evaluation of Hypermedia.
ERIC Educational Resources Information Center
Knussen, Christina; And Others
1991-01-01
Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…
Lee, Heewon; Contento, Isobel R.; Koch, Pamela
2012-01-01
Objective To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. Design A process evaluation study based on a systematic conceptual model. Setting Five middle schools in New York City. Participants 562 students in 20 classes and their science teachers (n=8). Main Outcome Measures Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers’ curriculum evaluation, and satisfaction with teaching the curriculum. Analysis Descriptive statistics and Spearman’s Rho Correlation for quantitative analysis and content analysis for qualitative data were used. Results Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teachers’ satisfaction with teaching the curriculum was highly correlated with students’ satisfaction (p <.05). Teachers’ perception of amount of student work was negatively correlated with implementation and with student satisfaction (p<.05). Conclusions and implications Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. PMID:23321021
ERIC Educational Resources Information Center
Morgan, Philip
2008-01-01
The use of evaluation to examine and improve the quality of teaching and courses is now a component of most universities. However, despite the various methods and opportunities for evaluation, a lack of understanding of the processes, measures and value are some of the major impediments to effective evaluation. Evaluation requires an understanding…
A Nationwide Survey of State-Mandated Evaluation Practices for Domestic Violence Agencies
ERIC Educational Resources Information Center
Riger, Stephanie; Staggs, Susan L.
2011-01-01
Many agencies serving survivors of domestic violence are required to evaluate their services. Three possible evaluation strategies include: a) process measurement, which typically involves a frequency count of agency activities, such as the number of counseling hours given; b) outcome evaluation, which measures the impact of agency activities on…
Disparities in the diagnostic process of Duchenne and Becker muscular dystrophy.
Holtzer, Caleb; Meaney, F John; Andrews, Jennifer; Ciafaloni, Emma; Fox, Deborah J; James, Katherine A; Lu, Zhenqiang; Miller, Lisa; Pandya, Shree; Ouyang, Lijing; Cunniff, Christopher
2011-11-01
To determine whether sociodemographic factors are associated with delays at specific steps in the diagnostic process of Duchenne and Becker muscular dystrophy. We examined abstracted medical records for 540 males from population-based surveillance sites in Arizona, Colorado, Georgia, Iowa, and western New York. We used linear regressions to model the association of three sociodemographic characteristics with age at initial medical evaluation, first creatine kinase measurement, and earliest DNA analysis while controlling for changes in the diagnostic process over time. The analytical dataset included 375 males with information on family history of Duchenne and Becker muscular dystrophy, neighborhood poverty levels, and race/ethnicity. Black and Hispanic race/ethnicity predicted older ages at initial evaluation, creatine kinase measurement, and DNA testing (P < 0.05). A positive family history of Duchenne and Becker muscular dystrophy predicted younger ages at initial evaluation, creatine kinase measurement and DNA testing (P < 0.001). Higher neighborhood poverty was associated with earlier ages of evaluation (P < 0.05). Racial and ethnic disparities in the diagnostic process for Duchenne and Becker muscular dystrophy are evident even after adjustment for family history of Duchenne and Becker muscular dystrophy and changes in the diagnostic process over time. Black and Hispanic children are initially evaluated at older ages than white children, and the gap widens at later steps in the diagnostic process.
Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.
Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias
2016-01-01
To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
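As a hedged illustration of the segmented-regression form of interrupted time series analysis mentioned above (invented data, not the cited evaluation), the model allows both a level change and a slope change at the intervention point:

```python
import numpy as np

# Segmented regression: y = b0 + b1*time + b2*intervention + b3*time_after + error.
time = np.arange(24, dtype=float)          # e.g. 24 monthly observations
intervention = (time >= 12).astype(float)  # indicator for the post-intervention period
time_after = np.where(time >= 12, time - 12, 0.0)

rng = np.random.default_rng(0)             # simulated outcome with a drop at month 12
y = 50 + 0.2 * time - 5 * intervention - 0.5 * time_after + rng.normal(0, 1, time.size)

X = np.column_stack([np.ones_like(time), time, intervention, time_after])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline level", "baseline slope", "level change", "slope change"], beta):
    print(f"{name:>15}: {b:+.2f}")
```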
Using Analytic Hierarchy Process in Textbook Evaluation
ERIC Educational Resources Information Center
Kato, Shigeo
2014-01-01
This study demonstrates the application of the analytic hierarchy process (AHP) in English language teaching materials evaluation, focusing in particular on its potential for systematically integrating different components of evaluation criteria in a variety of teaching contexts. AHP is a measurement procedure wherein pairwise comparisons are made…
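Because the record is truncated, the sketch below only illustrates the core AHP step it mentions: pairwise comparisons of evaluation criteria (invented here) on Saaty's 1-9 scale are converted into priority weights, in this case by the geometric-mean method.

```python
import numpy as np

# Minimal AHP sketch (criteria and judgements are invented for illustration).
criteria = ["content", "layout", "exercises"]
A = np.array([
    [1.0, 3.0, 5.0],    # content is moderately preferred to layout, strongly to exercises
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

geo_means = A.prod(axis=1) ** (1.0 / A.shape[1])
weights = geo_means / geo_means.sum()
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
# Candidate textbooks would then be scored on each criterion and ranked by the
# weighted sum of their criterion scores.
```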
Benchmarking: measuring the outcomes of evidence-based practice.
DeLise, D C; Leasure, A R
2001-01-01
Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.
'Healthy Eating and Lifestyle in Pregnancy (HELP)' trial: Process evaluation framework.
Simpson, Sharon A; Cassidy, Dunla; John, Elinor
2014-07-01
We developed and tested in a cluster RCT a theory-driven group-based intervention for obese pregnant women. It was designed to support women to moderate weight gain during pregnancy and reduce BMI one year after birth, in addition to targeting secondary health and wellbeing outcomes. In line with MRC guidance on developing and evaluating complex interventions in health, we conducted a process evaluation alongside the trial. This paper describes the development of the process evaluation framework. This cluster RCT recruited 598 pregnant women. Women in the intervention group were invited to attend a weekly weight-management group. Following a review of relevant literature, we developed a process evaluation framework which outlined key process indicators that we wanted to address and how we would measure these. Central to the process evaluation was to understand the mechanism of effect of the intervention. We utilised a logic-modelling approach to describe the intervention which helped us focus on what potential mediators of intervention effect to measure, and how. The resulting process evaluation framework was designed to address 9 core elements; context, reach, exposure, recruitment, fidelity, recruitment, retention, contamination and theory-testing. These were assessed using a variety of qualitative and quantitative approaches. The logic model explained the processes by which intervention components bring about change in target outcomes through various mediators and theoretical pathways including self-efficacy, social support, self-regulation and motivation. Process evaluation is a key element in assessing the effect of any RCT. We developed a process evaluation framework and logic model, and the results of analyses using these will offer insights into why the intervention is or is not effective. Copyright © 2014.
Wooten, Kevin C; Calhoun, William J; Bhavnani, Suresh; Rose, Robert M; Ameredes, Bill; Brasier, Allan R
2015-10-01
There is growing consensus about the factors critical for development and productivity of multidisciplinary teams, but few studies have evaluated their longitudinal changes. We present a longitudinal study of 10 multidisciplinary translational teams (MTTs), based on team process and outcome measures, evaluated before and after 3 years of CTSA collaboration. Using a mixed methods approach, an expert panel of five judges (familiar with the progress of the teams) independently rated team performance based on four process and four outcome measures, and achieved a rating consensus. Although all teams made progress in translational domains, other process and outcome measures were highly variable. The trajectory profiles identified four categories of team performance. Objective bibliometric analysis of CTSA-supported MTTs with positive growth in process scores showed that these teams tended to have enhanced scientific outcomes and published in new scientific domains, indicating the conduct of innovative science. Case exemplars revealed that MTTs that experienced growth in both process and outcome evaluative criteria also experienced greater innovation, defined as publications in different areas of science. Of the eight evaluative criteria, leadership-related behaviors were the most resistant to the interventions introduced. Well-managed MTTs demonstrate objective productivity and facilitate innovation. © 2015 Wiley Periodicals, Inc.
Report #16-N-0317, Sept 21, 2016. A peer review process that measures adherence to all the quality standards for federal Inspector General Inspection and Evaluation offices provides assurance that participating offices are being adequately evaluated.
ERIC Educational Resources Information Center
Tadlock, James; Nesbit, Lamar
The Jackson Municipal Separate School District, Mississippi, has instituted a mixed-criteria reduction-in-force procedure emphasizing classroom performance to a greater degree than seniority, certification, and staff development participation. The district evaluation process--measuring classroom teaching performance--generated data for the present…
A Participatory Action Research Approach To Evaluating Inclusive School Programs.
ERIC Educational Resources Information Center
Dymond, Stacy K.
2001-01-01
This article proposes a model for evaluating inclusive schools. Key elements of the model are inclusion of stakeholders in the evaluation process through a participatory action research approach, analysis of program processes and outcomes, use of multiple methods and measures, and obtaining perceptions from diverse stakeholder groups. (Contains…
ERIC Educational Resources Information Center
Stanko-Kaczmarek, Maja
2012-01-01
The main aim of this study was to gain a deeper understanding of the effect of intrinsic motivation on affect, subjective evaluation, and the creative process of young artists. Relations between motivation, affect, and evaluation were treated as a dynamic process and measured several times. The unique contribution of this study is that it…
How product trial changes quality perception of four new processed beef products.
Saeed, Faiza; Grunert, Klaus G; Therkildsen, Margrethe
2013-01-01
The purpose of this paper is the quantitative analysis of the change in quality perception of four new processed beef products from pre to post trial phases. Based on the Total Food Quality Model, differences in pre and post-trial phases were measured using repeated measures technique for cue evaluation, quality evaluation and purchase motive fulfillment. For two of the tested products, trial resulted in a decline of the evaluation of cues, quality and purchase motive fulfillment compared to pre-trial expectations. For these products, positive expectations were created by giving information about ingredients and ways of processing, which were not confirmed during trial. For the other two products, evaluations on key sensory dimensions based on trial exceeded expectations, whereas the other evaluations remained unchanged. Several demographic factors influenced the pattern of results, notably age and gender, which may be due to underlying differences in previous experience. The study gives useful insights for testing of new processed meat products before market introduction. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd
2016-03-01
The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.
[What and how to evaluate clinical-surgical competence. The resident and staff surgeon perspective].
Cervantes-Sánchez, Carlos Roberto; Chávez-Vizcarra, Paola; Barragán-Ávila, María Cristina; Parra-Acosta, Haydee; Herrera-Mendoza, Renzo Eduardo
2016-01-01
Evaluation is a means for significant and rigorous improvement of the educational process. Therefore, competence evaluation should make it possible to assess the complex activity of medical care, as well as to improve the training process. This is the case in the evaluation of clinical-surgical competences. A cross-sectional study was designed to measure knowledge about the evaluation of clinical-surgical competences in the General Surgery residency program at the Faculty of Medicine, Universidad Autónoma de Chihuahua (UACH). A 55-item questionnaire divided into six sections (perception, planning, practice, function, instruments and strategies, and overall evaluation) was used, with a six-level Likert scale; descriptive, correlation, and comparative analyses were performed with a significance level of 0.001. In both groups, evaluation was perceived as a further qualification. As regards tools, the best known was the written examination. As regards function, evaluation was considered a further administrative requirement. In the correlation analysis, evaluation was perceived as qualification and was significantly associated with measurement, assessment, and accreditation. In the comparative analysis between residents and staff surgeons, a significant difference was found regarding the perception of evaluation as a measurement of knowledge (Student t test: p=0.04). The results provide information about the prevailing concept of the evaluation of clinical-surgical competences, namely as a measure of learning achievement for a socially required certification. There is confusion regarding the perception of evaluation, its function, goals, and scope as a benefit for those evaluated. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.
Newman, Julie B; Reesman, Jennifer H; Vaughan, Christopher G; Gioia, Gerard A
2013-01-01
Deficit in the speed of cognitive processing is a commonly identified neuropsychological change in children recovering from a mild TBI. However, there are few validated child assessment instruments that allow for serial assessment over the course of recovery in this population. Pediatric ImPACT is a novel measure that purports to assess cognitive speed, learning, and efficiency in this population. The current study sought to validate the use of this new measure by comparing it to traditional paper-and-pencil measures of processing speed. One hundred and sixty-four children (71% male) aged 5-12 with mild TBI evaluated in an outpatient concussion clinic were administered Pediatric ImPACT and other neuropsychological test measures as part of a flexible test battery. Performance on the Response Speed Composite of Pediatric ImPACT was more strongly associated with other measures of cognitive processing speed than with measures of immediate/working memory and learning/memory in this sample of injured children. There is preliminary support for the convergent and discriminant validity of Pediatric ImPACT as a measure for use in post-concussion evaluations of processing speed in children.
Developing an evaluation framework for clinical redesign programs: lessons learnt.
Samaranayake, Premaratne; Dadich, Ann; Fitzgerald, Anneke; Zeitz, Kathryn
2016-09-19
Purpose: The purpose of this paper is to present lessons learnt through the development of an evaluation framework for a clinical redesign programme - the aim of which was to improve the patient journey through improved discharge practices within an Australian public hospital. Design/methodology/approach: The development of the evaluation framework involved three stages - namely, the analysis of secondary data relating to the discharge planning pathway; the analysis of primary data including field-notes and interview transcripts on hospital processes; and the triangulation of these data sets to devise the framework. The evaluation framework ensured that resource use, process management, patient satisfaction, and staff well-being and productivity were each connected with measures, targets, and the aim of the clinical redesign programme. Findings: The application of business process management and a balanced scorecard enabled a different way of framing the evaluation, ensuring measurable outcomes were connected to inputs and outputs. Lessons learnt include: first, the importance of mixed-methods research to devise the framework and evaluate the redesigned processes; second, the need for appropriate tools and resources to adequately capture change across the different domains of the redesign programme; and third, the value of developing and applying an evaluative framework progressively. Research limitations/implications: The evaluation framework is limited by its retrospective application to a clinical process redesign programme. Originality/value: This research supports benchmarking with national and international practices in relation to best practice healthcare redesign processes. Additionally, it provides a theoretical contribution on evaluating health services improvement and redesign initiatives.
How Evaluation Processes Affect the Professional Development of Five Teachers in Higher Education
ERIC Educational Resources Information Center
Shagrir, Leah
2012-01-01
This paper presents research that investigates the nature of the connection between the professional development of five teachers in higher education and the evaluation processes they have to undergo. Since teaching, scholarship, and service are the three components that evaluation measures, this research examines how the teachers' professional…
The quality of instruments to assess the process of shared decision making: A systematic review.
Gärtner, Fania R; Bomhof-Roordink, Hanna; Smith, Ian P; Scholl, Isabelle; Stiggelbout, Anne M; Pieterse, Arwen H
2018-01-01
To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometrical quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or methods are poor. The best-evidence synthesis indicated positive results for a major part of instruments for content validity (50%) and structural validity (53%) if these were evaluated, but negative results for a major part of instruments when inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Due to the lack of evidence on measurement quality, the choice for the most appropriate instrument can best be based on the instrument's content and characteristics such as the perspective that they assess. We recommend refinement and validation of existing instruments, and the use of COSMIN-guidelines to help guarantee high-quality evaluations.
Measuring the software process and product: Lessons learned in the SEL
NASA Technical Reports Server (NTRS)
Basili, V. R.
1985-01-01
The software development process and product can and should be measured. The software measurement process at the Software Engineering Laboratory (SEL) has taught a major lesson: develop a goal-driven paradigm (also characterized as a goal/question/metric paradigm) for data collection. Project analysis under this paradigm leads to a design for evaluating and improving the methodology of software development and maintenance.
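As a rough illustration of the goal/question/metric structure described above, the sketch below encodes one hypothetical GQM tree as a plain data structure and flattens it into the metrics a data collection program would need to capture. The goal, questions, and metrics shown are invented examples, not the SEL's actual definitions.

```python
# Minimal sketch of a goal/question/metric (GQM) tree as nested data.
# Entries are hypothetical examples, not the SEL's actual goals or metrics.
gqm = {
    "goal": "Improve reliability of delivered software",
    "questions": [
        {
            "question": "How many defects escape to operations?",
            "metrics": ["post-release defects per KLOC", "defect detection rate in test"],
        },
        {
            "question": "Where are defects introduced?",
            "metrics": ["defects per development phase", "rework effort per phase"],
        },
    ],
}

def list_metrics(tree):
    """Flatten the metrics that the data collection program must capture."""
    return [m for q in tree["questions"] for m in q["metrics"]]

print(list_metrics(gqm))
```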
ERIC Educational Resources Information Center
McCarthy, Mary L.
2006-01-01
This article challenges Perry's research using performance evaluations to determine whether the educational background of child welfare workers is predictive of performance. Institutional theory, an understanding of street-level bureaucracies, and evaluations of field education performance measures are offered as necessary frameworks for Perry's…
A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing.
Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi
2009-01-01
Most of the inspection methods used for the detection and localization of welding disturbances are based on the evaluation of direct measurements of welding parameters. Such direct measurement requires the insertion of sensors during the welding process, which could alter the behavior of the metal transfer. An inspection method that evaluates the evolution of the GMA welding process using non-intrusive sensing would not only allow the identification of disturbances during welding runs, and thus reduce inspection time, but would also reduce the interference with the process caused by direct sensing. In this paper, a non-intrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on acoustic sensing of the welding electrical arc. During repeated tests on welds without disturbances, acoustic stability parameters were calculated and used as reference values for the detection and location of disturbances during the weld runs.
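A minimal sketch of the general idea, reference statistics from disturbance-free welds used to flag deviations during a monitored run, is given below. The windowed scalar "acoustic parameter", the threshold factor, and the simulated data are assumptions for illustration; the paper's actual parameters and decision rule are not reproduced here.

```python
import numpy as np

def reference_band(calibration_runs, k=3.0):
    """Mean +/- k*std of an acoustic parameter computed from disturbance-free welds."""
    values = np.concatenate(calibration_runs)
    return values.mean(), values.std(), k

def flag_disturbances(parameter_series, mean, std, k):
    """Return window indices (locations along the weld run) where the parameter leaves the band."""
    deviation = np.abs(parameter_series - mean)
    return np.flatnonzero(deviation > k * std)

# Hypothetical per-window acoustic parameter values from calibration welds and a monitored run.
rng = np.random.default_rng(0)
calibration = [rng.normal(1.0, 0.05, 200) for _ in range(5)]
monitored = np.concatenate([rng.normal(1.0, 0.05, 150), rng.normal(1.4, 0.05, 10)])
m, s, k = reference_band(calibration)
print(flag_disturbances(monitored, m, s, k))
```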
3D MEMS in Standard Processes: Fabrication, Quality Assurance, and Novel Measurement Microstructures
NASA Technical Reports Server (NTRS)
Lin, Gisela; Lawton, Russell A.
2000-01-01
Three-dimensional MEMS microsystems that are commercially fabricated require minimal post-processing and are easily integrated with CMOS signal processing electronics. Measurements to evaluate the fabrication process (such as cross-sectional imaging and device performance characterization) provide much needed feedback in terms of reliability and quality assurance. MEMS technology is bringing a new class of microscale measurements to fruition. The relatively small size of MEMS microsystems offers the potential for higher fidelity recordings compared to macrosize counterparts, as illustrated in the measurement of muscle cell forces.
National Security Technology Incubator Evaluation Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
This report describes the process by which the National Security Technology Incubator (NSTI) will be evaluated. The technology incubator is being developed as part of the National Security Preparedness Project (NSPP), funded by a Department of Energy (DOE)/National Nuclear Security Administration (NNSA) grant. This report includes a brief description of the components, steps, and measures of the proposed evaluation process. The purpose of the NSPP is to promote national security technologies through business incubation, technology demonstration and validation, and workforce development. The NSTI will focus on serving businesses with national security technology applications by nurturing them through critical stages of early development. An effective evaluation process for the NSTI is an important step, as it can provide qualitative and quantitative information on incubator performance over a given period. The vision of the NSTI is to be a successful incubator of technologies and private enterprise that assist the NNSA in meeting new challenges in national safety and security. The mission of the NSTI is to identify, incubate, and accelerate technologies with national security applications at various stages of development by providing hands-on mentoring and business assistance to small businesses and emerging or growing companies. To achieve success for both incubator businesses and the NSTI program, an evaluation process is essential to effectively measure results and implement corrective processes in the incubation design if needed. The evaluation process design will collect and analyze qualitative and quantitative data through a performance evaluation system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, J M; Ehinger, M H; Joseph, C
1978-10-01
Development work on a computerized system for nuclear materials control and accounting in a nuclear fuel reprocessing plant is described and evaluated. Hardware and software were installed and tested to demonstrate key measurement, measurement control, and accounting requirements at accountability input/output points using natural uranium. The demonstration included a remote data acquisition system which interfaces process and special instrumentation to a central processing unit.
Gawronski, Bertram; Mitchell, Derek G V; Balas, Robert
2015-10-01
Evaluative conditioning (EC) is defined as the change in the evaluation of a conditioned stimulus (CS) because of its pairing with a valenced unconditioned stimulus (US). Counter to views that EC is the product of automatic learning processes, recent research has revealed various characteristics of nonautomatic processing in EC. The current research investigated the controllability of EC by testing the effectiveness of 3 emotion-focused strategies in preventing the acquisition of conditioned preferences: (a) suppression of emotional reactions to the US, (b) reappraisal of the valence of the US, and (c) facial blocking of emotional responses. Although all 3 strategies reduced EC effects on self-reported evaluations by impairing recollective memory for CS-US pairings, they were ineffective in reducing EC effects on an evaluative priming measure. Regardless of the measure, effective control did not depend on the level of arousal elicited by the US. The results suggest that the 3 strategies can influence deliberate CS evaluations through memory-related processes, but they are ineffective in reducing EC effects on spontaneous evaluative responses. Implications for mental process theories of EC are discussed. (c) 2015 APA, all rights reserved).
USDA-ARS?s Scientific Manuscript database
The measurement of sugar concentration and dry matter in processing potatoes is a time and resource intensive activity, cannot be performed in the field, and does not easily measure within tuber variation. A proposed method to improve the phenotyping of processing potatoes is to employ hyperspectral...
Measures of Student Learning: Building Administrator
ERIC Educational Resources Information Center
Rhode Island Department of Education, 2014
2014-01-01
The purpose of this Guidebook is to describe the process and basic requirements for the student learning measures that are used as part of the building administrator evaluation and support process. For aspects of the process that have room for flexibility and school/district-level discretion, the different options have been clearly separated and…
Measures of Student Learning: Teacher
ERIC Educational Resources Information Center
Rhode Island Department of Education, 2014
2014-01-01
The purpose of this Guidebook is to describe the process and basic requirements for the student learning measures that are used as part of the teacher evaluation and support process. For aspects of the process that have room for flexibility and school/district-level discretion, the different options have been clearly separated and labeled with a…
Measurement of Quality of Nursing Practice in Congenital Cardiac Care.
Connor, Jean Anne; Mott, Sandra; Green, Angela; Larson, Carol; Hickey, Patricia
2016-03-01
The impact of nursing care on patients' outcomes has been demonstrated in adult and pediatric settings. However, limited attention has been given to standardized measurement of pediatric nursing care. A collaborative group, the Consortium for Congenital Cardiac Care Measurement of Nursing Practice, was formed to address this gap. The purpose of this study was to assess the current state of measurement of the quality of pediatric cardiovascular nursing in freestanding children's hospitals across the United States. A qualitative descriptive design was used to assess the state of measurement of nursing care from the perspective of experts in pediatric cardiovascular nursing. Nurse leaders from 20 sites participated in audiotaped phone interviews. The data were analyzed by using conventional content analysis. Each level of data coding was increasingly comprehensive. Guided by Donabedian's quality framework of structure, process, and outcome, 2 encompassing patterns emerged: (1) structure and process of health care delivery and (2) structure and process of evaluation of care. Similarities in the structure of health care delivery included program expansion and subsequent hiring of nurses with a bachelor of science in nursing and experienced nurses to provide safety and optimal outcomes for patients. Programs varied in how they evaluated care in terms of structure, measurement, collection and dissemination of data. External factors and response to internal processes of health care delivery were similar in different programs; evaluation was more varied. Seven opportunities for measurement that address both structure and process of nursing care were identified to be developed as benchmarks. ©2016 American Association of Critical-Care Nurses.
2013-01-01
Background Numerous worksite health promotion program (WHPPs) have been implemented the past years to improve employees’ health and lifestyle (i.e., physical activity, nutrition, smoking, alcohol use and relaxation). Research primarily focused on the effectiveness of these WHPPs. Whereas process evaluations provide essential information necessary to improve large scale implementation across other settings. Therefore, this review aims to: (1) further our understanding of the quality of process evaluations alongside effect evaluations for WHPPs, (2) identify barriers/facilitators affecting implementation, and (3) explore the relationship between effectiveness and the implementation process. Methods Pubmed, EMBASE, PsycINFO, and Cochrane (controlled trials) were searched from 2000 to July 2012 for peer-reviewed (randomized) controlled trials published in English reporting on both the effectiveness and the implementation process of a WHPP focusing on physical activity, smoking cessation, alcohol use, healthy diet and/or relaxation at work, targeting employees aged 18-65 years. Results Of the 307 effect evaluations identified, twenty-two (7.2%) published an additional process evaluation and were included in this review. The results showed that eight of those studies based their process evaluation on a theoretical framework. The methodological quality of nine process evaluations was good. The most frequently reported process components were dose delivered and dose received. Over 50 different implementation barriers/facilitators were identified. The most frequently reported facilitator was strong management support. Lack of resources was the most frequently reported barrier. Seven studies examined the link between implementation and effectiveness. In general a positive association was found between fidelity, dose and the primary outcome of the program. Conclusions Process evaluations are not systematically performed alongside effectiveness studies for WHPPs. The quality of the process evaluations is mostly poor to average, resulting in a lack of systematically measured barriers/facilitators. The narrow focus on implementation makes it difficult to explore the relationship between effectiveness and implementation. Furthermore, the operationalisation of process components varied between studies, indicating a need for consensus about defining and operationalising process components. PMID:24341605
Melo, E Correa
2003-08-01
The author describes the reasons why evaluation processes should be applied to the Veterinary Services of Member Countries, either for trade in animals and animal products and by-products between two countries, or for establishing essential measures to improve the Veterinary Service concerned. The author also describes the basic elements involved in conducting an evaluation process, including the instruments for doing so. These basic elements centre on the following: designing a model, or desirable image, against which a comparison can be made; establishing a list of processes to be analysed and defining the qualitative and quantitative mechanisms for this analysis; and establishing a multidisciplinary evaluation team and developing a process for standardising the evaluation criteria.
Self Evaluation of Organizations.
ERIC Educational Resources Information Center
Pooley, Richard C.
Evaluation within human service organizations is defined in terms of accepted evaluation criteria, with reasonable expectations shown and structured into a model of systematic evaluation practice. The evaluation criteria of program effort, performance, adequacy, efficiency and process mechanisms are discussed, along with measurement information…
Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F
The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and identify its normality limits. The sample consisted of 90 radiographic examinations of the spines of children in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, Pitman-Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change measurements were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the mean standard deviations of the differences were ±5.9° and 6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
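For readers unfamiliar with the reliability statistics mentioned here, the sketch below shows the textbook relations typically used to derive the standard error of measurement (SEM) and the minimal detectable change (MDC95) from an intraclass correlation coefficient. The numbers are invented, and the specific ICC model used by the authors is not assumed.

```python
import math

def sem(sd_between_subjects, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC) (textbook form)."""
    return sd_between_subjects * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical values: SD of 8 degrees across children, ICC of 0.90 for the apex angle.
s = sem(8.0, 0.90)
print(round(s, 2), round(mdc95(s), 2))
```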
Ajslev, Jeppe; Brandt, Mikkel; Møller, Jeppe Lykke; Skals, Sebastian; Vinstrup, Jonas; Jakobsen, Markus Due; Sundstrup, Emil; Madeleine, Pascal; Andersen, Lars Louis
2016-05-26
Previous research has shown that reducing physical workload among workers in the construction industry is complicated. In order to address this issue, we developed a process evaluation in a formative mixed-methods design, drawing on existing knowledge of the potential barriers for implementation. We present the design of a mixed-methods process evaluation of the organizational, social, and subjective practices that play roles in the intervention study, integrating technical measurements to detect excessive physical exertion measured with electromyography and accelerometers, video documentation of working tasks, and a 3-phased workshop program. The evaluation is designed in an adapted process evaluation framework, addressing recruitment, reach, fidelity, satisfaction, intervention delivery, intervention received, and context of the intervention companies. Observational studies, interviews, and questionnaires among 80 construction workers organized in 20 work gangs, as well as health and safety staff, contribute to the creation of knowledge about these phenomena. At the time of publication, the process of participant recruitment is underway. Intervention studies are challenging to conduct and evaluate in the construction industry, often because of narrow time frames and ever-changing contexts. The mixed-methods design presents opportunities for obtaining detailed knowledge of the practices intra-acting with the intervention, while offering the opportunity to customize parts of the intervention.
Beyond bipolar conceptualizations and measures: the case of attitudes and evaluative space.
Cacioppo, J T; Gardner, W L; Berntson, G G
1997-01-01
All organisms must be capable of differentiating hostile from hospitable stimuli to survive. Typically, this evaluative discrimination is conceptualized as being bipolar (hostile-hospitable). This conceptualization is certainly evident in the area of attitudes, where the ubiquitous bipolar attitude measure, by gauging the net affective predisposition toward a stimulus, treats positive and negative evaluative processes as equivalent, reciprocally activated, and interchangeable. Contrary to conceptualizations of this evaluative process as bipolar, recent evidence suggests that distinguishable motivational systems underlie assessments of the positive and negative significance of a stimulus. Thus, a stimulus may vary in terms of the strength of positive evaluative activation and the strength of negative evaluative activation it evokes. Low activation of positive and negative evaluative processes by a stimulus reflects attitude neutrality or indifference, whereas high activation of positive and negative evaluative processes reflects attitude ambivalence. As such, attitudes can be represented more completely within a bivariate space than along a bipolar continuum. Evidence is reviewed showing that the positive and negative evaluative processes underlying many attitudes are distinguishable (stochastically and functionally independent), are characterized by distinct activation functions (positivity offset and negativity bias principles), are related differentially to attitude ambivalence (corollary of ambivalence asymmetries), have distinguishable antecedents (heteroscedasticity principle), and tend to gravitate from a bivariate toward a bipolar structure when the underlying beliefs are the target of deliberation or a guide for behavior (principle of motivational certainty). The implications for societal phenomena such as political elections and democratic structures are discussed.
A study for high accuracy measurement of residual stress by deep hole drilling technique
NASA Astrophysics Data System (ADS)
Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito
2012-08-01
The deep hole drilling technique (DHD) received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems occur when residual stress evaluation is performed by the DHD technique. One of the reasons is that the traditional DHD evaluation formula applies to the plane stress condition. The second is that the effects of the plastic deformation produced in the drilling process and the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula, which applies to the plane strain condition, is proposed. In addition, a new procedure is proposed that accounts for the effects of the deformation produced in the DHD process, which are investigated in detail by finite element (FE) analysis. The evaluation results obtained by the new procedure are then compared with those obtained by the traditional DHD procedure using FE analysis. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measured object is thick enough that the stress state can be assumed to be plane strain, as in the model used in this study.
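As context for the plane-stress versus plane-strain distinction discussed above, the snippet below applies the standard elasticity substitutions E -> E/(1 - v^2) and v -> v/(1 - v) that convert a plane-stress formulation to plane strain. This is a generic textbook relation with illustrative material constants, not the authors' modified DHD evaluation formula.

```python
def plane_strain_constants(E, nu):
    """Effective elastic constants for reusing a plane-stress formula under plane strain."""
    E_eff = E / (1.0 - nu ** 2)
    nu_eff = nu / (1.0 - nu)
    return E_eff, nu_eff

# Illustrative values for structural steel: E = 210 GPa, nu = 0.3.
print(plane_strain_constants(210e9, 0.3))
```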
Raymond, Nancy C; Wyman, Jean F; Dighe, Satlaj; Harwood, Eileen M; Hang, Mikow
2018-06-01
Process evaluation is an important tool in quality improvement efforts. This article illustrates how a systematic and continuous evaluation process can be used to improve the quality of faculty career development programs by using the University of Minnesota's Building Interdisciplinary Research Careers in Women's Health (BIRCWH) K12 program as an exemplar. Data from a rigorous process evaluation incorporating quantitative and qualitative measurements were analyzed and reviewed by the BIRCWH program leadership on a regular basis. Examples are provided of how this evaluation model and processes were used to improve many aspects of the program, thereby improving scholar, mentor, and advisory committee members' satisfaction and scholar outcomes. A rigorous evaluation plan can increase the effectiveness and impact of a research career development plan.
Mouse-tracking evidence for parallel anticipatory option evaluation.
Cranford, Edward A; Moss, Jarrod
2017-12-23
In fast-paced, dynamic tasks, the ability to anticipate the future outcome of a sequence of events is crucial to quickly selecting an appropriate course of action among multiple alternative options. There are two classes of theories that describe how anticipation occurs. Serial theories assume options are generated and evaluated one at a time, in order of quality, whereas parallel theories assume simultaneous generation and evaluation. The present research examined the option evaluation process during a task designed to be analogous to prior anticipation tasks, but within the domain of narrative text comprehension. Prior research has relied on indirect, off-line measurement of the option evaluation process during anticipation tasks. Because the movement of the hand can provide a window into underlying cognitive processes, online metrics such as continuous mouse tracking provide more fine-grained measurements of cognitive processing as it occurs in real time. In this study, participants listened to three-sentence stories and predicted the protagonists' final action by moving a mouse toward one of three possible options. Each story was presented with either one (control condition) or two (distractor condition) plausible ending options. Results seem most consistent with a parallel option evaluation process because initial mouse trajectories deviated further from the best option in the distractor condition compared to the control condition. It is difficult to completely rule out all possible serial processing accounts, although the results do place constraints on the time frame in which a serial processing explanation must operate.
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce a method for analyzing repeated data measured in the water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If any (P
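The abstract describes the procedure in SPSS; as a rough analogue, the sketch below runs a one-way repeated measures ANOVA in Python with statsmodels on an invented long-format data set (the column names subject, day, and latency are hypothetical). Sphericity checks such as Mauchly's test are not included here and would need another package.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: escape latency of 6 animals measured on 4 days.
data = pd.DataFrame({
    "subject": [s for s in range(6) for _ in range(4)],
    "day": list(range(4)) * 6,
    "latency": [50, 42, 35, 30, 55, 45, 38, 33, 48, 40, 34, 29,
                52, 44, 37, 31, 47, 41, 33, 28, 53, 46, 36, 32],
})

# One-way repeated measures ANOVA with 'day' as the within-subject factor.
result = AnovaRM(data, depvar="latency", subject="subject", within=["day"]).fit()
print(result)
```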
Evaluating the Risks: A Bernoulli Process Model of HIV Infection and Risk Reduction.
ERIC Educational Resources Information Center
Pinkerton, Steven D.; Abramson, Paul R.
1993-01-01
A Bernoulli process model of human immunodeficiency virus (HIV) is used to evaluate infection risks associated with various sexual behaviors (condom use, abstinence, or monogamy). Results suggest that infection is best mitigated through measures that decrease infectivity, such as condom use. (SLD)
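The core of such a Bernoulli-process risk model can be written in a few lines: with a constant per-contact transmission probability alpha, the cumulative infection risk over n independent contacts is 1 - (1 - alpha)^n, and a protective measure can be represented as a multiplicative reduction of alpha. The sketch below shows that arithmetic with invented parameter values; it is not the specific parameterization used in the article.

```python
def cumulative_risk(alpha, n_contacts):
    """P(infection) after n independent contacts, each with per-contact probability alpha."""
    return 1.0 - (1.0 - alpha) ** n_contacts

def risk_with_reduction(alpha, n_contacts, effectiveness):
    """Same model when a measure (e.g., condom use) scales infectivity by (1 - effectiveness)."""
    return cumulative_risk(alpha * (1.0 - effectiveness), n_contacts)

# Illustrative numbers only: alpha = 0.001 per contact, 100 contacts, 90% effective protection.
print(cumulative_risk(0.001, 100), risk_with_reduction(0.001, 100, 0.9))
```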
Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.
Brand, Ralf; Antoniewicz, Franziska
2016-12-01
Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise and then shared this information with them, asked them to reflect on it and rate eventual discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and finally, the expanded uncertainty can be determined.
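The GUM-style propagation followed in the report can be summarized as combining the standard uncertainties of the input quantities, weighted by sensitivity coefficients, in quadrature and multiplying by a coverage factor. The sketch below shows that arithmetic with a made-up input budget; it is not the CALiPER spreadsheet or its actual uncertainty values.

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-of-squares of (sensitivity coefficient * standard uncertainty) terms."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

def expanded_uncertainty(u_combined, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage for normal errors."""
    return k * u_combined

# Hypothetical lumen-measurement budget: (sensitivity, standard uncertainty) pairs in percent.
budget = [(1.0, 0.5),   # calibration of the reference lamp
          (1.0, 0.3),   # detector / spectral mismatch
          (1.0, 0.2)]   # repeatability
u_c = combined_standard_uncertainty(budget)
print(round(u_c, 3), round(expanded_uncertainty(u_c), 3))
```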
ERIC Educational Resources Information Center
Carroll, Erin Ashley
2013-01-01
Creativity is understood intuitively, but it is not easily defined and therefore difficult to measure. This makes it challenging to evaluate the ability of a digital tool to support the creative process. When evaluating creativity support tools (CSTs), it is critical to look beyond traditional time, error, and other productivity measurements that…
ERIC Educational Resources Information Center
Haider, Zubair; Latif, Farah; Akhtar, Samina; Mushtaq, Maria
2012-01-01
Validity, reliability and item analysis are critical to the process of evaluating the quality of an educational measurement. The present study evaluates the quality of an assessment constructed to measure elementary school student's achievement in English. In this study, the survey model of descriptive research was used as a research method.…
Computer systems performance measurement techniques.
DOT National Transportation Integrated Search
1971-06-01
Computer system performance measurement techniques, tools, and approaches are presented as a foundation for future recommendations regarding the instrumentation of the ARTS ATC data processing subsystem for purposes of measurement and evaluation.
Evaluation and development plan of NRTA measurement methods for the Rokkasho Reprocessing Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T.K.; Hakkila, E.A.; Flosterbuer, S.F.
Near-real-time accounting (NRTA) has been proposed as a safeguards method at the Rokkasho Reprocessing Plant (RRP), a large-scale commercial facility for reprocessing spent fuel from boiling water and pressurized water reactors. NRTA for RRP requires material balance closures every month. To develop a more effective and practical NRTA system for RRP, we have evaluated NRTA measurement techniques and systems that might be implemented in both the main process and the co-denitration process areas at RRP to analyze the concentrations of plutonium in solutions and mixed oxide powder. Based on the comparative evaluation, including performance, reliability, design criteria, operation methods, maintenance requirements, and estimated costs for each possible measurement method, recommendations for development were formulated. This paper discusses the evaluations and presents the recommended NRTA development plan for potential implementation at RRP.
An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography
Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang
2014-01-01
As an advanced process detection technology, electrical impedance tomography (EIT) has received wide attention and study in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion to evaluate different preprocessing procedures. In this paper, an EIT data preprocessing method based on rooting all measured data is proposed and evaluated using two indexes constructed from the rooted EIT measurements. By finding the optima of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. For a theoretical model, the optimal rooting times of the two indexes lie in [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed. Preprocessing of the measured data is necessary and helpful for any imaging process; thus, the proposed method can be used generally and widely in imaging processes. Experimental results validate the two proposed indexes. PMID:25165735
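As a rough illustration of the rooting preprocessing described above, the sketch below raises each measured value to a fractional power p chosen in the reported optimal range (roughly 0.22 to 0.35). Interpreting the rooting operation as a fractional-power transform, and the invented data, are assumptions for illustration; the paper's index definitions used to select p are not reproduced here.

```python
import numpy as np

def root_preprocess(measurements, p=0.3):
    """Apply a fractional-power ('rooting') transform to all measured values.

    p is the rooting exponent; the paper reports optimal values roughly in [0.22, 0.35].
    The sign is preserved so the transform also handles negative readings.
    """
    m = np.asarray(measurements, dtype=float)
    return np.sign(m) * np.abs(m) ** p

# Hypothetical boundary-voltage measurements spanning several orders of magnitude.
raw = np.array([2.1e-4, 5.6e-3, 1.2e-2, 8.9e-2])
print(root_preprocess(raw))
```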
ERIC Educational Resources Information Center
Halpin, Peter F.; Torrente, Catalina
2014-01-01
Using reliable and valid measures of students' outcomes which are sensitive to change is critical for obtaining interpretable and therefore useful results from evaluations of school-based interventions. While measurement development for use in experimental evaluations receives a great deal of attention in the U.S., it lags behind in low-income…
NASA Astrophysics Data System (ADS)
Noda, Toshihiko; Takao, Hidekuni; Ashiki, Mitsuaki; Ebi, Hiroyuki; Sawada, Kazuaki; Ishida, Makoto
2004-04-01
In this study, a microchip for measurement of hemoglobin in human blood has been proposed, fabricated and evaluated. The measurement principle of hemoglobin is based on the “cyanmethemoglobin method” that calculates the cyanmethemoglobin concentration by absorption photometry. A glass/silicon/silicon structure was used for the microchip. The middle silicon layer includes flow channels, and 45° mirrors formed at each end of the flow channels. Photodiodes and metal oxide semiconductor (MOS) integrated circuits were fabricated on the bottom silicon layer. The performance of the microchip for hemoglobin measurement was evaluated using a solution of red food color instead of a real blood sample. The fabricated microchip exhibited a similar performance to a nonminiaturized absorption cell which has the same optical path length. Signal processing output varied with solution concentration from 5.32 V to 5.55 V with very high stability due to differential signal processing.
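The absorption-photometry step behind the cyanmethemoglobin method is essentially the Beer-Lambert law: absorbance A = log10(I0/I) = epsilon * l * c, so the concentration follows from the measured intensities once the molar absorptivity epsilon and the path length l are known. The snippet below shows that calculation with placeholder values; the chip's actual path length, wavelength, calibration, and signal levels are not assumed.

```python
import math

def absorbance(intensity_reference, intensity_sample):
    """A = log10(I0 / I) from the reference and sample photodetector readings."""
    return math.log10(intensity_reference / intensity_sample)

def concentration(absorbance_value, molar_absorptivity, path_length_cm):
    """Beer-Lambert law rearranged: c = A / (epsilon * l)."""
    return absorbance_value / (molar_absorptivity * path_length_cm)

# Placeholder numbers only: epsilon ~ 11 L mmol^-1 cm^-1 for cyanmethemoglobin near 540 nm,
# a 1 cm path length, and invented intensity readings.
A = absorbance(1.00, 0.62)
print(round(concentration(A, 11.0, 1.0), 4), "mmol/L (illustrative)")
```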
A multidimensional evaluation of a nursing information-literacy program.
Fox, L M; Richter, J M; White, N E
1996-01-01
The goal of an information-literacy program is to develop student skills in locating, evaluating, and applying information for use in critical thinking and problem solving. This paper describes a multidimensional evaluation process for determining nursing students' growth in cognitive and affective domains. Results indicate improvement in student skills as a result of a nursing information-literacy program. Multidimensional evaluation produces a well-rounded picture of student progress based on formal measurement as well as informal feedback. Developing new educational programs can be a time-consuming challenge. It is important, when expending so much effort, to ensure that the goals of the new program are achieved and benefits to students demonstrated. A multidimensional approach to evaluation can help to accomplish those ends. In 1988, The University of Northern Colorado School of Nursing began working with a librarian to integrate an information-literacy component, entitled Pathways to Information Literacy, into the curriculum. This article describes the program and discusses how a multidimensional evaluation process was used to assess program effectiveness. The evaluation process not only helped to measure the effectiveness of the program but also allowed the instructors to use several different approaches to evaluation. PMID:8826621
ERIC Educational Resources Information Center
Love, John M.; And Others
This report presents recommendations for measures to be used in assessing the impact of Project Developmental Continuity (PDC). Chapter I reviews the purpose of the impact study and presents the basic considerations guiding the selection of measures. Chapter II describes the review process that led to the final recommendations. Chapter III…
NASA Technical Reports Server (NTRS)
Bentley, P. B.
1975-01-01
The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
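For reference, the velocity estimate in CW Doppler ultrasound comes from the Doppler shift relation v = f_d * c / (2 * f0 * cos(theta)), with c the speed of sound in tissue, f0 the transmitted frequency, and theta the beam-to-vessel angle. The snippet below evaluates that relation for illustrative values; it is a generic textbook formula, not the instrument's actual processing chain.

```python
import math

def doppler_velocity(f_shift_hz, f_transmit_hz, angle_deg, c_m_per_s=1540.0):
    """Blood velocity from a CW Doppler shift: v = f_d * c / (2 * f0 * cos(theta))."""
    return f_shift_hz * c_m_per_s / (2.0 * f_transmit_hz * math.cos(math.radians(angle_deg)))

# Illustrative values: 1.3 kHz shift, 5 MHz transmit frequency, 60 degree insonation angle.
print(round(doppler_velocity(1300.0, 5e6, 60.0), 3), "m/s")
```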
Mariani, Rachele; Maskit, Bernard; Bucci, Wilma; De Coro, Alessandra
2013-01-01
The referential process is defined in the context of Bucci's multiple code theory as the process by which nonverbal experience is connected to language. The English computerized measures of the referential process, which have been applied in psychotherapy research, include the Weighted Referential Activity Dictionary (WRAD), and measures of Reflection, Affect and Disfluency. This paper presents the development of the Italian version of the IWRAD by modeling Italian texts scored by judges, and shows the application of the IWRAD and other Italian measures in three psychodynamic treatments evaluated for personality change using the Shedler-Westen Assessment Procedure (SWAP-200). Clinical predictions based on applications of the English measures were supported.
NASA Astrophysics Data System (ADS)
Tatebe, Hironobu; Kato, Kunihito; Yamamoto, Kazuhiko; Katsuta, Yukio; Nonaka, Masahiko
2005-12-01
Nowadays, many evaluation methods using image processing have been proposed for the food industry. These methods are becoming a new means of evaluation alongside the sensory test and the physical measurement methods used for quality evaluation. An advantage of image processing is that it enables objective evaluation. The goal of our research is the structural evaluation of sponge cake by image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. The bubble structure is one of the important properties for understanding the characteristics of the cake from the image. To obtain the cake images, we first cut the cakes and captured their surfaces with a CIS scanner. Because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features. First, the input image is binarized, and bubble features are then extracted by morphological analysis. To evaluate the feature extraction results, we computed the correlation with the "size of the bubble" ratings from the sensory test. The bubble extraction based on morphological analysis showed good correlation, indicating that our method performs comparably to the subjective evaluation.
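A plausible realization of the binarize-then-morphology pipeline described above is sketched below using scikit-image. The threshold choice, structuring-element size, summary statistics, and synthetic image are assumptions for illustration rather than the authors' actual processing parameters.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import disk, opening

def bubble_features(gray_image):
    """Binarize a grayscale cake-surface image and summarize the dark (bubble) regions."""
    # Bubbles appear dark/blurred, so keep pixels below an automatic threshold.
    mask = gray_image < threshold_otsu(gray_image)
    # Morphological opening removes isolated noise pixels before labeling.
    cleaned = opening(mask, disk(2))
    regions = regionprops(label(cleaned))
    areas = np.array([r.area for r in regions])
    return {"bubble_count": len(areas),
            "mean_area_px": float(areas.mean()) if len(areas) else 0.0}

# Synthetic example image: bright surface with a few darker patches standing in for bubbles.
img = np.full((64, 64), 200, dtype=np.uint8)
img[10:18, 10:18] = 60
img[40:52, 30:44] = 80
print(bubble_features(img))
```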
Sturkenboom, Ingrid H; Graff, Maud J; Borm, George F; Veenhuizen, Yvonne; Bloem, Bastiaan R; Munneke, Marten; Nijhuis-van der Sanden, Maria W
2013-02-01
To evaluate the feasibility of a randomized controlled trial including process and potential impact of occupational therapy in Parkinson's disease. Process and outcome were quantitatively and qualitatively evaluated in an exploratory multicentre, two-armed randomized controlled trial at three months. Forty-three community-dwelling patients with Parkinson's disease and difficulties in daily activities, their primary caregivers and seven occupational therapists. Ten weeks of home-based occupational therapy according to the Dutch guidelines of occupational therapy in Parkinson's disease versus no occupational therapy in the control group. Process evaluation measured accrual, drop-out, intervention delivery and protocol adherence. Primary outcome measures of patients assessed daily functioning: Canadian Occupational Performance Measure (COPM) and Assessment of Motor and Process Skills. Primary outcome for caregivers was caregiver burden: Zarit Burden Inventory. Participants' perspectives of the intervention were explored using questionnaires and in-depth interviews. Inclusion was 23% (43/189), drop-out 7% (3/43) and unblinding of assessors 33% (13/40). Full intervention protocol adherence was 74% (20/27), but only 60% (71/119) of baseline Canadian Occupational Performance Measure priorities were addressed in the intervention. The outcome measures revealed negligible to small effects in favour of the intervention group. Almost all patients and caregivers of the intervention group were satisfied with the results. They perceived: 'more grip on the situation' and used 'practical advices that make life easier'. Therapists were satisfied, but wished for a longer intervention period. The positive perceived impact of occupational therapy warrants a large-scale trial. Adaptations in instructions and training are needed to use the Canadian Occupational Performance Measure as primary outcome measure.
Lee, Heewon; Contento, Isobel R; Koch, Pamela
2013-03-01
To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. A process evaluation study based on a systematic conceptual model. Five middle schools in New York City. Five hundred sixty-two students in 20 classes and their science teachers (n = 8). Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers' curriculum evaluation, and satisfaction with teaching the curriculum. Descriptive statistics and Spearman ρ correlation for quantitative analysis and content analysis for qualitative data were used. Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and the student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teacher satisfaction with teaching the curriculum was highly correlated with student satisfaction (P < .05). Teacher perception of amount of student work was negatively correlated with implementation and with student satisfaction (P < .05). Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Conceptualising the effectiveness of impact assessment processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanchitpricha, Chaunjit, E-mail: chaunjit@g.sut.ac.th; Bond, Alan, E-mail: alan.bond@uea.ac.uk; Unit for Environmental Sciences and Management School of Geo and Spatial Sciences, Internal Box 375, North West University
2013-11-15
This paper aims at conceptualising the effectiveness of impact assessment processes through the development of a literature-based framework of criteria to measure impact assessment effectiveness. Four categories of effectiveness were established: procedural, substantive, transactive and normative, each containing a number of criteria; no studies have previously brought together all four of these categories into such a comprehensive, criteria-based framework and undertaken systematic evaluation of practice. The criteria can be mapped within a cycle or cycles of evaluation, based on the ‘logic model’, at the stages of input, process, output and outcome to enable the identification of connections between the criteria across the categories of effectiveness. This framework is considered to have potential application in measuring the effectiveness of many impact assessment processes, including strategic environmental assessment (SEA), environmental impact assessment (EIA), social impact assessment (SIA) and health impact assessment (HIA). -- Highlights: • Conceptualising effectiveness of impact assessment processes. • Identification of factors influencing effectiveness of impact assessment processes. • Development of criteria within a framework for evaluating IA effectiveness. • Applying the logic model to examine connections between effectiveness criteria.
Fuzzy Relational Databases: Representational Issues and Reduction Using Similarity Measures.
ERIC Educational Resources Information Center
Prade, Henri; Testemale, Claudette
1987-01-01
Compares and expands upon two approaches to dealing with fuzzy relational databases. The proposed similarity measure is based on a fuzzy Hausdorff distance and estimates the mismatch between two possibility distributions using a reduction process. The consequences of the reduction process on query evaluation are studied. (Author/EM)
Implementing an ROI Measurement Process at Dell Computer.
ERIC Educational Resources Information Center
Tesoro, Ferdinand
1998-01-01
This return-on-investment (ROI) evaluation study determined the business impact of the sales negotiation training course to Dell Computer Corporation. A five-step ROI measurement process was used: Plan-Develop-Analyze-Communicate-Leverage. The corporate sales information database was used to compare pre- and post-training metrics for both training…
Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley
2014-03-01
Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. © 2013 Wiley Periodicals, Inc.
Racial disparities in African Americans with diabetes: process and outcome mismatch.
Bulger, John B; Shubrook, Jay H; Snow, Richard
2012-08-01
Over the past 2 decades, numerous studies have demonstrated the existence of racial disparities in patient care in the United States. Specifically, African Americans with diabetes are less likely to have recommended process of care measures performed and to meet outcome benchmarks for quality of care. To evaluate the delivery of diabetes care (processes and outcomes) associated with racial categories using a national web-based registry, the American Osteopathic Association Clinical Assessment Program (AOA-CAP). A retrospective analysis of data retrieved from the AOA-CAP database on outcomes and process measures for diabetes. A total of 10,699 Caucasian and African American patients who received diabetes care had data entered into the AOA-CAP registry between July 1, 2005, and October 30, 2010. African Americans represented 3123 patients (29%), Caucasians 7576 (71%). Demographic, process of care, and outcomes comparisons between ethnicities were carried out using χ² and t tests. Composite measures of process and outcomes of diabetes care were created to investigate the effect of race on care. The process of care composite measure was significantly different among African American patients (P = .02), who were more likely to receive all indicated care than Caucasian patients (33.9% vs 31.6%). The composite outcome measure, which quantifies the percentage of patients achieving control of all 3 intermediate outcomes, was significantly lower in African Americans than in Caucasians (8.1% vs 12.3%; P < .001). African American patients with diabetes were as likely or more likely to have recommended process of care measures performed. In spite of this, intermediate diabetes outcomes were still poorer in the same African American population.
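As a hedged illustration of the composite-measure comparison reported above, the sketch below reconstructs approximate 2x2 counts from the published percentages and sample sizes and runs a chi-square test; the counts are derived approximations, not registry data, so the result only roughly reproduces the reported P = .02.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts reconstructed (approximately) from the reported percentages:
# 33.9% of 3123 African American and 31.6% of 7576 Caucasian patients
# received all indicated process-of-care measures.
aa_yes = round(0.339 * 3123)
ca_yes = round(0.316 * 7576)
table = np.array([[aa_yes, 3123 - aa_yes],
                  [ca_yes, 7576 - ca_yes]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p lands close to the reported .02
```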
Helfrich, Christian D; Dolan, Emily D; Fihn, Stephan D; Rodriguez, Hector P; Meredith, Lisa S; Rosland, Ann-Marie; Lempa, Michele; Wakefield, Bonnie J; Joos, Sandra; Lawler, Lauren H; Harvey, Henry B; Stark, Richard; Schectman, Gordon; Nelson, Karin M
2014-12-01
Team-based care is central to the patient-centered medical home (PCMH), but most PCMH evaluations measure team structure exclusively. We assessed team-based care in terms of team structure, process and effectiveness, and the association with improvements in teams' abilities to deliver patient-centered care. We fielded a cross-sectional survey among 913 VA primary care clinics implementing a PCMH model in 2012. The dependent variable was clinic-level respondent-reported improvements in delivery of patient-centered care. Independent variables included three sets of measures: (1) team structure, (2) team process, and (3) team effectiveness. We adjusted for clinic workload and patient comorbidity. 4819 surveys were returned (25% estimated response rate). The highest ratings were for team structure (median of 89% of respondents being assigned to a teamlet, i.e., a PCP working with the same clinical associate, nurse care manager and clerk) and lowest for team process (median of 10% of respondents reporting the lowest level of stress/chaos). In multivariable regression, perceived improvements in patient-centered care were most strongly associated with participatory decision making (β=32, P<0.0001) and history of change in the clinic (β=18, P=0008) (both team processes). A stressful/chaotic clinic environment was associated with higher barriers to patient-centered care (β=0.16-0.34, P<0.0001) and lower improvements in patient-centered care (β=-0.19, P=0.001). Team process and effectiveness measures, often omitted from PCMH evaluations, had stronger associations with perceived improvements in patient-centered care than team structure measures. Team process and effectiveness measures may facilitate synthesis of evaluation findings and help identify positive outlier clinics. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Drbúl, Mário; Šajgalík, Michal; Litvaj, Ivan; Babík, Ondrej
2016-12-01
Each part, as a final product, has a surface composed of various geometric elements, even though it may at first glance appear smooth and shiny. During the manufacturing process, a number of influences (e.g. the selected manufacturing technology, the production process, human factors, the measurement strategy, scanning speed, the shape of the measurement contact tip, temperature, or surface tension and the like) hinder the production of a component with ideally shaped elements. From the economic and design points of view (in accordance with the relevant GPS standards), it is necessary to analyse and evaluate these elements quickly and accurately. The presented article deals with the influence of scanning speed and measuring strategy on the assessment of shape deviations.
Measurement Assurance for End-Item Users
NASA Technical Reports Server (NTRS)
Mimbs, Scott M.
2008-01-01
The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to assure that the end product meets specifications and customer requirements. Measuring devices, often called measuring and test equipment (MTE), provide the evidence of product conformity to the prescribed requirements. Therefore, the processes which employ MTE can become a weak link in the overall QMS if proper attention is not given to the development and execution of these processes. Traditionally, calibration of MTE is given more focus in industry standards and process control efforts than the equally important proper usage of the same equipment. It is a common complaint of calibration laboratory personnel that MTE users are only interested in "a sticker." If the QMS requires the MTE "to demonstrate conformity of the product," then the quality of the measurement process must be adequate for the task. This leads to an ad hoc definition: measurement assurance is a discipline that assures that all processes, activities, environments, standards, and procedures involved in making a measurement produce a result that can be rigorously evaluated for validity and accuracy. To evaluate whether the existing measurement processes are providing an adequate level of quality to support the decisions based upon the measurement data, an understanding of measurement assurance basics is essential. This topic is complementary to the calibration standard, ANSI/NCSL Z540.3-2006, which targets the calibration of MTE at the organizational level. This paper discusses general measurement assurance when MTE is used to provide evidence of product conformity; the target audience is therefore end item users of MTE. A central focus of the paper is the verification of tolerances and the associated risks, so calibration professionals may find the paper useful in communicating with their customers, the MTE users.
ERIC Educational Resources Information Center
Landolfi, Adrienne M.
2016-01-01
As accountability measures continue to increase within education, public school systems have integrated standards-based evaluation systems to formally assess professional practices among educators. The purpose of this study was to explore the extent to which the communication process between evaluators and teachers impacts teacher performance…
USDA-ARS?s Scientific Manuscript database
Process evaluations of large-scale school based programs are necessary to aid in the interpretation of the outcome data. The Louisiana Health (LA Health) study is a multi-component childhood obesity prevention study for middle school children. The Physical Education (PEQ), Intervention (IQ), and F...
Process Evaluation of a Parenting Program for Low-Income Families in South Africa
ERIC Educational Resources Information Center
Lachman, Jamie M.; Kelly, Jane; Cluver, Lucie; Ward, Catherine L.; Hutchings, Judy; Gardner, Frances
2018-01-01
Objective: This mixed-methods process evaluation examined the feasibility of a parenting program delivered by community facilitators to reduce the risk of child maltreatment in low-income families with children aged 3-8 years in Cape Town, South Africa (N = 68). Method: Quantitative measures included attendance registers, fidelity checklists,…
ERIC Educational Resources Information Center
Heidemeier, Heike; Staudinger, Ursula M.
2012-01-01
This study demonstrates how self-evaluation processes explain subgroup differences in ratings of life satisfaction (population heterogeneity). Life domains differ with regard to the constraints they impose on beliefs in internal control. We hypothesized that these differences are linked with cognitive biases in ratings of life satisfaction. In…
Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Rohrbach, Scott; Zhang, William W.
2011-01-01
Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes the mount- and gravity-induced errors. In the assembly and mounting process, the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.
Monetary and affective judgments of consumer goods: modes of evaluation matter.
Seta, John J; Seta, Catherine E; McCormick, Michael; Gallagher, Ashleigh H
2014-01-01
Participants who evaluated 2 positively valued items separately reported more positive attraction (using affective and monetary measures) than those who evaluated the same two items as a unit. In Experiments 1-3, this separate/unitary evaluation effect was obtained when participants evaluated products that they were purchasing for a friend. Similar findings were obtained in Experiments 4 and 5 when we considered the amount participants were willing to spend to purchase insurance for items that they currently owned. The averaging/summation model was contrasted with several theoretical perspectives and implicated averaging and summation integration processes in how items are evaluated. The procedural and theoretical similarities and differences between this work and related research on unpacking, comparison processes, public goods, and price bundling are discussed. Overall, the results support the operation of integration processes and contribute to an understanding of how these processes influence the evaluation and valuation of private goods.
On the automatic activation of attitudes: a quarter century of evaluative priming research.
Herring, David R; White, Katherine R; Jabeen, Linsa N; Hinojos, Michelle; Terrazas, Gabriela; Reyes, Stephanie M; Taylor, Jennifer H; Crites, Stephen L
2013-09-01
Evaluation is a fundamental concept in psychological science. Limitations of self-report measures of evaluation led to an explosion of research on implicit measures of evaluation. One of the oldest and most frequently used implicit measurement paradigms is the evaluative priming paradigm developed by Fazio, Sanbonmatsu, Powell, and Kardes (1986). This paradigm has received extensive attention in psychology and is used to investigate numerous phenomena ranging from prejudice to depression. The current review provides a meta-analysis of a quarter century of evaluative priming research: 73 studies yielding 125 independent effect sizes from 5,367 participants. Because judgments people make in evaluative priming paradigms can be used to tease apart underlying processes, this meta-analysis examined the impact of different judgments to test the classic encoding and response perspectives of evaluative priming. As expected, evidence for automatic evaluation was found, but the results did not exclusively support either of the classic perspectives. Results suggest that both encoding and response processes likely contribute to evaluative priming but are more nuanced than initially conceptualized by the classic perspectives. Additionally, there were a number of unexpected findings that influenced evaluative priming such as segmenting trials into discrete blocks. We argue that many of the findings of this meta-analysis can be explained with 2 recent evaluative priming perspectives: the attentional sensitization/feature-specific attention allocation and evaluation window perspectives. (c) 2013 APA, all rights reserved.
ERIC Educational Resources Information Center
Minix, Nancy; And Others
The process used to evaluate progress in identifying the goals to be used in evaluating teacher performance under the Kentucky Career Ladder Program is described. The process pertains to two areas of teacher development: (1) professional growth and development, and (2) professional leadership and initiative. A total of 1,650 individuals were asked…
Walter, Alexander I; Helgenberger, Sebastian; Wiek, Arnim; Scholz, Roland W
2007-11-01
Most Transdisciplinary Research (TdR) projects combine scientific research with the building of decision making capacity for the involved stakeholders. These projects usually deal with complex, societally relevant, real-world problems. This paper focuses on TdR projects, which integrate the knowledge of researchers and stakeholders in a collaborative transdisciplinary process through structured methods of mutual learning. Previous research on the evaluation of TdR has insufficiently explored the intended effects of transdisciplinary processes on the real world (societal effects). We developed an evaluation framework for assessing the societal effects of transdisciplinary processes. Outputs (measured as procedural and product-related involvement of the stakeholders), impacts (intermediate effects connecting outputs and outcomes) and outcomes (enhanced decision making capacity) are distinguished as three types of societal effects. Our model links outputs and outcomes of transdisciplinary processes via the impacts using a mediating variables approach. We applied this model in an ex post evaluation of a transdisciplinary process. 84 out of 188 agents participated in a survey. The results show significant mediation effects of the two impacts "network building" and "transformation knowledge". These results indicate an influence of a transdisciplinary process on the decision making capacity of stakeholders, especially through social network building and the generation of knowledge relevant for action.
New knowledge network evaluation method for design rationale management
NASA Astrophysics Data System (ADS)
Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao
2015-01-01
Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method which can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.
Computer measurement of arterial disease
NASA Technical Reports Server (NTRS)
Armstrong, J.; Selzer, R. H.; Barndt, R.; Blankenhorn, D. H.; Brooks, S.
1980-01-01
Image processing technique quantifies human atherosclerosis by computer analysis of arterial angiograms. X-ray film images are scanned and digitized, arterial shadow is tracked, and several quantitative measures of lumen irregularity are computed. In other tests, excellent agreement was found between computer evaluation of femoral angiograms on living subjects and evaluation by teams of trained angiographers.
ERIC Educational Resources Information Center
Hardré, Patricia L.; Hackett, Shannon
2015-01-01
This manuscript chronicles the process and products of a redesign for evaluation of the graduate college experience (GCE) which was initiated by a university graduate college, based on its observed need to reconsider and update its measures and methods for assessing graduate students' experiences. We examined the existing instrumentation and…
A Narrative Review of Generic Intervention Fidelity Measures
ERIC Educational Resources Information Center
Di Rezze, Briano; Law, Mary; Gorter, Jan Willem; Eva, Kevin; Pollock, Nancy
2012-01-01
To increase the rigor of pediatric rehabilitation research, there is a need to evaluate the degree to which an intervention is conducted as planned (i.e., fidelity). Generic fidelity measures evaluate more than one intervention and often include nonspecific attributes of the therapy process common to both interventions. The objective of this study…
Indirect measures as a signal for evaluative change.
Perugini, Marco; Richetin, Juliette; Zogmaister, Cristina
2014-01-01
Implicit and explicit attitudes can be changed by using evaluative learning procedures. In this contribution we investigated an asymmetric effect of the order of administration of indirect and direct measures on the detection of evaluative change: a change in explicit attitudes is more likely to be detected if they are measured after implicit attitudes, whereas implicit attitudes change regardless of the order. This effect was demonstrated in two studies (n=270; n=138) using the self-referencing task, whereas it was not found in a third study (n=151) that used a supraliminal sequential evaluative conditioning paradigm. In all studies, evaluative change was present only for contingency-aware participants. We discuss a potential explanation underlying the order-of-measure effect: in some circumstances, an indirect measure is not only a measure but also a signal that can be detected through self-perception processes and further elaborated at the propositional level.
Automatic real time evaluation of red blood cell elasticity by optical tweezers
NASA Astrophysics Data System (ADS)
Moura, Diógenes S.; Silva, Diego C. N.; Williams, Ajoke J.; Bezerra, Marcos A. C.; Fontes, Adriana; de Araujo, Renato E.
2015-05-01
Optical tweezers have been used to trap, manipulate, and measure individual cell properties. In this work, we show that the association of a computer-controlled optical tweezers system with image processing techniques allows rapid and reproducible evaluation of cell deformability. In particular, the deformability of red blood cells (RBCs) plays a key role in the transport of oxygen through the blood microcirculation. The automatic measurement process consisted of three steps: acquisition, segmentation of images, and measurement of the elasticity of the cells. An optical tweezers system was set up on an upright microscope equipped with a CCD camera and a motorized XYZ stage, computer controlled by a Labview platform. On the optical tweezers setup, the deformation of the captured RBC was obtained by moving the motorized stage. The automatic real-time homemade system was evaluated by measuring the elasticity of RBCs from normal donors and patients with sickle cell anemia. Approximately 150 erythrocytes were examined, and the elasticity values obtained using the developed system were compared to the values measured by two experts. With the automatic system, there was a significant time reduction (60×) in the erythrocyte elasticity evaluation. The automated system can help to expand the applications of optical tweezers in hematology and hemotherapy.
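A minimal sketch of the kind of segmentation-and-measurement step described above, assuming Otsu thresholding and an elongation index (L - W)/(L + W) as the deformability metric; the actual elasticity calculation used with the optical tweezers system is not specified in the abstract.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def deformability_index(image):
    """Segment the brightest object in a grayscale frame and return an
    elongation index (L - W) / (L + W) from its fitted ellipse axes.

    The index definition is an assumption for illustration; the metric
    used with the real optical tweezers system may differ.
    """
    mask = image > threshold_otsu(image)
    regions = regionprops(label(mask))
    cell = max(regions, key=lambda r: r.area)          # largest object = the RBC
    L, W = cell.major_axis_length, cell.minor_axis_length
    return (L - W) / (L + W)

# Synthetic example: an elongated bright ellipse on a dark background
yy, xx = np.mgrid[0:200, 0:200]
frame = ((((xx - 100) / 60.0) ** 2 + ((yy - 100) / 40.0) ** 2) < 1).astype(float)
print(f"deformability index: {deformability_index(frame):.3f}")
```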
Follow Through Classroom Process Measurement and Pupil Growth (1970-71). Final Report.
ERIC Educational Resources Information Center
Soar, Robert S.
This study presents results from the last year of a three-year adjunctive evaluation of classroom process measurement and pupil growth. A total of 289 classrooms involving eight experimental programs, ranging from open classrooms to contingency management classes, along with a comparison sample, were observed. Four observation instruments were…
Non-formal educator use of evaluation results.
Baughman, Sarah; Boyd, Heather H; Franz, Nancy K
2012-08-01
Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational programs through land grant universities. Many Extension services require that non-formal educational program evaluations be conducted by field-based Extension educators. Evaluation research has focused primarily on the efforts of professional, external evaluators. The work of program staff with many responsibilities, including program evaluation, has received little attention. This study examined how field-based Extension educators (i.e., program staff) in four Extension services use the results of evaluations of programs that they have conducted themselves. Four types of evaluation use are measured and explored: instrumental use, conceptual use, persuasive use and process use. Results indicate that there are few programmatic changes as a result of evaluation findings among the non-formal educators surveyed in this study. Extension educators tend to use evaluation results to persuade others about the value of their programs and to learn from the evaluation process. Evaluation use is driven by accountability measures with very little program improvement use as measured in this study. Practical implications include delineating accountability and program improvement tasks within complex organizations in order to align evaluation efforts and to improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kim, Heung-Kyu; Lee, Seong Hyeon; Choi, Hyunjoo
2015-01-01
Using an inverse analysis technique, the heat transfer coefficient on the die-workpiece contact surface of a hot stamping process was evaluated as a power law function of contact pressure. This evaluation was to determine whether the heat transfer coefficient on the contact surface could be used for finite element analysis of the entire hot stamping process. By comparing results of the finite element analysis and experimental measurements of the phase transformation, an evaluation was performed to determine whether the obtained heat transfer coefficient function could provide reasonable finite element prediction for workpiece properties affected by the hot stamping process. PMID:28788046
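A hedged sketch of the fitting step implied by the abstract: representing the heat transfer coefficient as h = h0 * p^n and estimating the two parameters from (pressure, coefficient) pairs. The numbers below are made up, and the paper's inverse analysis that would produce such pairs is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def htc_power_law(pressure, h0, exponent):
    """Heat transfer coefficient modelled as h = h0 * p^exponent."""
    return h0 * np.power(pressure, exponent)

# Illustrative (made-up) contact pressures [MPa] and coefficients [W/m^2.K]
# standing in for values that would come from the inverse analysis.
p = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
h = np.array([1300.0, 1900.0, 2800.0, 3500.0, 4100.0])

(h0, n), _ = curve_fit(htc_power_law, p, h, p0=(1000.0, 0.5))
print(f"h(p) ~= {h0:.0f} * p^{n:.2f}")
```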
NASA Astrophysics Data System (ADS)
Schmidt, J. B.
1985-09-01
This thesis investigates ways of improving the real-time performance of the Stockpoint Logistics Integrated Communication Environment (SPLICE). Performance evaluation through continuous monitoring activities and performance studies are the principle vehicles discussed. The method for implementing this performance evaluation process is the measurement of predefined performance indexes. Performance indexes for SPLICE are offered that would measure these areas. Existing SPLICE capability to carry out performance evaluation is explored, and recommendations are made to enhance that capability.
Bachmann, Monica; de Boer, Wout; Schandelmaier, Stefan; Leibold, Andrea; Marelli, Renato; Jeger, Joerg; Hoffmann-Richter, Ulrike; Mager, Ralph; Schaad, Heinz; Zumbrunn, Thomas; Vogel, Nicole; Bänziger, Oskar; Busse, Jason W; Fischer, Katrin; Kunz, Regina
2016-07-29
Work capacity evaluations by independent medical experts are widely used to inform insurers whether injured or ill workers are capable of engaging in competitive employment. In many countries, evaluation processes lack a clearly structured approach, standardized instruments, and an explicit focus on claimants' functional abilities. Evaluation of subjective complaints, such as mental illness, presents additional challenges in the determination of work capacity. We have therefore developed a process for functional evaluation of claimants with mental disorders which complements usual psychiatric evaluation. Here we report the design of a study to measure the reliability of our approach in determining work capacity among patients with mental illness applying for disability benefits. We will conduct a multi-center reliability study, in which 20 psychiatrists trained in our functional evaluation process will assess 30 claimants presenting with mental illness for eligibility to receive disability benefits [Reliability of Functional Evaluation in Psychiatry, RELY-study]. The functional evaluation process entails a five-step structured interview and a reporting instrument (Instrument of Functional Assessment in Psychiatry [IFAP]) to document the severity of work-related functional limitations. We will videotape all evaluations, which will be viewed by three psychiatrists who will independently rate claimants' functional limitations. Our primary outcome measure is the evaluation of the claimant's work capacity as a percentage (0 to 100 %), and our secondary outcomes are the 12 mental functions and 13 functional capacities assessed by the IFAP instrument. Inter-rater reliability of the four psychiatric experts will be explored using multilevel models to estimate the intraclass correlation coefficient (ICC). Additional analyses include subgroups according to mental disorder, the typicality of claimants, and claimant-perceived fairness of the assessment process. We hypothesize that a structured functional approach will show moderate reliability (ICC ≥ 0.6) of psychiatric evaluation of work capacity. Enrollment of actual claimants with mental disorders referred for evaluation by disability/accident insurers will increase the external validity of our findings. If we find moderate levels of reliability, we will continue with a randomized trial to test the reliability of a structured functional approach versus evaluation-as-usual.
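The protocol estimates the ICC from multilevel models; as a simplified stand-in, the sketch below computes the classical ANOVA-based ICC(2,1) of Shrout and Fleiss on hypothetical work-capacity ratings by four experts. The ratings are invented for illustration.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) array. Classical ANOVA-based
    computation (Shrout & Fleiss); the RELY study itself plans multilevel
    models, so this is only a simplified illustration.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_error = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical work-capacity percentages from 4 raters on 5 claimants
ratings = [[50, 60, 55, 50],
           [80, 75, 85, 80],
           [30, 40, 35, 30],
           [70, 65, 70, 75],
           [20, 25, 20, 30]]
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```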
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Characterization of integrated optical CD for process control
NASA Astrophysics Data System (ADS)
Yu, Jackie; Uchida, Junichi; van Dommelen, Youri; Carpaij, Rene; Cheng, Shaunee; Pollentier, Ivan; Viswanathan, Anita; Lane, Lawrence; Barry, Kelly A.; Jakatdar, Nickhil
2004-05-01
The accurate measurement of CD (critical dimension) and its application to inline process control are key challenges for high yield and OEE (overall equipment efficiency) in semiconductor production. CD-SEM metrology, although providing the resolution necessary for CD evaluation, suffers from the well-known effect of resist shrinkage, making accuracy and stability of the measurements an issue. For sub-100 nm in-line process control, where accuracy and stability as well as speed are required, CD-SEM metrology faces serious limitations. In contrast, scatterometry, using broadband optical spectra taken from grating structures, does not suffer from such limitations. This technology is non-destructive and, in addition to CD, provides profile information and film thickness in a single measurement. Using Timbre's Optical Digital Profilometry (ODP) technology, we characterized the process window using iODP101 integrated optical CD metrology in a TEL Clean Track at IMEC. We demonstrate the optical CD's high sensitivity to process change and its insensitivity to measurement noise. We demonstrate the validity of ODP modeling by showing its accurate response to known process changes built into the evaluation and its excellent correlation to CD-SEM. We further discuss the intrinsic optical CD metrology factors that affect the tool precision, accuracy and its correlation to CD-SEM.
Measuring techniques in the measuring program for the windpowered unit GROWIAN
NASA Astrophysics Data System (ADS)
Koerber, F.
1984-02-01
The measuring strategy in the GROWIAN program and the measuring systems are presented. Power, load, and behavior during operation were checked. The determining physical characteristics, mainly mechanical and electrical, are obtained with 200 measuring points; they are recorded and evaluated by a data processing system.
Measuring the emulsification dynamics and stability of self-emulsifying drug delivery systems.
Vasconcelos, Teófilo; Marques, Sara; Sarmento, Bruno
2018-02-01
Self-emulsifying drug delivery systems (SEDDS) are one of the most promising technologies in the drug delivery field, particularly for addressing solubility and bioavailability issues of drugs. The development of these drug carriers relies excessively on visual observations and indirect determinations. The present manuscript describes a method able to follow the emulsification of SEDDS, both micro- and nano-emulsions, to measure the droplet size and to evaluate the physical stability of these formulations. Additionally, a new process to evaluate the physical stability of SEDDS after emulsification is proposed, based on a cycle of mechanical stress followed by a resting period. The use of a multiparameter continuous evaluation during the emulsification process and stability testing was of utmost value in understanding the SEDDS emulsification process. Based on this method, SEDDS were classified as fast and slow emulsifiers. Moreover, several considerations are made regarding the composition of SEDDS as the major factor affecting stability under physical stress, and regarding the use of multiple components with different properties to develop a stable and robust SEDDS formulation. Drug loading level is suggested to impact the droplet size of SEDDS after dispersion and SEDDS stability under stress conditions. The proposed protocol allows an online measurement of SEDDS droplet size during emulsification and a rational selection of excipients based on their emulsification and stabilization performance. Copyright © 2017. Published by Elsevier B.V.
Bartfai, Aniko; Markovic, Gabriela; Sargenius Landahl, Kristina; Schult, Marie-Louise
2014-05-08
To describe the design of a study examining intensive targeted cognitive rehabilitation of attention in the acute (<4 months) and subacute rehabilitation phases (4-12 months) after acquired brain injury, and to evaluate the effects on function, activity and participation (return to work). Within a prospective, randomised, controlled study, 120 consecutive patients with stroke or traumatic brain injury were randomised to 20 hours of intensive attention training with Attention Process Training or to standard, activity-based training. Progress was evaluated by Statistical Process Control and by pre- and post-measurement of functional and activity levels. Return to work was also evaluated in the post-acute phase. Primary endpoints were changes in the attention measure, the Paced Auditory Serial Addition Test, and changes in work ability. Secondary endpoints included measurement of cognitive functions, activity and return to work. There were 3-, 6- and 12-month follow-ups focusing on health economics. The study will provide information on rehabilitation of attention in the early phases after ABI and its effects on function, activity and return to work. Further, the application of Statistical Process Control might enable closer investigation of the cognitive changes after acquired brain injury and demonstrate the usefulness of process measures in rehabilitation. The study was registered at ClinicalTrials.gov, protocol NCT02091453, registered 19 March 2014.
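As an illustration of the Statistical Process Control evaluation mentioned above, the sketch below computes individuals-chart control limits from repeated test scores using the conventional 2.66 x mean-moving-range factor; the trial's actual SPC rules for single-case data may differ, and the scores are hypothetical.

```python
import numpy as np

def individuals_control_limits(x):
    """Control limits for an individuals (I-MR) chart using the standard
    2.66 * mean-moving-range factor."""
    x = np.asarray(x, dtype=float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()          # mean moving range
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Hypothetical weekly attention-test scores for one patient
scores = [42, 45, 44, 47, 46, 49, 52, 55, 57, 60]
lcl, centre, ucl = individuals_control_limits(scores)
print(f"LCL={lcl:.1f}  centre={centre:.1f}  UCL={ucl:.1f}")
print("points beyond limits:", [s for s in scores if s > ucl or s < lcl])
```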
Balasubramanian, Bijal A; Cohen, Deborah J; Davis, Melinda M; Gunn, Rose; Dickinson, L Miriam; Miller, William L; Crabtree, Benjamin F; Stange, Kurt C
2015-03-10
In healthcare change interventions, on-the-ground learning about the implementation process is often lost because of a primary focus on outcome improvements. This paper describes the Learning Evaluation, a methodological approach that blends quality improvement and implementation research methods to study healthcare innovations. Learning Evaluation is an approach to multi-organization assessment. Qualitative and quantitative data are collected to conduct real-time assessment of implementation processes while also assessing changes in context, facilitating quality improvement using run charts and audit and feedback, and generating transportable lessons. Five principles are the foundation of this approach: (1) gather data to describe changes made by healthcare organizations and how changes are implemented; (2) collect process and outcome data relevant to healthcare organizations and to the research team; (3) assess multi-level contextual factors that affect implementation, process, outcome, and transportability; (4) assist healthcare organizations in using data for continuous quality improvement; and (5) operationalize common measurement strategies to generate transportable results. Learning Evaluation principles are applied across organizations by the following: (1) establishing a detailed understanding of the baseline implementation plan; (2) identifying target populations and tracking relevant process measures; (3) collecting and analyzing real-time quantitative and qualitative data on important contextual factors; (4) synthesizing data and emerging findings and sharing with stakeholders on an ongoing basis; and (5) harmonizing and fostering learning from process and outcome data. Application to a multi-site program focused on primary care and behavioral health integration shows the feasibility and utility of Learning Evaluation for generating real-time insights into evolving implementation processes. Learning Evaluation generates systematic and rigorous cross-organizational findings about implementing healthcare innovations while also enhancing organizational capacity and accelerating translation of findings by facilitating continuous learning within individual sites. Researchers evaluating change initiatives and healthcare organizations implementing improvement initiatives may benefit from a Learning Evaluation approach.
Evaluating MC&A effectiveness to verify the presence of nuclear materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, P. G.; Morzinski, J. A.; Ostenak, Carl A.
Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured, the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results must meet or exceed performance requirements, and inventory differences must be less than a target/goal quantity. This approach exceeds DOE established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
ERIC Educational Resources Information Center
do Vale Placa, Rebeca; Ragghianti Zangrando, Mariana S.; Sant'Ana, Adriana C. P.; Greghi, Sebastião L. A.; de Rezende, Maria Lucia R.; Damante, Carla A.
2015-01-01
The evaluation of the educational environment is essential to give professors a better understanding of the teaching process. One valuable tool for this assessment is the Dundee Ready Educational Environment Measure (DREEM). This questionnaire has 50 questions and is divided into five dimensions: D1--Perceptions of teaching, D2--Perceptions of…
ERIC Educational Resources Information Center
Rusconi, Patrice; Marelli, Marco; D'Addario, Marco; Russo, Selena; Cherubini, Paolo
2014-01-01
Evidence evaluation is a crucial process in many human activities, spanning from medical diagnosis to impression formation. The present experiments investigated which, if any, normative model best conforms to people's intuition about the value of the obtained evidence. Psychologists, epistemologists, and philosophers of science have proposed…
The use of Goal Attainment Scaling in a community health promotion initiative with seniors.
Kloseck, Marita
2007-07-03
Evaluating collaborative community health promotion initiatives presents unique challenges, including engaging community members and other stakeholders in the evaluation process, and measuring the attainment of goals at the collective community level. Goal Attainment Scaling (GAS) is a versatile, under-utilized evaluation tool adaptable to a wide range of situations. GAS actively involves all partners in the evaluation process and has many benefits when used in community health settings. The purpose of this paper is to describe the use of GAS as a potential means of measuring progress and outcomes in community health promotion and community development projects. GAS methodology was used in a local community of seniors (n = 2500; mean age = 76 +/- 8.06 SD; 77% female, 23% male) to a) collaboratively set health promotion and community partnership goals and b) objectively measure the degree of achievement, over- or under-achievement of the established health promotion goals. Goal attainment was measured in a variety of areas including operationalizing a health promotion centre in a local mall, developing a sustainable mechanism for recruiting and training volunteers to operate the health promotion centre, and developing and implementing community health education programs. Goal attainment was evaluated at 3 monthly intervals for one year, then re-evaluated again at year 2. GAS was found to be a feasible and responsive method of measuring community health promotion and community development progress. All project goals were achieved at one year or sooner. The overall GAS score for the total health promotion project increased from 16.02 at baseline (sum of scale scores = -30, average scale score = -2) to 54.53 at one year (sum of scale scores = +4, average scale score = +0.27) showing project goals were achieved above the expected level. With GAS methodology an amalgamated score of 50 represents the achievement of goals at the expected level. GAS provides a "participatory", flexible evaluation approach that involves community members, research partners and other stakeholders in the evaluation process. GAS was found to be "user-friendly" and readily understandable by seniors and other community partners not familiar with program evaluation.
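For readers unfamiliar with how GAS scores are aggregated, the sketch below applies the standard Kiresuk-Sherman T-score formula, in which a score of 50 corresponds to goals attained at the expected level; the weights, the assumed inter-goal correlation of 0.3, and the example attainment scores are illustrative, not taken from this project.

```python
import math

def gas_t_score(attainment, weights=None, rho=0.3):
    """Kiresuk & Sherman Goal Attainment Scaling T-score.

    `attainment` lists goal scale scores (usually -2..+2); a T-score of 50
    means goals were met at the expected level. rho = 0.3 is the
    conventional assumed inter-goal correlation.
    """
    if weights is None:
        weights = [1.0] * len(attainment)
    wx = sum(w * x for w, x in zip(weights, attainment))
    w2 = sum(w * w for w in weights)
    w_sum = sum(weights)
    return 50.0 + 10.0 * wx / math.sqrt((1.0 - rho) * w2 + rho * w_sum ** 2)

# Example: four equally weighted goals scored -1, 0, +1, +1
print(f"GAS T-score = {gas_t_score([-1, 0, 1, 1]):.1f}")
```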
Application of online measures to monitor and evaluate multiplatform fusion performance
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.
1999-07-01
A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is again computed and the agent assesses its estimation process and refines it accordingly.
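A minimal sketch of the entropy idea described above: the Shannon entropy of a track's classification distribution is computed before and after fusing an offboard measurement, and the drop in entropy serves as the utility score. The probabilities below are hypothetical.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical target-classification probabilities for one track,
# before and after fusing an offboard measurement.
before = [0.40, 0.35, 0.25]          # ambiguous between three classes
after = [0.80, 0.15, 0.05]           # offboard data sharpens the picture

gain = shannon_entropy(before) - shannon_entropy(after)
print(f"entropy before = {shannon_entropy(before):.2f} bits, "
      f"after = {shannon_entropy(after):.2f} bits, "
      f"improvement = {gain:.2f} bits")
```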
Bridge, P D; Gallagher, R E; Berry-Bobovski, L C
2000-01-01
Fundamental to the development of educational programs and curricula is the evaluation of processes and outcomes. Unfortunately, many otherwise well-designed programs do not incorporate stringent evaluation methods and are limited in measuring program development and effectiveness. Using an advertising lesson in a school-based tobacco-use prevention curriculum as a case study, the authors examine the role of evaluation in the development, implementation, and enhancement of the curricular lesson. A four-phase formative and summative evaluation design was developed to divide the program-evaluation continuum into a structured process that would aid in the management of the evaluation, as well as assess curricular components. Formative and summative evaluation can provide important guidance in the development, implementation, and enhancement of educational curricula. Evaluation strategies identified unexpected barriers and allowed the project team to make necessary "time-relevant" curricular adjustments during each stage of the process.
Establishment of metrological traceability in porosity measurements by x-ray computed tomography
NASA Astrophysics Data System (ADS)
Hermanek, Petr; Carmignato, Simone
2017-09-01
Internal porosity is a phenomenon inherent to many manufacturing processes, such as casting, additive manufacturing, and others. Since these defects cannot be completely avoided by improving production processes, it is important to have a reliable method to detect and evaluate them accurately. Accurate evaluation becomes even more important given current industrial trends to minimize the size and weight of products on one side, and enhance their complexity and performance on the other. X-ray computed tomography (CT) has emerged as a promising instrument for holistic porosity measurements, offering several advantages over equivalent methods already established in the detection of internal defects. The main shortcomings of the conventional techniques pertain to overly general information about total porosity content (e.g. Archimedes method) or the destructive way of testing (e.g. microscopy of cross-sections). On the contrary, CT is a nondestructive technique providing complete information about the size, shape and distribution of internal porosity. However, due to the lack of international standards and the fact that it is a relatively new measurement technique, CT as a measurement technology has not yet reached maturity. This study proposes a procedure for the establishment of measurement traceability in porosity measurements by CT, including the necessary evaluation of measurement uncertainty. The traceability transfer is carried out through a novel reference standard calibrated by optical and tactile coordinate measuring systems. The measurement uncertainty is calculated following international standards and guidelines. In addition, the accuracy of porosity measurements by CT with the associated measurement uncertainty is evaluated using the reference standard.
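As a hedged illustration of the GUM-style uncertainty evaluation referred to above, the sketch below combines uncorrelated uncertainty contributions by root-sum-of-squares and applies a coverage factor of k = 2; the listed contributions and their magnitudes are assumptions, not the study's actual uncertainty budget.

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-of-squares of (sensitivity * standard uncertainty) terms,
    assuming uncorrelated inputs, as in the GUM."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

# (sensitivity coefficient, standard uncertainty) pairs in mm, e.g.
# calibration of the reference standard, CT scale error, and repeatability
contributions = [(1.0, 0.8e-3), (1.0, 1.5e-3), (1.0, 1.1e-3)]

u_c = combined_standard_uncertainty(contributions)
U = 2.0 * u_c          # expanded uncertainty with coverage factor k = 2
print(f"u_c = {u_c * 1e3:.2f} um,  U (k=2) = {U * 1e3:.2f} um")
```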
New agreement measures based on survival processes
Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.
2013-01-01
The need to assess agreement arises in many scenarios in biomedical sciences when measurements are taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes that can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617
Sansgiry, S S; Cady, P S
1997-01-01
Currently marketed over-the-counter (OTC) medication labels were simulated and tested in a controlled environment to understand consumer evaluation of OTC label information. Two factors, consumers' age (younger and older adults) and label design (picture-only, verbal-only, congruent picture-verbal, and noncongruent picture-verbal), were controlled and tested to evaluate consumer information processing. The effects of the independent variables, namely comprehension of label information (understanding) and product evaluations (satisfaction, certainty, and perceived confusion), on the dependent variable, purchase intention, were evaluated. Intention, measured as purchase recommendation, was significantly related to product evaluations and affected by the label design factor. Participants' level of perceived confusion was more important than actual understanding of information on OTC medication labels. A Label Evaluation Process Model was developed which could be used for future testing of OTC medication labels.
A Computer-Based Instructional Support Network: Design, Development, and Evaluation
1990-09-01
mail systems, the ISN would be relatively easy to learn and use. Survey Related Projects: A literature review was conducted using the Manpower and... the impact of the ISN upon student attrition, performance, and attitudes. The context of the evaluation was a sequence of NPS Continuing Education (CE...waited for the subject to complete the task alone. Process Measures: The following forms of process data (relating to evaluation objectives 1 and 2) were
Adapting large batteries of research measures for immigrants.
Aroian, Karen J
2013-06-01
A four-step, streamlined process to adapt a large battery of measures for a study of mother-child adjustment in Arab Muslim immigrants and the lessons learned are described. The streamlined process includes adapting content, translation, pilot testing, and extensive psychometric evaluation but omits in-depth qualitative inquiry to identify the full content domain of the constructs of interest and cognitive interviews to assess how respondents interpret items. Lessons learned suggest that the streamlined process is not sufficient for certain measures, particularly when there is little published information about how the measure performs with different groups, the measure requires substantial item revision to achieve content equivalence, and the measure is both challenging to translate and has little to no redundancy. When these conditions are present, condition-specific procedures need to be added to the streamlined process.
NASA Astrophysics Data System (ADS)
Qiu, Liming; Shen, Rongxi; Song, Dazhao; Wang, Enyuan; Liu, Zhentang; Niu, Yue; Jia, Haishan; Xia, Shankui; Zheng, Xiangxin
2017-12-01
An accurate and non-destructive method for evaluating the impact range of hydraulic measures in coal seams is urgently needed. To address this need, a theoretical study and field test of the direct current (DC) method for evaluating the impact range of coal seam hydraulic measures are presented. We first analyzed the apparent resistivity response of an abnormal conductive zone in a coal seam, then investigated the principle of non-destructive testing of the coal seam hydraulic measure impact range using the DC method, and applied an accurate evaluation method based on the apparent resistivity cloud chart. Finally, taking hydraulic fracturing and hydraulic flushing as examples, field experiments were carried out in coal mines to evaluate the impact ranges. The results showed that: (1) in the process of hydraulic fracturing, coal conductivity was enhanced by high-pressure water in the coal seam, and after hydraulic fracturing, the boundary of the apparent resistivity decrease area was the boundary of the impact range. (2) In the process of hydraulic flushing, coal conductivity was reduced by holes and cracks in the coal seam, and after hydraulic flushing, the boundary of the apparent resistivity increase area was the boundary of the impact range. (3) After the implementation of the hydraulic measures, there may be some blind zones in the coal seam; in hydraulic fracturing blind zones, the apparent resistivity increased or stayed constant, while in hydraulic flushing blind zones, the apparent resistivity decreased or stayed constant. The DC method enabled a comprehensive and non-destructive evaluation of the impact range of the hydraulic measures, and greatly reduced the time and cost of evaluation.
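For orientation, the sketch below evaluates apparent resistivity with the standard Wenner-array expression rho_a = 2*pi*a*(dV/I) and compares readings before and after a hydraulic measure; the array geometry and the numbers are assumptions for illustration, since the survey configuration used underground is not given in the abstract.

```python
import math

def apparent_resistivity_wenner(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm.m) for a Wenner electrode array:
    rho_a = 2 * pi * a * (dV / I).

    The coal seam surveys may use a different array geometry; the Wenner
    configuration is assumed here purely for illustration.
    """
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Illustrative readings before and after a hydraulic fracturing measure
before = apparent_resistivity_wenner(spacing_m=2.0, voltage_v=0.45, current_a=0.10)
after = apparent_resistivity_wenner(spacing_m=2.0, voltage_v=0.30, current_a=0.10)
print(f"rho_a before = {before:.1f} ohm.m, after = {after:.1f} ohm.m")
print("apparent resistivity decreased -> point lies inside the fracturing impact range"
      if after < before else "no decrease -> possibly outside the range or a blind zone")
```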
Tombaugh, Tom N; Rees, Laura; Stormer, Peter; Harrison, Allyson G; Smith, Andra
2007-01-01
In spite of the fact that reaction time (RT) measures are sensitive to the effects of traumatic brain injury (TBI), few RT procedures have been developed for use in standard clinical evaluations. The computerized test of information processing (CTIP) [Tombaugh, T. N., & Rees, L. (2000). Manual for the computerized tests of information processing (CTIP). Ottawa, Ont.: Carleton University] was designed to measure the degree to which TBI decreases the speed at which information is processed. The CTIP consists of three computerized programs that progressively increase the amount of information that is processed. Results of the current study demonstrated that RT increased as the difficulty of the CTIP tests increased (known as the complexity effect), and as severity of injury increased (from mild to severe TBI). The current study also demonstrated the importance of selecting a non-biased measure of variability. Overall, findings suggest that the CTIP is an easy to administer and sensitive measure of information processing speed.
Quantification is Neither Necessary Nor Sufficient for Measurement
NASA Astrophysics Data System (ADS)
Mari, Luca; Maul, Andrew; Torres Irribarra, David; Wilson, Mark
2013-09-01
Being an infrastructural, widespread activity, measurement is laden with stereotypes. Some of these concern the role of measurement in the relation between quality and quantity. In particular, it is sometimes argued or assumed that quantification is necessary for measurement; it is also sometimes argued or assumed that quantification is sufficient for or synonymous with measurement. To assess the validity of these positions the concepts of measurement and quantitative evaluation should be independently defined and their relationship analyzed. We contend that the defining characteristic of measurement should be the structure of the process, not a feature of its results. Under this perspective, quantitative evaluation is neither sufficient nor necessary for measurement.
Quality measurement and benchmarking of HPV vaccination services: a new approach.
Maurici, Massimo; Paulon, Luca; Campolongo, Alessandra; Meleleo, Cristina; Carlino, Cristiana; Giordani, Alessandro; Perrelli, Fabrizio; Sgricia, Stefano; Ferrante, Maurizio; Franco, Elisabetta
2014-01-01
A new measurement process based upon a well-defined mathematical model was applied to evaluate the quality of human papillomavirus (HPV) vaccination centers in 3 of 12 Local Health Units (ASLs) within the Lazio Region of Italy. The quality aspects considered for evaluation were communicational efficiency, organizational efficiency and comfort. The overall maximum achievable value was 86.10%, while the HPV vaccination quality scores for ASL1, ASL2 and ASL3 were 73.07%, 71.08%, and 67.21%, respectively. With this new approach it is possible to represent the probabilistic reasoning of a stakeholder who evaluates the quality of a healthcare provider. All ASLs had margins for improvements and optimal quality results can be assessed in terms of better performance conditions, confirming the relationship between the resulting quality scores and HPV vaccination coverage. The measurement process was structured into three steps and involved four stakeholder categories: doctors, nurses, parents and vaccinated women. In Step 1, questionnaires were administered to collect different stakeholders' points of view (i.e., subjective data) that were elaborated to obtain the best and worst performance conditions when delivering a healthcare service. Step 2 of the process involved the gathering of performance data during the service delivery (i.e., objective data collection). Step 3 of the process involved the elaboration of all data: subjective data from step 1 are used to define a "standard" to test objective data from step 2. This entire process led to the creation of a set of scorecards. Benchmarking is presented as a result of the probabilistic meaning of the evaluated scores.
Single photon laser altimeter simulator and statistical signal processing
NASA Astrophysics Data System (ADS)
Vacek, Michael; Prochazka, Ivan
2013-05-01
Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical in spacecraft navigation. Simultaneously, the receiver part may be utilized for Earth-to-satellite links, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with a very low signal-to-noise ratio, eliminating the need for large telescopes and a high-power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (˜10 km), low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
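A minimal sketch of the statistical accumulation idea described above, assuming a simple histogram-and-threshold scheme: repetitive shots are binned by range, and a surface return is declared where a bin rises well above the Poisson background. All names and parameter values are illustrative, not those of the simulator in the paper.

```python
# Hypothetical sketch: accumulate repetitive single-photon range returns in a
# histogram and flag the bin(s) rising significantly above the Poisson noise
# floor. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_shots = 10_000          # repetitive laser shots
p_signal = 0.05           # mean signal photons per shot (much less than 1)
true_range_bin = 420      # index of the surface-return bin
n_bins = 1000             # range gate divided into bins
background_rate = 2e-3    # mean background/dark counts per bin per shot

# Accumulate detections over many shots
hist = np.zeros(n_bins)
hist += rng.poisson(background_rate * n_shots, size=n_bins)   # noise counts
hist[true_range_bin] += rng.poisson(p_signal * n_shots)       # signal counts

# Detection: a bin must exceed the background estimate by k standard deviations
bg_mean = np.median(hist)                    # robust background estimate
k = 5.0
threshold = bg_mean + k * np.sqrt(bg_mean)   # Poisson: variance ~ mean
candidates = np.flatnonzero(hist > threshold)

print("detected range bin(s):", candidates, "expected:", true_range_bin)
```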
Psychometric assessment of the processes of change scale for sun protection.
Sillice, Marie A; Babbin, Steven F; Redding, Colleen A; Rossi, Joseph S; Paiva, Andrea L; Velicer, Wayne F
2018-01-01
The fourteen-factor Processes of Change Scale for Sun Protection assesses behavioral and experiential strategies that underlie the process of sun protection acquisition and maintenance. Variations of this measure have been used effectively in several randomized sun protection trials, both for evaluation and as a basis for intervention. However, there are no published studies, to date, that evaluate the psychometric properties of the scale. The present study evaluated factorial invariance and scale reliability at baseline in a national sample (N = 1360) of adults involved in a Transtheoretical Model tailored intervention for exercise and sun protection. Invariance testing ranged from least to most restrictive: Configural Invariance (constrains only the factor structure and zero loadings); Pattern Identity Invariance (equal factor loadings across target groups); and Strong Factorial Invariance (equal factor loadings and measurement errors). Multi-sample structural equation modeling tested the invariance of the measurement model across seven subgroups: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Strong factorial invariance was found across all subgroups. Internal consistency coefficient alpha and factor rho reliability were, respectively, .83 and .80 for behavioral processes, .91 and .89 for experiential processes, and .93 and .91 for the global scale. These results provide strong empirical evidence that the scale is consistent, has internal validity, and can be used in research interventions with population-based adult samples.
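For reference, the internal consistency figures quoted above are coefficient alpha values; the standard Cronbach formula for a k-item scale is shown below (textbook form, not a result of the study):

```latex
% Cronbach's alpha for a scale of k items, with item variances \sigma^2_{Y_i}
% and total-score variance \sigma^2_X (generic definition, not study-specific):
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right)
```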
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both the whole tooth profile measurement and a single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through the digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by the optimization iterative solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.
Hadzidiakos, Daniel; Horn, Nadja; Degener, Roland; Buchner, Axel; Rehberg, Benno
2009-08-01
There have been reports of memory formation during general anesthesia. The process-dissociation procedure has been used to determine if these are controlled (explicit/conscious) or automatic (implicit/unconscious) memories. This study used the process-dissociation procedure with the original measurement model and one which corrected for guessing to determine if more accurate results were obtained in this setting. A total of 160 patients scheduled for elective surgery were enrolled. Memory for words presented during propofol and remifentanil general anesthesia was tested postoperatively by using a word-stem completion task in a process-dissociation procedure. To assign possible memory effects to different levels of anesthetic depth, the authors measured depth of anesthesia using the BIS XP monitor (Aspect Medical Systems, Norwood, MA). Word-stem completion performance showed no evidence of memory for intraoperatively presented words. Nevertheless, an evaluation of these data using the original measurement model for process-dissociation data suggested evidence of controlled (C = 0.05; 95% confidence interval [CI] 0.02-0.08) and automatic (A = 0.11; 95% CI 0.09-0.12) memory processes (P < 0.01). However, when the data were evaluated with an extended measurement model taking base rates into account adequately, no evidence for controlled (C = 0.00; 95% CI -0.04 to 0.04) or automatic (A = 0.00; 95% CI -0.02 to 0.02) memory processes was obtained. The authors report and discuss parallel findings for published data sets that were generated by using the process-dissociation procedure. Patients had no memories for auditory information presented during propofol/remifentanil anesthesia after midazolam premedication. The use of the process-dissociation procedure with the original measurement model erroneously detected memories, whereas the extended model, corrected for guessing, correctly revealed no memory.
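For orientation, the original process-dissociation measurement model (Jacoby) expresses inclusion and exclusion performance through controlled (C) and automatic (A) components as below; the extended model referred to above additionally corrects these probabilities for base-rate guessing. This is the generic textbook form, not the authors' exact parameterization:

```latex
% Original process-dissociation equations: inclusion and exclusion completion
% probabilities in terms of controlled (C) and automatic (A) influences;
% guessing/base-rate corrections are omitted here.
P(\text{inclusion}) = C + (1 - C)\,A, \qquad P(\text{exclusion}) = (1 - C)\,A
\;\;\Rightarrow\;\;
C = P(\text{inclusion}) - P(\text{exclusion}), \quad
A = \frac{P(\text{exclusion})}{1 - C}
```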
ERIC Educational Resources Information Center
Saleh, Ayman; Fuchs, Catherine; Taylor, Warren D.; Niarhos, Frances
2018-01-01
Objective: Neurocognitive evaluations are commonly integrated with clinical assessment to evaluate adult Attention Deficit Hyperactivity Disorder (ADHD). The study goal is to identify measures most strongly related to ADHD diagnosis and to determine their utility in screening processes. Participants: 230 students who were evaluated at the Vanderbilt…
ERIC Educational Resources Information Center
Moreau, Katherine Ann; Clarkin, Chantalle Louise
2012-01-01
Background: Although pediatric healthcare organizations have widely implemented the philosophy of family-centered care (FCC), evaluators and health professionals have not explored how to preserve the philosophy of FCC in evaluation processes. Purpose: To illustrate how fourth generation evaluation, in theory, could facilitate collaboration between…
ERIC Educational Resources Information Center
Gerjets, Peter; Kammerer, Yvonne; Werner, Benita
2011-01-01
Web searching for complex information requires searchers to appropriately evaluate diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…
Quality Measures for the Care of Adult Patients with Obstructive Sleep Apnea
Aurora, R. Nisha; Collop, Nancy A.; Jacobowitz, Ofer; Thomas, Sherene M.; Quan, Stuart F.; Aronsky, Amy J.
2015-01-01
Obstructive sleep apnea (OSA) is a prevalent disorder associated with a multitude of adverse outcomes when left untreated. There is significant heterogeneity in the evaluation and management of OSA resulting in variation in cost and outcomes. Thus, the goal for developing these measures was to have a way to evaluate the outcomes and reliability of the processes involved with the standard care approaches used in the diagnosis and management of OSA. The OSA quality care measures presented here focus on both outcomes and processes. The AASM commissioned the Adult OSA Quality Measures Workgroup to develop quality care measures aimed at optimizing care for adult patients with OSA. These quality care measures developed by the Adult OSA Quality Measures Workgroup are an extension of the original Centers for Medicare & Medicaid Services (CMS) approved Physician Quality Reporting System (PQRS) measures group for OSA. The measures are based on the available scientific evidence, focus on public safety, and strive to improve quality of life and cardiovascular outcomes for individual OSA patients. The three outcomes that were selected were as follows: (1) improve disease detection and categorization; (2) improve quality of life; and (3) reduce cardiovascular risk. After selecting these relevant outcomes, a total of ten process measures were chosen that could be applied and assessed for the purpose of accomplishing these outcomes. In the future, the measures described in this document may be reported through the PQRS in addition to, or as a replacement for, the current OSA measures group. The overall objective for the development of these measures is that implementation of these quality measures will result in improved patient outcomes, reduce the public health burden of OSA, and provide a measurable standard for evaluating and managing OSA. Citation: Aurora RN, Collop NA, Jacobowitz O, Thomas SM, Quan SF, Aronsky AJ. Quality measures for the care of adult patients with obstructive sleep apnea. J Clin Sleep Med 2015;11(3):357–383. PMID:25700878
Development of an evaluation framework for African-European hospital patient safety partnerships.
Rutter, Paul; Syed, Shamsuzzoha B; Storr, Julie; Hightower, Joyce D; Bagheri-Nejad, Sepideh; Kelley, Edward; Pittet, Didier
2014-04-01
Patient safety is recognised as a significant healthcare problem worldwide, and healthcare-associated infections are an important aspect. African Partnerships for Patient Safety is a WHO programme that pairs hospitals in Africa with hospitals in Europe with the objective of working together to improve patient safety. This article describes the development of an evaluation framework for hospital-to-hospital partnerships participating in the programme. The framework was structured around the programme's three core objectives: facilitate strong interhospital partnerships, improve in-hospital patient safety and spread best practices nationally. Africa-based clinicians, their European partners and experts in patient safety were closely involved in developing the evaluation framework in an iterative process. The process defined six domains of partnership strength, each with measurable subdomains. We developed a questionnaire to measure these subdomains. Participants selected six indicators of hospital patient safety improvement from a short-list of 22 based on their relevance, sensitivity to intervention and measurement feasibility. Participants proposed 20 measures of spread, which were refined into a two-part conceptual framework, and a data capture tool was created. Taking a highly participatory approach that closely involved its end users, we developed an evaluation framework and tools to measure partnership strength, patient safety improvements and the spread of best practice.
NASA Astrophysics Data System (ADS)
Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.
2001-12-01
Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard against which to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.
Poll, Gerard H; Miller, Carol A; Mainela-Arnold, Elina; Adams, Katharine Donnelly; Misra, Maya; Park, Ji Sook
2013-01-01
More limited working memory capacity and slower processing for language and cognitive tasks are characteristics of many children with language difficulties. Individual differences in processing speed have not consistently been found to predict language ability or severity of language impairment. There are conflicting views on whether working memory and processing speed are integrated or separable abilities. To evaluate four models for the relations of individual differences in children's processing speed and working memory capacity in sentence imitation. The models considered whether working memory and processing speed are integrated or separable, as well as the effect of the number of operations required per sentence. The role of working memory as a mediator of the effect of processing speed on sentence imitation was also evaluated. Forty-six children with varied language and reading abilities imitated sentences. Working memory was measured with the Competing Language Processing Task (CLPT), and processing speed was measured with a composite of truth-value judgment and rapid automatized naming tasks. Mixed-effects ordinal regression models evaluated the CLPT and processing speed as predictors of sentence imitation item scores. A single mediator model evaluated working memory as a mediator of the effect of processing speed on sentence imitation total scores. Working memory was a reliable predictor of sentence imitation accuracy, but processing speed predicted sentence imitation only as a component of a processing speed by number of operations interaction. Processing speed predicted working memory capacity, and there was evidence that working memory acted as a mediator of the effect of processing speed on sentence imitation accuracy. The findings support a refined view of working memory and processing speed as separable factors in children's sentence imitation performance. Processing speed does not independently explain sentence imitation accuracy for all sentence types, but contributes when the task requires more mental operations. Processing speed also has an indirect effect on sentence imitation by contributing to working memory capacity. © 2013 Royal College of Speech and Language Therapists.
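The single-mediator model mentioned above is conventionally summarized by the product-of-coefficients decomposition; the generic form is sketched below with X for processing speed, M for working memory and Y for sentence imitation (notation is illustrative, not the authors' fitted model):

```latex
% Generic single-mediator decomposition (intercepts omitted):
% X = processing speed, M = working memory, Y = sentence imitation.
M = a X + e_M, \qquad Y = c' X + b M + e_Y,
\qquad \text{indirect effect} = a b, \qquad \text{total effect} = c' + a b
```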
Evaluation of Apache Hadoop for parallel data analysis with ROOT
NASA Astrophysics Data System (ADS)
Lehrack, S.; Duckeck, G.; Ebke, J.
2014-06-01
The Apache Hadoop software is a Java-based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives for distributing analysis data were discussed: either the data were stored in HDFS and processed with MapReduce, or the data were accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as an execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data quickly and reliably with MapReduce. In the evaluation of HDFS, read/write data rates on the local Hadoop cluster were measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.
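A minimal sketch of the kind of read-throughput comparison described above (HDFS versus local NFS); the file paths are placeholders, and the HDFS path assumes the file system is exposed through a local mount (for example via FUSE) rather than the native HDFS client:

```python
# Hypothetical throughput measurement: read a large file in fixed-size chunks
# and report MB/s. Paths are placeholders; point them at an HDFS-mounted file
# and an NFS file to reproduce the kind of comparison described above.
import time

def read_throughput(path, chunk_size=64 * 1024 * 1024):
    """Return (megabytes_read, seconds, MB_per_s) for a sequential read."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    mb = total / 1e6
    return mb, elapsed, mb / elapsed if elapsed > 0 else float("inf")

if __name__ == "__main__":
    for label, path in [("hdfs", "/mnt/hdfs/analysis/sample.root"),
                        ("nfs", "/nfs/analysis/sample.root")]:
        try:
            mb, secs, rate = read_throughput(path)
            print(f"{label}: {mb:.0f} MB in {secs:.1f} s -> {rate:.0f} MB/s")
        except OSError as err:
            print(f"{label}: skipped ({err})")
```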
MEASUREMENT OF INDOOR AIR EMISSIONS FROM DRY-PROCESS PHOTOCOPY MACHINES
The article provides background information on indoor air emissions from office equipment, with emphasis on dry-process photocopy machines. The test method is described in detail along with results of a study to evaluate the test method using four dry-process photocopy machines. ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
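A minimal sketch of the control-limit and capability calculations referred to above, assuming an individuals chart with limits at the mean plus or minus three standard deviations and the usual Cp/Cpk definitions; the measurement values and specification limits are invented placeholders, not the study's data:

```python
# Hypothetical SPC sketch: control limits and process capability for a series
# of electron-beam energy constancy measurements (values are invented).
import statistics

measurements = [0.2, -0.1, 0.0, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, 0.1]  # % deviation
lsl, usl = -2.0, 2.0   # illustrative specification limits

mean = statistics.fmean(measurements)
sd = statistics.stdev(measurements)

ucl, lcl = mean + 3 * sd, mean - 3 * sd            # control limits
cp = (usl - lsl) / (6 * sd)                        # process capability
cpk = min(usl - mean, mean - lsl) / (3 * sd)       # capability w.r.t. centering

print(f"control limits: [{lcl:.2f}, {ucl:.2f}]")
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
out_of_control = [x for x in measurements if not (lcl <= x <= ucl)]
print("points outside control limits:", out_of_control)
```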
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-01-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
Lin, Steve; Scales, Damon C
2016-06-28
High-quality cardiopulmonary resuscitation (CPR) has been shown to improve survival outcomes after cardiac arrest. The current standard in studies evaluating CPR quality is to measure CPR process measures such as chest compression rate, depth, and fraction. Published studies evaluating CPR feedback devices have yielded mixed results. Newer approaches that seek to optimize CPR by measuring physiological endpoints during the resuscitation may lead to individualized patient care and improved patient outcomes.
Energy and Cost Savings of Retro-Commissioning and Retrofit Measures for Large Office Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Weimin; Zhang, Jian; Moser, Dave
2012-08-03
This paper evaluates the energy and cost savings of seven retro-commissioning measures and 29 retrofit measures applicable to most large office buildings. The baseline model is for a hypothetical building with characteristics of large office buildings constructed before 1980. Each retro-commissioning measure is evaluated against the original baseline in terms of its potential energy and cost savings, while each retrofit measure is evaluated against the commissioned building. All measures are evaluated in five locations (Miami, Las Vegas, Seattle, Chicago and Duluth) to understand the impact of weather conditions on energy and cost savings. The results show that implementation of the seven operation and maintenance measures as part of a retro-commissioning process can yield an average energy use reduction of about 22% and an energy cost reduction of about 14%. Widening the zone temperature deadband, lowering VAV terminal minimum airflow set points and lighting upgrades are effective retrofit measures to be considered.
An Evaluation of Effort as a Measure of Levels of Processing
Fisk, Arthur D.; Derrick, William L.; Schneider, Walter
1982-03-01
Report HARL-ONR-8105. The original levels-of-processing approach to human memory (Craik & Lockhart, 1972) contended that verbal stimuli could be classified along a continuum. Cited references include Craik, F. I. M., & Lockhart, R., Levels of processing: A framework for memory research, and a study of sustained monitoring tasks (Human Factors, 1979, 21, 647-653).
Ku-band signal design study [space shuttle orbiter data processing network]
NASA Technical Reports Server (NTRS)
Rubin, I.
1978-01-01
Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ducoté, Julien; Dettoni, Florent; Bouyssou, Régis; Le-Gratiet, Bertrand; Carau, Damien; Dezauzier, Christophe
2015-03-01
Patterning process control of advanced nodes has required major changes over the last few years. Process control needs for critical patterning levels since the 28nm technology node are extremely aggressive, meaning that metrology accuracy and sensitivity must be finely tuned. The introduction of pitch splitting (Litho-Etch-Litho-Etch) at the 14FDSOI node requires the development of specific metrologies to adopt advanced process control (for CD, overlay and focus corrections). The pitch splitting process leads to final line CD uniformities that are a combination of the CD uniformities of the two exposures, while the space CD uniformities depend on both CD and overlay variability. In this paper, investigations of CD and overlay process control of the 64nm minimum pitch at the Metal1 level of 14FDSOI technology, within the double patterning process flow (litho, hard mask etch, line etch), are presented. Various measurements with SEMCD tools (Hitachi) and overlay tools (KT for Image Based Overlay - IBO, and ASML for Diffraction Based Overlay - DBO) are compared. Metrology targets are embedded within a block instanced several times within the field to characterize intra-field process variations. Specific SEMCD targets were designed for independent measurement of both line CDs (A and B) and space CDs (A to B and B to A) for each exposure within a single measurement during the double patterning flow. Based on those measurements, the correlation between overlay determined with SEMCD and with standard overlay tools can be evaluated. Such correlation at different steps through the double patterning flow is investigated with respect to the metrology type. Process correction models are evaluated with respect to the measurement type and the intra-field sampling.
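The dependence of space CD on both line CDs and overlay noted above follows from simple pitch-split geometry; with final pattern pitch P, line widths CD_A and CD_B, and an overlay shift OVL of the second exposure relative to the first, the two alternating spaces can be written as below (an idealized relation, not the paper's model):

```latex
% Idealized pitch-split line/space geometry: the two alternating spaces between
% an A line and its neighbouring B lines, for final pattern pitch P and an
% overlay shift OVL of exposure B relative to exposure A.
s_{A \to B} = P - \frac{CD_A + CD_B}{2} + OVL, \qquad
s_{B \to A} = P - \frac{CD_A + CD_B}{2} - OVL
```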
DOT National Transportation Integrated Search
2014-08-01
The evaluation of the curing process of fresh concrete is critical to its construction process and monitoring. Traditionally, stress sensors and compressive wave sensors were often used to measure concrete properties. Bender element (BE) test, a non...
Development of an Online Toolkit for Measuring Performance in Health Emergency Response Exercises.
Agboola, Foluso; Bernard, Dorothy; Savoia, Elena; Biddinger, Paul D
2015-10-01
Exercises that simulate emergency scenarios are accepted widely as an essential component of a robust Emergency Preparedness program. Unfortunately, the variability in the quality of the exercises conducted, and the lack of standardized processes to measure performance, has limited the value of exercises in measuring preparedness. In order to help health organizations improve the quality and standardization of the performance data they collect during simulated emergencies, a model online exercise evaluation toolkit was developed using performance measures tested in over 60 Emergency Preparedness exercises. The exercise evaluation toolkit contains three major components: (1) a database of measures that can be used to assess performance during an emergency response exercise; (2) a standardized data collection tool (form); and (3) a program that populates the data collection tool with the measures that have been selected by the user from the database. The evaluation toolkit was pilot tested from January through September 2014 in collaboration with 14 partnering organizations representing 10 public health agencies and four health care agencies from eight states across the US. Exercise planners from the partnering organizations were asked to use the toolkit for their exercise evaluation process and were interviewed to provide feedback on the use of the toolkit, the generated evaluation tool, and the usefulness of the data being gathered for the development of the exercise after-action report. Ninety-three percent (93%) of exercise planners reported that they found the online database of performance measures appropriate for the creation of exercise evaluation forms, and they stated that they would use it again for future exercises. Seventy-two percent (72%) liked the exercise evaluation form that was generated from the toolkit, and 93% reported that the data collected by the use of the evaluation form were useful in gauging their organization's performance during the exercise. Seventy-nine percent (79%) of exercise planners preferred the evaluation form generated by the toolkit to other forms of evaluations. Results of this project show that users found the newly developed toolkit to be user friendly and more relevant to measurement of specific public health and health care capabilities than other tools currently available. The developed toolkit may contribute to the further advancement of developing a valid approach to exercise performance measurement.
Berry, Tanya R; Rodgers, Wendy M; Divine, Alison; Hall, Craig
2018-06-19
Discrepancies between automatically activated associations (i.e., implicit evaluations) and explicit evaluations of motives (measured with a questionnaire) could lead to greater information processing to resolve discrepancies or self-regulatory failures that may affect behavior. This research examined the relationship of health and appearance exercise-related explicit-implicit evaluative discrepancies, the interaction between implicit and explicit evaluations, and the combined value of explicit and implicit evaluations (i.e., the summed scores) to dropout from a yearlong exercise program. Participants (N = 253) completed implicit health and appearance measures and explicit health and appearance motives at baseline, prior to starting the exercise program. The sum of implicit and explicit appearance measures was positively related to weeks in the program, and discrepancy between the implicit and explicit health measures was negatively related to length of time in the program. Implicit exercise evaluations and their relationships to oft-cited motives such as appearance and health may inform exercise dropout.
Measurement issues in the evaluation of chronic disease self-management programs.
Nolte, Sandra; Elsworth, Gerald R; Newman, Stanton; Osborne, Richard H
2013-09-01
To provide an in-depth analysis of outcome measures used in the evaluation of chronic disease self-management programs consistent with the Stanford curricula. Based on a systematic review of self-management programs, effect sizes derived from reported outcome measures are categorized according to the quality of life appraisal model developed by Schwartz and Rapkin, which classifies outcomes from performance-based measures (e.g., clinical outcomes) to evaluation-based measures (e.g., emotional well-being). The majority of outcomes assessed in self-management trials are based on evaluation-based methods. Overall, effects on knowledge, the only performance-based measure observed in the selected trials, are generally medium to large. In contrast, substantially more inconsistent results are found for both perception- and evaluation-based measures, which mostly range between nil and small positive effects. The effectiveness of self-management interventions and the resulting recommendations for health policy makers are most frequently derived from highly variable evaluation-based measures, that is, types of outcomes that potentially carry a substantial amount of measurement error and/or bias such as response shift. Therefore, decisions regarding the value and efficacy of chronic disease self-management programs need to be interpreted with care. More research, especially qualitative studies, is needed to unravel cognitive processes and the role of response shift bias in the measurement of change.
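The effect sizes being categorized are typically standardized mean differences; the standard Cohen's d form with a pooled standard deviation is reproduced below for reference (generic definition, not the review's exact computation):

```latex
% Standardized mean difference (Cohen's d) with pooled standard deviation;
% conventional benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```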
Teacher Evaluation and Music Education: Joining the National Discussion
ERIC Educational Resources Information Center
Overland, Corin T.
2014-01-01
Between 2009 and 2014, thirty-seven states in the United States adopted or significantly amended their teacher evaluation laws, mostly shifting toward using measurements of student growth on achievement tests. Yet, the processes used to evaluate core subjects have not always transitioned smoothly to nontested or artistic content, causing some…
Student Image, Student Evaluation and Education.
ERIC Educational Resources Information Center
Tate, Eugene D.
In this paper the author investigates the function of student evaluation in relation to the educational process. He concludes that traditional approaches are inadequate because they view evaluation as either a static measure of information comprehension or as a coercive tool. The author, instead of viewing education as static, linear…
Let's Talk about Race: Evaluating a College Interracial Discussion Group on Race
ERIC Educational Resources Information Center
Ashby, Kimberly M.; Collins, Dana L.; Helms, Janet E.; Manlove, Joshua
2018-01-01
The authors evaluate Dialogues on Race, an interracial group intervention in which undergraduate student facilitators led conversations about race with their peers. The evaluation process is described, including developing collaborative relationships, identifying program goals, selecting measures, and analyzing and presenting results. The authors…
Evaluating the Impact of Training: A Collection of Federal Agency Evaluation Practices.
ERIC Educational Resources Information Center
Salinger, Ruth; Bartlett, Joan
The purpose of this document is to share various approaches used by federal agencies to assess needs and measure training effectiveness. The emphasis in the descriptions is on the evaluation process rather than on the results. One program was evaluated by employing return-on-investment (ROI) data and using volunteer line personnel who conducted…
Evaluating the Impact of Training: A Collection of Federal Agency Evaluation Practices. Volume 2.
ERIC Educational Resources Information Center
Salinger, Ruth; Roberts, Cynthia
The purpose of this publication on agency training evaluation practices is to share approaches used by federal agencies to assess needs and measure training effectiveness. Emphasis is placed on the process of evaluation. Names of the agencies and highlights of the examples used by each follow: (1) Plant Protection and Quarantine (Department of…
ERIC Educational Resources Information Center
Smits, Pernelle A.; Champagne, Francois; Farand, Lambert
2012-01-01
The evaluation of interventions is becoming increasingly common and now often seeks to involve managers in the process. Such practical participatory evaluation (PPE) aims to increase the use of evaluation results through the participation of stakeholders. This study focuses on the propensity of health managers for PPE, as measured through the…
ERIC Educational Resources Information Center
Van Wart, Geraldine
This fourth year evaluation reports the effects and usage of "Carrascolendas," a children's television series in Spanish and English. Research was conducted in Texas schools and encompassed three phases: a field experiment to measure learning effects; attitudinal surveys among teachers, parents, and children; and a process evaluation of…
Weinreich, André; Funcke, Jakob Maria
2014-01-01
Drawing on recent findings, this study examines whether valence concordant electromyography (EMG) responses can be explained as an unconditional effect of mere stimulus processing or as somatosensory simulation driven by task-dependent processing strategies. While facial EMG over the Corrugator supercilii and the Zygomaticus major was measured, each participant performed two tasks with pictures of album covers. One task was an affective evaluation task and the other was to attribute the album covers to one of five decades. The Embodied Emotion Account predicts that valence concordant EMG is more likely to occur if the task necessitates a somatosensory simulation of the evaluative meaning of stimuli. Results support this prediction with regard to the Corrugator supercilii in that valence concordant EMG activity was present in the affective evaluation task but not in the non-evaluative task. Results for the Zygomaticus major were ambiguous. Our findings are in line with the view that EMG activity is an embodied part of the evaluation process and not a mere physical outcome.
Candidate Quality Measures for Hand Surgery.
2017-11-01
Quality measures are tools used by physicians, health care systems, and payers to evaluate performance, monitor the outcomes of interventions, and inform quality improvement efforts. Few quality measures exist that address hand surgery care. We completed a RAND/UCLA (University of California Los Angeles) Delphi Appropriateness process with the goal of developing and evaluating candidate hand surgery quality measures to be used for national quality measure development efforts. A consortium of 9 academic upper limb surgeons completed a RAND/UCLA Delphi Appropriateness process to evaluate the importance, scientific acceptability, usability, and feasibility of 44 candidate quality measures. These addressed hand problems the panelists felt were most appropriate for quality measure development. Panelists rated the measures on an ordinal scale between 1 (definitely not valid) and 9 (definitely valid) in 2 rounds (preliminary round and final round) with an intervening face-to-face discussion. Ratings from 1 to 3 were considered not valid, 4 to 6 as equivocal or uncertain, and 7 to 9 as valid. If no more than 2 of the 9 ratings were outside the 3-point range that included the median (1-3, 4-6, or 7-9), the panelists were considered to be in agreement. If 3 or more of the panelists' ratings of a measure were within the 1 to 3 range and 3 or more ratings were in the 7 to 9 range, the panelists were considered to be in disagreement. There was agreement on 43% (19) of the measures as important, 27% (12) as scientifically sound, 48% (21) as usable, and 59% (26) as feasible to complete. Ten measures met all 4 of these criteria and were, therefore, considered valid measurements of quality. Quality measures that were developed address outcomes (patient-reported outcomes for assessment and improvement of function) and processes of care (utilization rates of imaging, antibiotics, occupational therapy, ultrasound, and operative treatment). The consortium developed 10 measures of hand surgery quality using a validated methodology. These measures merit further development. Quality measures can be used to evaluate the quality of care provided by physicians and health systems and can inform quality and value-based reimbursement models. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
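The rating and agreement rules described above translate directly into a small classification routine; the sketch below implements the stated rules (validity from the median's 3-point band, agreement when no more than two ratings fall outside that band, disagreement when three or more ratings fall in both the 1-3 and 7-9 bands), using invented example ratings:

```python
# Sketch of the RAND/UCLA-style classification rules quoted above, applied to a
# panel of nine ratings on a 1-9 scale. Example ratings are invented.
import statistics

def band(x):
    """Return the 3-point band (1-3, 4-6 or 7-9) containing rating x."""
    return (1, 3) if x <= 3 else (4, 6) if x <= 6 else (7, 9)

def classify(ratings):
    med_band = band(statistics.median(ratings))
    validity = {(1, 3): "not valid", (4, 6): "uncertain", (7, 9): "valid"}[med_band]

    outside = sum(1 for r in ratings if not (med_band[0] <= r <= med_band[1]))
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)

    if low >= 3 and high >= 3:
        agreement = "disagreement"
    elif outside <= 2:
        agreement = "agreement"
    else:
        agreement = "neither"
    return validity, agreement

print(classify([7, 8, 8, 9, 7, 6, 7, 8, 5]))   # -> ('valid', 'agreement')
print(classify([2, 1, 3, 8, 9, 7, 5, 4, 6]))   # -> ('uncertain', 'disagreement')
```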
van Berkel, Jantien; Boot, Cécile R L; Proper, Karin I; Bongers, Paulien M; van der Beek, Allard J
2013-01-01
To evaluate the process of the implementation of an intervention aimed at improving work engagement and energy balance, and to explore associations between process measures and compliance. Process measures were assessed using a combination of quantitative and qualitative methods. The mindfulness training was attended at least once by 81.3% of subjects, and 54.5% were highly compliant. With regard to e-coaching and homework exercises, 6.3% and 8.0%, respectively, were highly compliant. The training was appreciated with a 7.5 score and e-coaching with a 6.8 score. Appreciation of training and e-coaching, satisfaction with trainer and coach, and practical facilitation were significantly associated with compliance. The intervention was implemented well on the level of the mindfulness training, but poorly on the level of e-coaching and homework time investment. To increase compliance, attention should be paid to satisfaction and trainer-participant relationship.
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
ERIC Educational Resources Information Center
Brannick, Michael T., Ed.; Salas, Eduardo, Ed.; Prince, Carolyn, Ed.
This volume presents thoughts on measuring team performance written by experts currently working with teams in fields such as training, evaluation, and process consultation. The chapters are: (1) "An Overview of Team Performance Measurement" (Michael T. Brannick and Carolyn Prince); (2) "A Conceptual Framework for Teamwork Measurement" (Terry L.…
Selecting Models for Measuring Change When True Experimental Conditions Do Not Exist.
ERIC Educational Resources Information Center
Fortune, Jim C.; Hutson, Barbara A.
1984-01-01
Measuring change when true experimental conditions do not exist is a difficult process. This article reviews the artifacts of change measurement in evaluations and quasi-experimental designs, delineates considerations in choosing a model to measure change under nonideal conditions, and suggests ways to organize models to facilitate selection.…
Deliberation before determination: the definition and evaluation of good decision making.
Elwyn, Glyn; Miron-Shatz, Talya
2010-06-01
In this article, we examine definitions of suggested approaches to measuring the concept of good decisions, highlight the ways in which they converge, and explain why we have concerns about their emphasis on post-hoc estimations and post-decisional outcomes, their prescriptive concept of knowledge, and their lack of distinction between the process of deliberation and the act of decision determination. There has been a steady trend to involve patients in decision making tasks in clinical practice, part of a shift away from paternalism towards the concept of informed choice. An increased understanding of the uncertainties that exist in medicine, arising from a weak evidence base and, in addition, the stochastic nature of outcomes at the individual level, has contributed to shifting the responsibility for decision making from physicians to patients. This has led to increasing use of decision support and communication methods, with the ultimate aim of improving decision making by patients. Interest has therefore developed in attempting to define good decision making and in the development of measurement approaches. We ask whether decisions can be judged good or not and, if so, how this goodness might be evaluated. We hypothesize that decisions cannot be measured by reference to their outcomes and offer an alternative means of assessment, which emphasizes the deliberation process rather than the decision's end results. We propose that decision making comprises a pre-decisional process and an act of decision determination, and consider how this model of decision making serves to develop a new approach to evaluating what constitutes a good decision making process. We proceed to offer an alternative, which parses decisions into the pre-decisional deliberation process, the act of determination and post-decisional outcomes. Evaluating the deliberation process, we propose, should comprise a subjective sufficiency of knowledge, as well as emotional processing and affective forecasting of the alternatives. This should form the basis for a good act of determination.
Evaluation of the Air Void Analyzer
2013-07-01
The report evaluates the Air Void Analyzer using air-void structure measurements taken from controlled laboratory mixtures, with brief descriptions of other unpublished testing (Wang et al. 2008; CTL Group). A three-phase approach was used to evaluate the machine, and hypothesis testing using t-statistics was performed to increase understanding of the data collected globally in terms of the processes used.
[Evaluation of measurement uncertainty of welding fume in welding workplace of a shipyard].
Ren, Jie; Wang, Yanrang
2015-12-01
To evaluate the measurement uncertainty of welding fume in the air of the welding workplace of a shipyard, and to provide quality assurance for measurement. According to GBZ/T 192.1-2007 "Determination of dust in the air of workplace-Part 1: Total dust concentration" and JJF 1059-1999 "Evaluation and expression of measurement uncertainty", the uncertainty for determination of welding fume was evaluated and the measurement results were completely described. The concentration of welding fume was 3.3 mg/m(3), and the expanded uncertainty was 0.24 mg/m(3). The repeatability for determination of dust concentration introduced an uncertainty of 1.9%, the measurement using electronic balance introduced a standard uncertainty of 0.3%, and the measurement of sample quality introduced a standard uncertainty of 3.2%. During the determination of welding fume, the standard uncertainty introduced by the measurement of sample quality is the dominant uncertainty. In the process of sampling and measurement, quality control should be focused on the collection efficiency of dust, air humidity, sample volume, and measuring instruments.
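The reported budget is consistent with combining the three relative standard uncertainties in quadrature and expanding with a coverage factor of about k = 2; the sketch below reproduces that arithmetic. The quadrature combination and k = 2 are assumptions, since the abstract does not state the exact method, and rounding of the quoted components explains the small difference from the reported 0.24 mg/m3:

```python
# Sketch of a simple uncertainty budget: combine relative standard
# uncertainties in quadrature and expand with coverage factor k = 2.
# The combination rule and k = 2 are assumptions; the component values
# are those quoted in the abstract.
import math

concentration = 3.3          # mg/m^3
components = {
    "repeatability of dust concentration": 0.019,
    "electronic balance": 0.003,
    "sample quality measurement": 0.032,
}

u_rel = math.sqrt(sum(u**2 for u in components.values()))   # combined relative u
U = 2 * u_rel * concentration                               # expanded, k = 2

print(f"combined relative standard uncertainty: {u_rel:.1%}")
print(f"expanded uncertainty (k=2): {U:.2f} mg/m^3")        # ~0.25, close to the reported 0.24
```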
Metrology for Fuel Cell Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stocker, Michael; Stanfield, Eric
2015-02-04
The project was divided into three subprojects. The first subproject is Fuel Cell Manufacturing Variability and Its Impact on Performance. The objective was to determine if flow field channel dimensional variability has an impact on fuel cell performance. The second subproject is Non-contact Sensor Evaluation for Bipolar Plate Manufacturing Process Control and Smart Assembly of Fuel Cell Stacks. The objective was to enable cost reduction in the manufacture of fuel cell plates by providing a rapid non-contact measurement system for in-line process control. The third subproject is Optical Scatterfield Metrology for Online Catalyst Coating Inspection of PEM Soft Goods. The objective was to evaluate the suitability of Optical Scatterfield Microscopy as a viable measurement tool for in situ process control of catalyst coatings.
NASA Astrophysics Data System (ADS)
Dunn, Michael
2008-10-01
For over 30 years, the Oak Ridge National Laboratory (ORNL) has performed research and development to provide more accurate nuclear cross-section data in the resonance region. The ORNL Nuclear Data (ND) Program consists of four complementary areas of research: (1) cross-section measurements at the Oak Ridge Electron Linear Accelerator; (2) resonance analysis methods development with the SAMMY R-matrix analysis software; (3) cross-section evaluation development; and (4) cross-section processing methods development with the AMPX software system. The ND Program is tightly coupled with nuclear fuel cycle analyses and radiation transport methods development efforts at ORNL. Thus, nuclear data work is performed in concert with nuclear science and technology needs and requirements. Recent advances in each component of the ORNL ND Program have led to improvements in resonance region measurements, R-matrix analyses, cross-section evaluations, and processing capabilities that directly support radiation transport research and development. Of particular importance are the improvements in cross-section covariance data evaluation and processing capabilities. The benefit of these advances to nuclear science and technology research and development will be discussed during the symposium on Nuclear Physics Research Connections to Nuclear Energy.
USDA-ARS?s Scientific Manuscript database
The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...
Arocha, Mariana A; Basilio, Juan; Llopis, Jaume; Di Bella, Enrico; Roig, Miguel; Ardu, Stefano; Mayoral, Juan R
2014-07-01
The aim of this study was to determine, by using a spectrophotometer, the colour stainability of two indirect CAD/CAM processed composites in comparison with two conventionally laboratory-processed composites after immersion for 4 weeks in staining solutions (coffee, black tea and red wine), using distilled water as the control group. Two indirect CAD/CAM composites (Lava Ultimate and Paradigm MZ100) and two conventionally laboratory-processed composites (SR Adoro and Premise Indirect) of shade A2 were selected (160 disc samples). Colour stainability was measured after 4 weeks of immersion in three staining solutions (black tea, coffee, red wine) and distilled water. Specimen colour was measured each week by means of a spectrophotometer (CIE L*a*b* system). Statistical analysis was carried out using repeated-measures ANOVA and Tukey's HSD test to evaluate differences in ΔE00 measurements between groups; the interactions among composites, staining solutions and immersion time were also evaluated. All materials showed significant discoloration (p<0.01) when compared to the control group. The highest ΔE00 was observed with red wine, whereas black tea showed the lowest. Indirect laboratory-processed resin composites showed the highest colour stability compared with the CAD/CAM resin blocks. CAD/CAM processed composites immersed in staining solutions showed lower colour stability when compared to conventionally laboratory-processed resin composites. The demand for CAD/CAM restorations has been increasing; however, colour stainability of such materials has been insufficiently studied, and no comparison of CAD/CAM processed composites versus laboratory-processed indirect composites immersed in staining solutions for long periods has been performed. Copyright © 2014 Elsevier Ltd. All rights reserved.
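Colour change was expressed as ΔE00 (CIEDE2000); for orientation, the basic CIELAB colour difference that CIEDE2000 refines with lightness, chroma and hue weightings is the Euclidean distance below (the full ΔE00 formula is substantially longer and is not reproduced here):

```latex
% Basic CIELAB (CIE76) colour difference between two measurements
% (L_1^*, a_1^*, b_1^*) and (L_2^*, a_2^*, b_2^*); CIEDE2000 (\Delta E_{00})
% adds lightness, chroma and hue weighting functions to this form.
\Delta E^*_{ab} = \sqrt{(L_2^* - L_1^*)^2 + (a_2^* - a_1^*)^2 + (b_2^* - b_1^*)^2}
```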
Ford, James H.; Oliver, Karen A.; Giles, Miriam; Cates-Wessel, Kathryn; Krahn, Dean; Levin, Frances R.
2017-01-01
Background and Objectives: In 2000, the American Board of Medical Specialties implemented the Maintenance of Certification (MOC), a structured process to help physicians identify and implement a quality improvement project to improve patient care. This study reports on findings from an MOC Performance in Practice (PIP) module designed and evaluated by addiction psychiatrists who are members of the American Academy of Addiction Psychiatry (AAAP). Method: A 3-phase process was utilized to recruit AAAP members to participate in the study. The current study utilized data from 154 self-selected AAAP members who evaluated the effectiveness of the MOC Tobacco Cessation PIP. Results: Of the physicians participating, 76% (n = 120) completed the Tobacco PIP. A paired t-test analysis revealed that reported changes in clinical measure documentation were significant across all six measures. Targeted improvement efforts focused on a single clinical measure. Results found that simple change projects designed to improve clinical practice led to substantial changes in self-reported chart documentation for the selected measure. Conclusions: The current findings suggest that addiction psychiatrists can leverage the MOC process to improve clinical care. PMID:27973746
NASA Astrophysics Data System (ADS)
Bowling, Shannon Raye
The aircraft maintenance industry is a complex system consisting of human and machine components; because of this, much emphasis has been placed on improving aircraft-inspection performance. One proven technique for improving inspection performance is the use of training. There are several strategies that have been implemented for training, one of which is feedforward information. The use of prior information (feedforward) is known to positively affect inspection performance. This information can consist of knowledge about defect characteristics (types, severity/criticality, and location) and the probability of occurrence. Although several studies have been conducted that demonstrate the usefulness of feedforward as a training strategy, there are certain research issues that need to be addressed. This study evaluates the effect of feedforward information in a simulated 3-dimensional environment by the use of virtual reality. A controlled study was conducted to evaluate the effectiveness of feedforward information in a simulated aircraft inspection environment. The study was conducted in two phases. The first phase evaluated the difference between general and detailed inspection at different pacing levels. The second phase evaluated the effect of feedforward information pertaining to severity, probability and location. Analyses of the results showed that subjects performing detailed inspection performed significantly better than when performing general inspection. Pacing also had the effect of reducing performance for both general and detailed inspection. The study also found that as the level of feedforward information increases, performance also increases. In addition to evaluating performance measures, the study also evaluated process and subjective measures. It was found that process measures such as number of fixation points, fixation groups, mean fixation duration, and percent area covered were all affected by the treatment levels. Analyses of the subjective measures also found a correlation between the perceived usefulness of feedforward information and the actual effect on performance. The study also examined the potential of virtual reality as a training tool and analyzed the effect that different computational algorithms have on determining various process measures.
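The process measures reported above (number of fixation points, fixation groups, mean fixation duration) depend on how raw gaze samples are grouped into fixations, which is why the choice of algorithm mattered in this study. The abstract does not specify which algorithm was used, so the sketch below shows one common approach, a dispersion-threshold (I-DT) detector, purely as an illustration; the function names, thresholds and derived summary measures are assumptions.

```python
import numpy as np

def detect_fixations(t, x, y, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    t : sample timestamps (s); x, y : gaze coordinates (e.g. degrees).
    Returns a list of (onset, offset, centroid_x, centroid_y) tuples.
    """
    t, x, y = (np.asarray(a, dtype=float) for a in (t, x, y))

    def dispersion(a, b):  # (max - min) in x plus (max - min) in y over samples a..b-1
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    fixations, start, n = [], 0, len(t)
    while start < n:
        end = start
        while end < n and t[end] - t[start] < min_duration:
            end += 1                       # grow the window to the minimum duration
        if end >= n:
            break
        if dispersion(start, end + 1) <= max_dispersion:
            while end + 1 < n and dispersion(start, end + 2) <= max_dispersion:
                end += 1                   # extend while the points stay clustered
            fixations.append((t[start], t[end],
                              x[start:end + 1].mean(), y[start:end + 1].mean()))
            start = end + 1
        else:
            start += 1
    return fixations

def summarize(fixations):
    """Process measures of the kind reported above."""
    durations = [off - on for on, off, _, _ in fixations]
    return {"n_fixations": len(fixations),
            "mean_fixation_duration": float(np.mean(durations)) if durations else 0.0}
```

A different detector (e.g. a velocity-threshold algorithm) applied to the same gaze record would yield different counts and durations, which is the point the study makes about computational algorithms affecting process measures.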
Parental participation in the habilitation process--evaluation from a user perspective.
Granat, T; Lagander, B; Börjesson, M C
2002-11-01
To develop a national instrument for evaluation of parental participation: (1) to obtain a functional measure of quality from a user perspective; (2) as part of quality development in child habilitation services departments; (3) to create common grounds for the evaluation of important aspects of the habilitation process based on the opinions of users and care professionals; (4) to enable evaluation of individual service departments from a more general viewpoint and to highlight areas for improvement; and (5) to enable comparisons of individual service departments in the future against those of others via benchmarking. The Measurement of Processes of Care (MPOC) was deemed to be the method that corresponded most closely with these formulated aims. A shortened version, MPOC 20, had already been produced and was awaiting publication. This shortened version measures the same important aspects of habilitation as the original MPOC. It also has a new scale, with verbal clarification for each step. This makes it more user friendly, as the results are easier to interpret. MPOC 20 was modified to become MPOC 28. This questionnaire was sent out in 11 of 26 counties in Sweden. The target group for the questionnaires was families with children up to 18 years of age who had been in contact with a habilitation services department for at least 1 year. The sample group comprised 4013 randomly selected families. A total of 3391 (84.5%) returned the questionnaire, and 2458 (61%) responded to the questions. Twelve questions that can be regarded as fundamental to the habilitation process emerged from the regression analysis of the questionnaire. These are measures of good quality in the habilitation process as perceived by the parents and are important in their overall satisfaction with habilitation services. Apart from the specific information category, these questions represented all the factors, i.e. enabling/partnership, general information, co-ordinated/comprehensive care and respectful/supportive care. MPOC 28 can be useful as an analytical tool for comparisons over time and for measuring changes in the way in which parents rank the various question areas linked to their overall level of satisfaction with the habilitation services in general.
Hirschi, Jennifer S.; Takeya, Tetsuya; Hang, Chao; Singleton, Daniel A.
2009-01-01
We suggest here and evaluate a methodology for the measurement of specific interatomic distances from a combination of theoretical calculations and experimentally measured 13C kinetic isotope effects. This process takes advantage of a broad diversity of transition structures available for the epoxidation of 2-methyl-2-butene with oxaziridines. From the isotope effects calculated for these transition structures, a theory-independent relationship between the C-O bond distances of the newly forming bonds and the isotope effects is established. Within the precision of the measurement, this relationship in combination with the experimental isotope effects provides a highly accurate picture of the C-O bonds forming at the transition state. The diversity of transition structures also allows an evaluation of the Schramm process for defining transition state geometries based on calculations at non-stationary points, and the methodology is found to be reasonably accurate. PMID:19146405
Methods for Evaluating Emotions Evoked by Food Experiences: A Literature Review
Kaneko, Daisuke; Toet, Alexander; Brouwer, Anne-Marie; Kallen, Victor; van Erp, Jan B. F.
2018-01-01
Besides sensory characteristics of food, food-evoked emotion is a crucial factor in predicting consumers' food preference and therefore in developing new products. Many measures have been developed to assess food-evoked emotions. The aim of this literature review is (i) to give an exhaustive overview of measures used in current research and (ii) to categorize these methods along measurement level (physiological, behavioral, and cognitive) and emotional processing level (unconscious sensory, perceptual/early cognitive, and conscious/decision making). This 3 × 3 categorization may help researchers to compile a set of complementary measures (“toolbox”) for their studies. We included 101 peer-reviewed articles that evaluate consumers' emotions and were published between 1997 and 2016, providing us with 59 different measures. More than 60% of these measures are based on self-reported, subjective ratings and questionnaires (cognitive measurement level) and assess the conscious/decision-making level of emotional processing. This multitude of measures and their overrepresentation in a single category hinders the comparison of results across studies and the building of a complete multi-faceted picture of food-evoked emotions. We recommend (1) to use widely applied, validated measures only, (2) to refrain from using (highly correlated) measures from the same category but use measures from different categories instead, preferably covering all three emotional processing levels, and (3) to acquire and share simultaneously collected physiological, behavioral, and cognitive datasets to improve the predictive power of food choice and other models. PMID:29937744
Evaluation matters: lessons learned on the evaluation of surgical teaching.
Woods, Nicole N
2011-01-01
The traditional system of academic promotion and tenure can make it difficult to reward those who excel at surgical teaching. A successful faculty evaluation process can provide the objective measures of teaching performance needed for performance appraisals and promotion decisions. Over the course of two decades, an extensive faculty evaluation process has been developed in the Department of Surgery at the University of Toronto. This paper presents some of the non-psychometric characteristics of that system. Faculty awareness of the evaluation process, the consistency of its application, trainee anonymity and the materiality of the results are described as key factors of a faculty evaluation system that meets the assessment needs of individual teachers and raises the profile of teaching in surgical departments. Copyright © 2010 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Zerhouni, Oulmann; Bègue, Laurent; Comiran, Francisco; Wiers, Reinout W
2018-01-01
Since implicit attitudes (i.e. evaluations occurring outside of complete awareness) are highly predictive of alcohol consumption, we tested an evaluative learning procedure based on repeatedly pairing a critical stimulus (i.e. alcohol, the CS) with a valenced stimulus (the US) in order to modify implicit attitudes (i.e. evaluative conditioning; EC). We hypothesized that manipulating the learning context to bolster implicit affect misattribution should strengthen EC effects on implicit attitudes toward alcohol, whereas encouraging deliberate processing of CS-US pairs should strengthen EC effects on explicit attitudes. In our study (n=114 students) we manipulated whether CS-US pairs were presented simultaneously or sequentially. Recollective memory was estimated with a Process Dissociation Procedure. Both implicit and explicit attitudes were assessed immediately after the procedure. Behavioral intentions were measured directly after and one week after the EC procedure. We found that EC with sequential presentation had a stronger impact on implicit and explicit measures and on purchase intentions immediately after the procedure and one week after. The present findings provide new evidence that (i) EC is an effective way to change implicit attitudes toward alcohol and (ii) EC may be better described by propositional rather than dual-process accounts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rebar, Amanda L.; Ram, Nilam; Conroy, David E.
2014-01-01
Objective The Single-Category Implicit Association Test (SC-IAT) has been used as a method for assessing automatic evaluations of physical activity, but measurement artifact or consciously-held attitudes could be confounding the outcome scores of these measures. The objective of these two studies was to address these measurement concerns by testing the validity of a novel SC-IAT scoring technique. Design Study 1 was a cross-sectional study, and study 2 was a prospective study. Method In study 1, undergraduate students (N = 104) completed SC-IATs for physical activity, flowers, and sedentary behavior. In study 2, undergraduate students (N = 91) completed a SC-IAT for physical activity, self-reported affective and instrumental attitudes toward physical activity, physical activity intentions, and wore an accelerometer for two weeks. The EZ-diffusion model was used to decompose the SC-IAT into three process component scores including the information processing efficiency score. Results In study 1, a series of structural equation model comparisons revealed that the information processing score did not share variability across distinct SC-IATs, suggesting it does not represent systematic measurement artifact. In study 2, the information processing efficiency score was shown to be unrelated to self-reported affective and instrumental attitudes toward physical activity, and positively related to physical activity behavior, above and beyond the traditional D-score of the SC-IAT. Conclusions The information processing efficiency score is a valid measure of automatic evaluations of physical activity. PMID:25484621
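The EZ-diffusion model referenced above has a published closed-form solution (Wagenmakers et al., 2007) that recovers drift rate, boundary separation and non-decision time from the proportion of correct responses, the variance of correct response times and their mean. The sketch below implements those standard equations; it is not the studies' exact SC-IAT scoring pipeline, and the parameter names and example values are illustrative assumptions.

```python
import math

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    prop_correct : proportion of correct responses (0 < Pc < 1, Pc != 0.5)
    rt_var       : variance of correct response times (s^2)
    rt_mean      : mean of correct response times (s)
    s            : scaling parameter (conventionally 0.1)
    Returns (drift_rate, boundary_separation, nondecision_time).
    """
    pc = prop_correct
    logit = math.log(pc / (1.0 - pc))
    # Drift rate v
    x = logit * (logit * pc**2 - logit * pc + pc - 0.5) / rt_var
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25
    # Boundary separation a
    a = s**2 * logit / v
    # Mean decision time and non-decision time Ter
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = rt_mean - mdt
    return v, a, ter

# Illustrative values only (not data from the SC-IAT studies):
# print(ez_diffusion(prop_correct=0.80, rt_var=0.112, rt_mean=0.723))
```

In practice an edge correction is applied when accuracy is perfect or exactly 0.5, since the logit is then undefined or zero.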
[Adaptability of sweet corn ears to a frozen process].
Ramírez Matheus, Alejandra O; Martínez, Norelkys Maribel; de Bertorelli, Ligia O; De Venanzi, Frank
2004-12-01
The effects of freezing on the quality of three sweet corn hybrids (2038, 2010, 2004) and the control hybrid (Bonanza) were evaluated. Biometric characteristics such as ear size, ear diameter, number of rows and kernel depth were measured, as well as chemical and physical measurements in the fresh and frozen states. The corn ears were frozen at -95 degrees C for 7 minutes. The yield and stability of the frozen ears were evaluated at 45 and 90 days of frozen storage (-18 degrees C). The average commercial yield as frozen corn ears for all the hybrids was 54.2%; the industry reports a similar range of 48% to 54%. The average ear size was 21.57 cm, the number of rows was 15, the ear diameter 45.54 mm and the kernel depth 8.57 mm. None of these measurements differed from commercial values reported by the industry. All corn samples evaluated showed good stability despite the freezing process and storage. Hybrid 2038 ranked highest in quality.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system; thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
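The distinction drawn above, that non-exponential holding times require a semi-Markov rather than a Markov model, can be illustrated with a small simulation in which the embedded transition structure is Markovian but each state has its own, arbitrary holding-time distribution. The states, transition probabilities and distributions below are illustrative assumptions, not values from the measured IBM 3081 data.

```python
import random

# Illustrative 3-state model: normal operation, error, recovery.
# Transition probabilities of the embedded chain (each row sums to 1).
STATES = ["normal", "error", "recovery"]
P = {
    "normal":   {"error": 1.0},
    "error":    {"recovery": 0.9, "normal": 0.1},
    "recovery": {"normal": 1.0},
}

# Holding-time samplers per state. A pure Markov model would use only
# exponential holding times; a semi-Markov model may use any distribution,
# e.g. a heavy-tailed Weibull for time spent in normal operation.
HOLD = {
    "normal":   lambda: random.weibullvariate(100.0, 0.7),
    "error":    lambda: random.expovariate(1.0 / 0.5),
    "recovery": lambda: random.expovariate(1.0 / 2.0),
}

def simulate(t_max=10_000.0, state="normal"):
    """Simulate one trajectory; return total time spent in each state."""
    t, occupancy = 0.0, {s: 0.0 for s in STATES}
    while t < t_max:
        dwell = HOLD[state]()
        occupancy[state] += min(dwell, t_max - t)
        t += dwell
        # Draw the next state from the embedded transition probabilities.
        r, acc = random.random(), 0.0
        for nxt, p in P[state].items():
            acc += p
            if r <= acc:
                state = nxt
                break
    return occupancy

# print(simulate())
```

Replacing the Weibull sampler with an exponential of the same mean gives the corresponding Markov approximation, which is the comparison the sensitivity analysis in the paper addresses.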
Improving reliability of a residency interview process.
Peeters, Michael J; Serres, Michelle L; Gundrum, Todd E
2013-10-14
To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity.
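The g-coefficients reported above come from generalizability theory, in which observed score variance is partitioned into candidate variance and error sources such as content specificity. As a hedged illustration of how such a coefficient can be computed, the sketch below estimates variance components for a fully crossed candidates-by-stations design from two-way ANOVA mean squares; the design, the formula choice (a relative g) and the example data are assumptions rather than the authors' exact analysis.

```python
import numpy as np

def g_coefficient(scores):
    """Generalizability (g) coefficient for a fully crossed
    candidates x stations design with one score per cell.

    scores : 2-D array, rows = candidates (p), columns = stations (s).
    Variance components are estimated from two-way ANOVA mean squares,
    then g = var_p / (var_p + var_residual / n_stations).
    """
    scores = np.asarray(scores, dtype=float)
    n_p, n_s = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    s_means = scores.mean(axis=0)

    ss_p = n_s * ((p_means - grand) ** 2).sum()
    ss_s = n_p * ((s_means - grand) ** 2).sum()
    ss_tot = ((scores - grand) ** 2).sum()
    ss_res = ss_tot - ss_p - ss_s                 # candidate x station + error

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_s - 1))

    var_p = max((ms_p - ms_res) / n_s, 0.0)       # candidate variance
    var_res = ms_res                              # content specificity + error
    return var_p / (var_p + var_res / n_s)

# Example: 5 candidates rated at 4 stations (made-up ratings).
# ratings = [[7, 8, 7, 9], [5, 5, 6, 5], [9, 9, 8, 9], [6, 7, 6, 6], [4, 5, 5, 4]]
# print(round(g_coefficient(ratings), 3))
```

Adding stations increases the divisor on the residual term, which is consistent with the paper's finding that more stations reduce the impact of content specificity.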
EVALUATING MC AND A EFFECTIVENESS TO VERIFY THE PRESENCE OF NUCLEAR MATERIALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. G. DAWSON; J. A MORZINSKI; ET AL
Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured, the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results must meet or exceed performance requirements, and inventory differences must be less than a target/goal quantity. This approach exceeds DOE established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.
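The bookkeeping described above reduces, for each material balance period, to summing the measured additions and removals across all batches and comparing the resulting inventory difference with a target/goal quantity. The sketch below shows that arithmetic in minimal form; the class fields, the goal value and the example masses are illustrative assumptions, not LANL's accounting system.

```python
from dataclasses import dataclass

@dataclass
class UnitProcessBatch:
    """Measured masses (e.g., grams of nuclear material) for one batch."""
    receipts: float      # material transferred into the unit process
    product: float       # measured product transferred out
    waste: float         # measured waste / holdup removed

def inventory_difference(beginning_inventory: float,
                         ending_inventory: float,
                         batches: list) -> float:
    """ID = (beginning inventory + additions) - (removals + ending inventory).

    Additions and removals are accumulated over all batches processed by
    all unit processes in the material balance area during the period.
    """
    additions = sum(b.receipts for b in batches)
    removals = sum(b.product + b.waste for b in batches)
    return (beginning_inventory + additions) - (removals + ending_inventory)

# Illustrative check against a target/goal quantity (values are made up).
# batches = [UnitProcessBatch(500.0, 480.0, 15.0), UnitProcessBatch(300.0, 290.0, 8.0)]
# id_value = inventory_difference(1000.0, 1007.0, batches)
# print(abs(id_value) <= 5.0)   # within the prescribed goal quantity?
```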
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-27
... through primary processing; (2) to analyze the economic performance effects of current management measures; and (3) to analyze the economic performance effects of alternative management measures. The measures... used to track economic performance and to evaluate the economic effects of alternative management...
Hershey, Christine L.; Bhattarai, Achuyt; Florey, Lia S.; McElroy, Peter D.; Nielsen, Carrie F.; Yé, Yazoume; Eckert, Erin; Franca-Koh, Ana Cláudia; Shargie, Estifanos; Komatsu, Ryuichi; Smithson, Paul; Thwing, Julie; Mihigo, Jules; Herrera, Samantha; Taylor, Cameron; Shah, Jui; Mouzin, Eric; Yoon, Steven S.; Salgado, S. René
2017-01-01
Abstract. As funding for malaria control increased considerably over the past 10 years resulting in the expanded coverage of malaria control interventions, so did the need to measure the impact of these investments on malaria morbidity and mortality. Members of the Roll Back Malaria (RBM) Partnership undertook impact evaluations of malaria control programs at a time when there was little guidance in terms of the process for conducting an impact evaluation of a national-level malaria control program. The President’s Malaria Initiative (PMI), as a member of the RBM Partnership, has provided financial and technical support for impact evaluations in 13 countries to date. On the basis of these experiences, PMI and its partners have developed a streamlined process for conducting the evaluations with a set of lessons learned and recommendations. Chief among these are: to ensure country ownership and involvement in the evaluations; to engage stakeholders throughout the process; to coordinate evaluations among interested partners to avoid duplication of efforts; to tailor the evaluation to the particular country context; to develop a standard methodology for the evaluations and a streamlined process for completion within a reasonable time; and to develop tailored dissemination products on the evaluation for a broad range of stakeholders. These key lessons learned and resulting recommendations will guide future impact evaluations of malaria control programs and other health programs. PMID:28990921
The Circle of Evaluation in the Community Junior College.
ERIC Educational Resources Information Center
Trent, James W.
Community colleges need to evaluate, free from prejudice, the nature and impact of their whole system. Lack of effective programs for minority students is one of many discrepancies between objectives held and those actually implemented. The goals of a successful evaluation are to measure and identify a combination of input and process factors that…
Evaluation of spray drift using low speed wind tunnel measurements and dispersion modeling
USDA-ARS?s Scientific Manuscript database
The objective of this work was to evaluate the EPA’s proposed Test Plan for the validation testing of pesticide spray drift reduction technologies (DRTs) for row and field crops, focusing on the evaluation of ground application systems using the low-speed wind tunnel protocols and processing the dat...
The Power of Why: Engaging the Goal Paradox in Program Evaluation
ERIC Educational Resources Information Center
Friedman, Victor J.; Rothman, Jay; Withers, Bill
2006-01-01
Clearly defined and measurable goals are commonly considered prerequisites for effective evaluation. Goal setting, however, presents a paradox to evaluators because it takes place at the interface of rationality and values. The objective of this article is to demonstrate a method for unlocking this paradox by making goal setting a process of…
Reducing the overlay metrology sensitivity to perturbations of the measurement stack
NASA Astrophysics Data System (ADS)
Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen
2017-03-01
Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multiwafer measurements, it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation if the recipe and target have not been selected properly. To this point, an overlay sensitivity check on perturbations of the measurement stack is presented for improvement of the overlay metrology setup flow. An extensive analysis of Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on μDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image plane detection of μDBO instead of pupil plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied for a wide range of applications, independent of layers and devices.
A survey of methods for the evaluation of tissue engineering scaffold permeability.
Pennella, F; Cerino, G; Massai, D; Gallo, D; Falvo D'Urso Labate, G; Schiavi, A; Deriu, M A; Audenino, A; Morbiducci, Umberto
2013-10-01
The performance of porous scaffolds for tissue engineering (TE) applications is evaluated, in general, in terms of porosity, pore size and distribution, and pore tortuosity. These descriptors are often confounding when they are applied to characterize transport phenomena within porous scaffolds. On the contrary, permeability is a more effective parameter in (1) estimating mass and species transport through the scaffold and (2) describing its topological features, thus allowing a better evaluation of the overall scaffold performance. However, the evaluation of TE scaffold permeability suffers from a lack of uniformity and standards in measurement and testing procedures, which makes the comparison of results obtained in different laboratories unfeasible. In this review paper we summarize the most important features influencing TE scaffold permeability, linking them to the theoretical background. An overview of methods applied for TE scaffold permeability evaluation is given, presenting experimental test benches and computational methods applied (1) to integrate experimental measurements and (2) to support the TE scaffold design process. Both experimental and computational limitations in the permeability evaluation process are also discussed.
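Experimental permeability test benches of the kind surveyed above typically infer the intrinsic permeability of a scaffold from a flow test via Darcy's law, k = Q*mu*L/(A*dP). The sketch below encodes that standard relation; the example dimensions and flow values are assumptions for illustration only and do not come from the review.

```python
def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
    """Intrinsic permeability from a constant-flow (or constant-head) test,
    assuming Darcy's law holds (laminar flow through the porous scaffold):

        k = Q * mu * L / (A * dP)        [m^2]

    flow_rate     : Q, volumetric flow rate through the scaffold (m^3/s)
    viscosity     : mu, dynamic viscosity of the perfusing fluid (Pa*s)
    length        : L, scaffold thickness along the flow direction (m)
    area          : A, scaffold cross-sectional area (m^2)
    pressure_drop : dP, pressure difference across the scaffold (Pa)
    """
    return flow_rate * viscosity * length / (area * pressure_drop)

# Illustrative values (not from the review): water at room temperature
# perfused through a 3 mm thick, 10 mm diameter scaffold.
# import math
# k = darcy_permeability(flow_rate=1.0e-7, viscosity=1.0e-3,
#                        length=3.0e-3, area=math.pi * (5.0e-3) ** 2,
#                        pressure_drop=200.0)
# print(f"k = {k:.2e} m^2")
```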
Bossard, B.; Renard, J. M.; Capelle, P.; Paradis, P.; Beuscart, M. C.
2000-01-01
Investing in information technology has become a crucial process in hospital management today. Medical and administrative managers are faced with difficulties in measuring medical information technology costs and benefits due to the complexity of the domain. This paper proposes a preimplementation methodology for evaluating and appraising material, process and human costs and benefits. Based on the users' needs and organizational process analysis, the methodology provides an evaluative set of financial and non-financial indicators which can be integrated in a decision-making and investment evaluation process. We describe the first results obtained after a few months of operation for the Computer-Based Patient Record (CPR) project. Its full acceptance, in spite of some difficulties, encourages us to diffuse the method for the entire project. PMID:11079851
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
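One standard signal detection account of set-size effects, consistent with the decision-stage explanation above, is the unlimited-capacity max-rule model: each display item contributes an independent noisy observation and the observer responds on the basis of the maximum, so accuracy declines with set size even without any perceptual limitation. The Monte Carlo sketch below illustrates that prediction; the d-prime, criterion and trial counts are illustrative assumptions, not parameters from the experiments.

```python
import numpy as np

def max_rule_accuracy(d_prime, set_size, criterion, n_trials=100_000, seed=0):
    """Predicted accuracy of an unlimited-capacity max-rule observer.

    On target-present trials one item has mean d_prime, the rest mean 0;
    on target-absent trials all items have mean 0 (unit-variance noise).
    The observer says "present" if the maximum observation exceeds the
    criterion. Accuracy drops with set size through the decision stage
    alone, with no perceptual capacity limit.
    """
    rng = np.random.default_rng(seed)
    # Target-present trials
    present = rng.normal(0.0, 1.0, size=(n_trials, set_size))
    present[:, 0] += d_prime
    hits = (present.max(axis=1) > criterion).mean()
    # Target-absent trials
    absent = rng.normal(0.0, 1.0, size=(n_trials, set_size))
    false_alarms = (absent.max(axis=1) > criterion).mean()
    return 0.5 * (hits + (1.0 - false_alarms))

# Accuracy as a function of display set size (illustrative parameters).
# for n in (1, 2, 4, 8):
#     print(n, round(max_rule_accuracy(d_prime=2.0, set_size=n, criterion=1.5), 3))
```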
ERIC Educational Resources Information Center
Borus, Michael E.
An approach and methodology for the systematic measurement of the impact of employment-related social programs is presented in this primer. Chapter 1 focuses on evaluation as the third step (the first two being planning and operation) in the process of program implementation. Chapter 2 examines the impacts of social programs. Topics include…
Processing study of a high temperature adhesive
NASA Technical Reports Server (NTRS)
Progar, D. J.
1984-01-01
An adhesive-bonding process cycle study was performed for a polyimidesulphone. The high molecular weight, linear aromatic system possesses properties which make it attractive as a processable, low-cost material for elevated temperature applications. The results of a study to better understand the parameters that affect the adhesive properties of the polymer for titanium alloy adherends are presented. These include the tape preparation, the use of a primer and press and simulated autoclave processing conditions. The polymer was characterized using Fourier transform infrared spectroscopy, glass transition temperature determination, flow measurements, and weight loss measurements. The lap shear strength of the adhesive was used to evaluate the effects of the bonding process variations.
Rosenblum, Sara
2018-01-01
To describe handwriting and executive control features and their inter-relationships among children with developmental dysgraphia, in comparison to controls. Participants included 64 children, aged 10-12 years, 32 with dysgraphia based on the Handwriting Proficiency Screening Questionnaire (HPSQ) and 32 matched controls. Children copied a paragraph onto paper affixed to a digitizer that supplied objective measures of the handwriting process (Computerized Penmanship Evaluation Tool, ComPET). Their written product was evaluated by the Hebrew Handwriting Evaluation (HHE). Parents completed the Behavior Rating Inventory of Executive Function (BRIEF) questionnaire about their child's executive control abilities. Significant group differences were found for handwriting performance measures (HHE and ComPET) and executive control domains (BRIEF). Based on one discriminant function, including handwriting performance and executive control measures, 98.4% of the participants were correctly classified into groups. Significant correlations were found in each group between working memory and legibility as well as for other executive domains and handwriting measures. Furthermore, twenty percent of the variability of the mean pressure applied towards the writing surface among children with dysgraphia was explained by their 'emotional control' (BRIEF). The results strongly suggest consideration of executive control domains to obtain better insight into handwriting impairment characteristics among children with dysgraphia to improve their identification, evaluation and the intervention process.
2018-01-01
Objective To describe handwriting and executive control features and their inter-relationships among children with developmental dysgraphia, in comparison to controls. Method Participants included 64 children, aged 10–12 years, 32 with dysgraphia based on the Handwriting Proficiency Screening Questionnaire (HPSQ) and 32 matched controls. Children copied a paragraph onto paper affixed to a digitizer that supplied objective measures of the handwriting process (Computerized Penmanship Evaluation Tool, ComPET). Their written product was evaluated by the Hebrew Handwriting Evaluation (HHE). Parents completed the Behavior Rating Inventory of Executive Function (BRIEF) questionnaire about their child's executive control abilities. Results Significant group differences were found for handwriting performance measures (HHE and ComPET) and executive control domains (BRIEF). Based on one discriminant function, including handwriting performance and executive control measures, 98.4% of the participants were correctly classified into groups. Significant correlations were found in each group between working memory and legibility as well as for other executive domains and handwriting measures. Furthermore, twenty percent of the variability of the mean pressure applied towards the writing surface among children with dysgraphia was explained by their 'emotional control' (BRIEF). Conclusion The results strongly suggest consideration of executive control domains to obtain better insight into handwriting impairment characteristics among children with dysgraphia to improve their identification, evaluation and the intervention process. PMID:29689111
The Role of Attention in Somatosensory Processing: A Multi-Trait, Multi-Method Analysis
ERIC Educational Resources Information Center
Wodka, Ericka L.; Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different…
Data Input, Processing and Presentation. [helicopter rotor balance measurement
NASA Technical Reports Server (NTRS)
Langer, H. J.
1984-01-01
The problems of data acquisition, processing and display are investigated in the case of a helicopter rotor balance. The types of sensors to be employed are discussed in addition to their placement and application in wind tunnel trials. Finally, the equipment for data processing, evaluation and storage are presented with a description of methods.
Morgan, Lauren; New, Steve; Robertson, Eleanor; Collins, Gary; Rivero-Arias, Oliver; Catchpole, Ken; Pickering, Sharon P; Hadi, Mohammed; Griffin, Damian; McCulloch, Peter
2015-02-01
Standard operating procedures (SOPs) should improve safety in the operating theatre, but controlled studies evaluating the effect of staff-led implementation are needed. In a controlled interrupted time series, we evaluated three team process measures (compliance with WHO surgical safety checklist, non-technical skills and technical performance) and three clinical outcome measures (length of hospital stay, complications and readmissions) before and after a 3-month staff-led development of SOPs. Process measures were evaluated by direct observation, using Oxford Non-Technical Skills II for non-technical skills and the 'glitch count' for technical performance. All staff in two orthopaedic operating theatres were trained in the principles of SOPs and then assisted to develop standardised procedures. Staff in a control operating theatre underwent the same observations but received no training. The change in difference between active and control groups was compared before and after the intervention using repeated measures analysis of variance. We observed 50 operations before and 55 after the intervention and analysed clinical data on 1022 and 861 operations, respectively. The staff chose to structure their efforts around revising the 'whiteboard' which documented and prompted tasks, rather than directly addressing specific task problems. Although staff preferred and sustained the new system, we found no significant differences in process or outcome measures before/after intervention in the active versus the control group. There was a secular trend towards worse outcomes in the postintervention period, seen in both active and control theatres. SOPs when developed and introduced by frontline staff do not necessarily improve operative processes or outcomes. The inherent tension in improvement work between giving staff ownership of improvement and maintaining control of direction needs to be managed, to ensure staff are engaged but invest energy in appropriate change. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Methods utilized in evaluating the profitability of commercial space processing
NASA Technical Reports Server (NTRS)
Bloom, H. L.; Schmitt, P. T.
1976-01-01
Profitability analysis is applied to commercial space processing on the basis of business concept definition and assessment and the relationship between ground and space functions. Throughput analysis is demonstrated by analysis of the space manufacturing of surface acoustic wave devices. The paper describes a financial analysis model for space processing and provides key profitability measures for space processed isoenzymes.
Statistical Process Control: A Quality Tool for a Venous Thromboembolic Disease Registry.
Posadas-Martinez, Maria Lourdes; Rojas, Liliana Paloma; Vazquez, Fernando Javier; De Quiros, Fernan Bernaldo; Waisman, Gabriel Dario; Giunta, Diego Hernan
2016-01-01
We aim to describe Statistical Process Control as a quality tool for the Institutional Registry of Venous Thromboembolic Disease (IRTD), a registry developed in a community-care tertiary hospital in Buenos Aires, Argentina. The IRTD is a prospective cohort. The process of data acquisition began with the creation of a computerized alert generated whenever physicians requested an imaging or laboratory study to diagnose venous thromboembolism, which defined eligible patients. The process then followed a structured methodology for patient inclusion, evaluation, and subsequent data entry. To control this process, process performance indicators were designed to be measured monthly. These included the number of eligible patients, the number of included patients, median time to patient evaluation, and percentage of patients lost to evaluation. Control charts were graphed for each indicator. The registry was evaluated over 93 months, during which 25,757 patients were reported and 6,798 patients met inclusion criteria. The median time to evaluation was 20 hours (SD, 12) and 7.7% of the total were lost to evaluation. Each indicator presented trends over time, caused by structural changes and improvement cycles, and therefore the center line showed inflections. Statistical process control through process performance indicators allowed us to monitor the performance of the registry over time and to detect systematic problems. We postulate that this approach could be reproduced for other clinical registries.
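The monthly performance indicators described above are typically monitored with control charts whose limits are derived from the data themselves. As a hedged illustration, the sketch below computes limits for an individuals (X-mR) chart, center line plus or minus 2.66 times the mean moving range; the indicator chosen and the example values are assumptions, not the registry's actual data.

```python
import numpy as np

def individuals_chart(values):
    """Control limits for an individuals (X-mR) chart of a monthly indicator.

    Center line = mean of the observations; dispersion is estimated from
    the average moving range, giving UCL/LCL = mean +/- 2.66 * mean(mR).
    Returns the limits and the indices of out-of-control points.
    """
    x = np.asarray(values, dtype=float)
    moving_range = np.abs(np.diff(x))
    center = x.mean()
    ucl = center + 2.66 * moving_range.mean()
    lcl = center - 2.66 * moving_range.mean()
    out_of_control = np.where((x > ucl) | (x < lcl))[0]
    return center, lcl, ucl, out_of_control

# Example: monthly median time-to-evaluation, in hours (illustrative data).
# hours = [20, 18, 22, 19, 21, 24, 20, 35, 19, 18, 21, 22]
# print(individuals_chart(hours))
```

When a structural change shifts the process, as the abstract describes, the limits are recomputed from the data after the shift so that the chart keeps flagging only special causes.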
Experimental study of digital image processing techniques for LANDSAT data
NASA Technical Reports Server (NTRS)
Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.
1976-01-01
The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
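Cubic convolution resampling, mentioned above as an alternative to nearest-neighbor resampling, interpolates each output pixel from its four nearest input pixels using a piecewise cubic kernel. The sketch below uses the textbook kernel (Keys form with a = -0.5) in one dimension; the details of the TRW implementation evaluated in the study are not given in the abstract, so this is an illustrative assumption.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Standard cubic convolution interpolation kernel (Keys, a = -0.5)."""
    x = abs(x)
    if x <= 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
    return 0.0

def resample_line(line, new_x):
    """Resample a 1-D scan line at (fractional) positions new_x.

    Each output sample is a weighted sum of the four nearest input pixels;
    nearest-neighbor resampling would instead take line[round(x)].
    """
    line = np.asarray(line, dtype=float)
    out = np.empty(len(new_x))
    for i, x in enumerate(new_x):
        base = int(np.floor(x))
        value = 0.0
        for k in range(base - 1, base + 3):
            kk = min(max(k, 0), len(line) - 1)   # clamp at the edges
            value += line[kk] * cubic_kernel(x - k)
        out[i] = value
    return out

# Example: shift a scan line by 0.3 pixel (illustrative values).
# print(resample_line([10, 12, 30, 28, 15, 11], np.arange(1, 5) + 0.3))
```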
Analytical method for promoting process capability of shock absorption steel.
Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan
2003-01-01
Mechanical properties and low cycle fatigue are two factors that must be considered in developing new types of steel for shock absorption. Process capability and process control are significant factors in achieving the purpose of research and development programs. Often-used evaluation methods fail to measure process yield and process centering, so this paper uses the Taguchi loss function as a basis to establish an evaluation method, and the steps for assessing the quality of mechanical properties and process control, for an iron and steel manufacturer. This method can serve research and development and the manufacturing industry, and lay a foundation for enhancing process control ability, enabling the selection of manufacturing processes that are more reliable than those chosen with other commonly used methods.
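A widely used way to fold the Taguchi loss function into a capability index, in the spirit of the method described above, is Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2)), which penalises off-target centering as well as spread, precisely the weaknesses of the often-used indices the abstract criticises. The sketch below computes the average quadratic loss and Cpm; it is offered as an illustration of a loss-function-based index, not as the specific index developed in the paper, and the example data are made up.

```python
import math
import statistics

def taguchi_loss(values, target, k=1.0):
    """Average Taguchi quadratic loss  L = k * E[(x - T)^2]."""
    return k * statistics.fmean((x - target) ** 2 for x in values)

def cpm(values, lsl, usl, target):
    """Loss-function-based capability index

        Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))

    which, unlike Cp, penalises off-target centering as well as spread.
    """
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    tau = math.sqrt(sigma**2 + (mu - target) ** 2)
    return (usl - lsl) / (6.0 * tau)

# Example: yield-strength measurements against a specification (made-up data).
# strength = [452, 448, 455, 450, 449, 453, 451, 447]
# print(round(cpm(strength, lsl=440, usl=460, target=450), 2))
```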
Villani, N; Gérard, K; Marchesi, V; Huger, S; François, P; Noël, A
2010-06-01
The first purpose of this study was to illustrate the contribution of statistical process control to improved safety in intensity modulated radiotherapy (IMRT) treatments. This improvement is possible by controlling the dose delivery process, characterized by pretreatment quality control results; it is therefore necessary to bring portal dosimetry measurements under control (the ionisation chamber measurements were already monitored with statistical process control tools). The second objective was to determine whether it is possible to substitute the ionisation chamber with portal dosimetry in order to optimize the time devoted to pretreatment quality control. At Alexis-Vautrin center, pretreatment quality controls in IMRT for prostate and head and neck treatments were performed for each beam of each patient. These controls were made with an ionisation chamber, which is the reference detector for absolute dose measurement, and with portal dosimetry for verification of the dose distribution. Statistical process control is a statistical analysis method, originating in industry, used to control and improve the quality of the studied process. It uses graphical tools such as control charts to follow the process and warn the operator in case of failure, and quantitative tools to evaluate the ability of the process to respect guidelines: the capability study. The study was performed on 450 head and neck beams and on 100 prostate beams. Control charts of the mean and standard deviation were established; they revealed drifts, both slow and weak and strong and fast, and identified a special cause that had been introduced (a manual shift of the leaf gap of the multileaf collimator). The correlation between the dose measured at one point with the EPID and with the ionisation chamber was evaluated at more than 97%, and cases of disagreement between the two measurements were identified. The study demonstrated the feasibility of reducing the time devoted to pretreatment controls by substituting EPID measurements for the ionisation chamber measurements, and showed that statistical process control monitoring of the data provides a safety guarantee. 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
Enterprise Professional Development--Evaluating Learning
ERIC Educational Resources Information Center
Murphy, Gerald A.; Calway, Bruce A.
2010-01-01
Whilst professional development (PD) is an activity required by many regulatory authorities, the value that enterprises obtain from PD is often unknown, particularly when it involves development of knowledge. This paper discusses measurement techniques and processes and provides a review of established evaluation techniques, highlighting…
EVALUATING THE CONDITION OF RIVERINE-RIPARIAN RESOURCES IN THE PACIFIC NORTHWEST
The evaluation of the condition of riverine-riparian resources at regional scales relies on the interpretation of measurements taken on a variety of attributes reflecting both status and processes governing status of these resources. Typical attributes include indicators of upsl...
Assessment of the urban water system with an open, reproducible process applied to Chicago
Urban water systems convey complex environmental and man-made flows. The relationships among water flows and networked storages remains difficult to comprehensively evaluate. Such evaluation is important, however, as interventions are designed (e.g, conservation measures, green...
MEASUREMENT OF BIOAVAILABLE IRON AT TWO HAZARDOUS WASTE SITES
In the past, the concentrations of iron(II) in monitoring wells have been used to evaluate natural attenuation processes at hazardous waste sites. Changes in the aqueous concentrations of electron acceptors/products are important to the evaluation of natural biological attenuation...
Methods of Writing Instruction Evaluation.
ERIC Educational Resources Information Center
Lamb, Bill H.
The Writing Program Director at Johnson County Community College (Kansas) developed quantitative measures for writing instruction evaluation which can support that institution's growing interest in and support for peer collaboration as a means to improving instructional quality. The first process (Interaction Analysis) has an observer measure…
Portfolio Evaluation for Professional Competence: Credentialing in Genetics for Nurses.
ERIC Educational Resources Information Center
Cook, Sarah Sheets; Kase, Ron; Middelton, Lindsay; Monsen, Rita Black
2003-01-01
Describes the process used by the Credentialing Committee of the International Society of Nurses in Genetics to validate evaluation criteria for nursing portfolios using neural network programs. Illustrates how standards are translated into measurable competencies and provides a scoring guide. (SK)
Brniak, Witold; Jachowicz, Renata; Krupa, Anna; Skorka, Tomasz; Niwinski, Krzysztof
2013-01-01
The compendial method of evaluation of orodispersible tablets (ODT) is the same disintegration test as for conventional tablets. Since it does not reflect the disintegration process in the oral cavity, alternative methods have been proposed that are more related to in vivo conditions, e.g. a modified dissolution paddle apparatus, a texture analyzer, a rotating shaft apparatus, CCD camera application, or wetting time and water absorption ratio measurement. In this study, three different co-processed excipients for direct compression of orally disintegrating tablets were compared (Ludiflash, Pharmaburst, F-Melt). The properties of the prepared tablets, such as tensile strength, friability, wetting time and water absorption ratio, were evaluated. Disintegration time was measured using the pharmacopoeial method and a novel apparatus constructed by the authors. The apparatus was based on the idea of Narazaki et al.; however, it was modified. Magnetic resonance imaging (MRI) was applied for the analysis of the disintegration mechanism of the prepared tablets. The research showed a significant effect of excipients, compression force, and the temperature, volume and kind of medium on the disintegration process. The novel apparatus features better correlation of disintegration time with in vivo results (R² = 0.9999) than the compendial method (R² = 0.5788), and provides additional information on the disintegration process, e.g. swelling properties.
NASA Astrophysics Data System (ADS)
Abe, R.; Hamada, K.; Hirata, N.; Tamura, R.; Nishi, N.
2015-05-01
As with BIM-based quality management in the construction industry, there is strong demand in the shipbuilding field for quality management of the manufacturing process of each member, and in particular for accurately capturing the three-dimensional deformation produced at each process step as a time series. In this study, focusing on shipbuilding, a three-dimensional measurement method is examined. In a shipyard, large equipment and components are intricately arranged in a limited space, so the placement of measuring equipment and targets is restricted. In addition, the elements to be measured move between process steps, so establishing reference points for time-series comparison requires careful planning. This paper discusses a method for measuring welding deformation in time series using a total station. In particular, the amount of deformation at each process step is evaluated using multiple measurements obtained with this approach.
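A total station observes each target as a horizontal angle, a zenith angle and a slope distance, so tracking welding deformation over time amounts to converting each epoch's polar observations into local 3-D coordinates, linking them through common reference points, and differencing epochs. The sketch below shows that conversion and differencing under assumed conventions (azimuth measured from the +Y axis, zenith from the vertical); the station setup and example readings are illustrative, not data from the shipyard study.

```python
import math

def polar_to_xyz(slope_distance, horizontal_angle_deg, zenith_angle_deg,
                 station=(0.0, 0.0, 0.0)):
    """Convert a total-station observation into local 3-D coordinates.

    Assumes the horizontal angle is an azimuth from the +Y (north) axis
    and the zenith angle is measured down from the vertical.
    """
    az = math.radians(horizontal_angle_deg)
    z = math.radians(zenith_angle_deg)
    horiz = slope_distance * math.sin(z)          # horizontal distance
    dx = horiz * math.sin(az)
    dy = horiz * math.cos(az)
    dz = slope_distance * math.cos(z)             # height difference
    x0, y0, z0 = station
    return (x0 + dx, y0 + dy, z0 + dz)

def deformation(before, after):
    """3-D displacement of a target between two measurement epochs."""
    return tuple(a - b for a, b in zip(after, before))

# Example: the same reference-linked target observed before and after welding.
# p0 = polar_to_xyz(12.345, 41.20, 88.75)
# p1 = polar_to_xyz(12.347, 41.21, 88.74)
# print(deformation(p0, p1))
```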
The relationship of post-event processing to self-evaluation of performance in social anxiety.
Brozovich, Faith; Heimberg, Richard G
2011-06-01
Socially anxious and control participants engaged in a social interaction with a confederate and then wrote about themselves or the other person (i.e., self-focused post-event processing [SF-PEP] vs. other-focused post-event processing [OF-PEP]) and completed several questionnaires. One week later, participants completed measures concerning their evaluation of their performance in the social interaction and the degree to which they engaged in post-event processing (PEP) during the week. Socially anxious individuals evaluated their performance in the social interaction more poorly than control participants, both immediately after and 1 week later. Socially anxious individuals assigned to the SF-PEP condition displayed fewer positive feelings about their performance compared to the socially anxious individuals in the OF-PEP condition as well as controls in either condition. Also, the trait tendency to engage in PEP moderated the effect of social anxiety on participants' evaluation of their performance in the interaction, such that high socially anxious individuals with high trait PEP scores evaluated themselves in the interaction more negatively at the later assessment. These results suggest that PEP and other self-evaluative processes may perpetuate the cycle of social anxiety. Copyright © 2011. Published by Elsevier Ltd.
Finney, John W; Humphreys, Keith; Kivlahan, Daniel R; Harris, Alex H S
2016-04-01
Studies finding weak or nonexistent relationships between hospital performance on providing recommended care and hospital-level clinical outcomes raise questions about the value and validity of process of care performance measures. Such findings may cause clinicians to question the effectiveness of the care process presumably captured by the performance measure. However, one cannot infer from hospital-level results whether patients who received the specified care had comparable, worse or superior outcomes relative to patients not receiving that care. To make such an inference has been labeled the "ecological fallacy," an error that is well known among epidemiologists and sociologists, but less so among health care researchers and policy makers. We discuss such inappropriate inferences in the health care performance measurement field and illustrate how and why process measure-outcome relationships can differ at the patient and hospital levels. We also offer recommendations for appropriate multilevel analyses to evaluate process measure-outcome relationships at the patient and hospital levels and for a more effective role for performance measure bodies and research funding organizations in encouraging such multilevel analyses.
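The ecological-fallacy argument above can be made concrete with a small simulation: even when the recommended care process benefits every patient who receives it, hospital-level correlations between compliance rates and mean outcomes can be attenuated or reversed when case mix is confounded with compliance. The sketch below is purely illustrative; the effect sizes, the confounding structure and the number of hospitals are assumptions, not estimates from any study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_hospitals, n_patients = 30, 200
hospital_data = []

for _ in range(n_hospitals):
    # Hospitals with a sicker case mix are assumed to comply more often.
    case_mix = rng.normal()
    p_comply = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * case_mix)))
    received = (rng.random(n_patients) < p_comply).astype(float)
    # Patient-level truth: the care process helps (+0.5), sickness hurts (-1.0).
    outcome = 0.5 * received - 1.0 * case_mix + rng.normal(0.0, 1.0, n_patients)
    hospital_data.append((received, outcome))

# Patient-level (within-hospital) association: positive, as the process helps.
within_r = np.mean([np.corrcoef(r, o)[0, 1]
                    for r, o in hospital_data if 0.0 < r.mean() < 1.0])

# Hospital-level association between compliance rate and mean outcome:
# attenuated or even reversed, because case mix is confounded with compliance.
compliance = np.array([r.mean() for r, _ in hospital_data])
mean_outcome = np.array([o.mean() for _, o in hospital_data])
between_r = np.corrcoef(compliance, mean_outcome)[0, 1]

print(f"within-hospital r = {within_r:+.2f}, hospital-level r = {between_r:+.2f}")
```

A multilevel model fitted to the patient-level data, with a hospital-level term for compliance, is the kind of analysis the article recommends for separating these two levels.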
Analysis of a document/reporting system
NASA Technical Reports Server (NTRS)
Narrow, B.
1971-01-01
An in-depth analysis of the information system within the Data Processing Branch is presented. Quantitative measures are used to evaluate the efficiency and effectiveness of the information system. It is believed that this is the first documented study which utilizes quantitative measures for full scale system analysis. The quantitative measures and techniques for collecting and qualifying the basic data, as described, are applicable to any information system. Therefore this report is considered to be of interest to any persons concerned with the management design, analysis or evaluation of information systems.
Kaluzny, Bartlomiej J; Szkulmowski, Maciej; Bukowska, Danuta M; Wojtkowski, Maciej
2014-04-01
We evaluate Spectral OCT (SOCT) with a speckle contrast reduction technique using a resonant scanner for assessment of corneal surface changes after excimer laser photorefractive keratectomy (PRK), and we compare the healing process between conventional PRK and transepithelial PRK (TransPRK). The measurements were performed before and after the surgery. The obtained results show that SOCT with resonant-scanner speckle contrast reduction is capable of providing information regarding the healing process after PRK. The main difference between the healing processes of PRK and TransPRK, as assessed by SOCT, was the time to cover the stroma with epithelium, which was shorter in the TransPRK group.
NASA Technical Reports Server (NTRS)
Perlwitz, J. P.; Garcia-Pando, C. Perez; Miller, R. L.
2015-01-01
A global compilation of nearly sixty measurement studies is used to evaluate two methods of simulating the mineral composition of dust aerosols in an Earth system model. Both methods are based upon a Mean Mineralogical Table (MMT) that relates the soil mineral fractions to a global atlas of arid soil type. The Soil Mineral Fraction (SMF) method assumes that the aerosol mineral fractions match the fractions of the soil. The MMT is based upon soil measurements after wet sieving, a process that destroys aggregates of soil particles that would have been emitted from the original, undisturbed soil. The second method approximately reconstructs the emitted aggregates. This model is referred to as the Aerosol Mineral Fraction (AMF) method because the mineral fractions of the aerosols differ from those of the wet-sieved parent soil, partly due to reaggregation. The AMF method remedies some of the deficiencies of the SMF method in comparison to observations. Only the AMF method exhibits phyllosilicate mass at silt sizes, where they are abundant according to observations. In addition, the AMF quartz fraction of silt particles is in better agreement with measured values, in contrast to the overestimated SMF fraction. Measurements at distinct clay and silt particle sizes are shown to be more useful for evaluation of the models, in contrast to the sum over all particles sizes that is susceptible to compensating errors, as illustrated by the SMF experiment. Model errors suggest that allocation of the emitted silt fraction of each mineral into the corresponding transported size categories is an important remaining source of uncertainty. Evaluation of both models and the MMT is hindered by the limited number of size-resolved measurements of mineral content that sparsely sample aerosols from the major dust sources. The importance of climate processes dependent upon aerosol mineral composition shows the need for global and routine mineral measurements.
Perlwitz, J. P.; Perez Garcia-Pando, C.; Miller, R. L.
2015-10-21
A global compilation of nearly sixty measurement studies is used to evaluate two methods of simulating the mineral composition of dust aerosols in an Earth system model. Both methods are based upon a Mean Mineralogical Table (MMT) that relates the soil mineral fractions to a global atlas of arid soil type. The Soil Mineral Fraction (SMF) method assumes that the aerosol mineral fractions match the fractions of the soil. The MMT is based upon soil measurements after wet sieving, a process that destroys aggregates of soil particles that would have been emitted from the original, undisturbed soil. The second method approximately reconstructs the emitted aggregates. This model is referred to as the Aerosol Mineral Fraction (AMF) method because the mineral fractions of the aerosols differ from those of the wet-sieved parent soil, partly due to reaggregation. The AMF method remedies some of the deficiencies of the SMF method in comparison to observations. Only the AMF method exhibits phyllosilicate mass at silt sizes, where they are abundant according to observations. In addition, the AMF quartz fraction of silt particles is in better agreement with measured values, in contrast to the overestimated SMF fraction. Measurements at distinct clay and silt particle sizes are shown to be more useful for evaluation of the models, in contrast to the sum over all particles sizes that is susceptible to compensating errors, as illustrated by the SMF experiment. Model errors suggest that allocation of the emitted silt fraction of each mineral into the corresponding transported size categories is an important remaining source of uncertainty. Evaluation of both models and the MMT is hindered by the limited number of size-resolved measurements of mineral content that sparsely sample aerosols from the major dust sources. In conclusion, the importance of climate processes dependent upon aerosol mineral composition shows the need for global and routine mineral measurements.
ERIC Educational Resources Information Center
Hussey, David L.; Flannery, Daniel J.
2007-01-01
In 2004, Second Step (Committee for Children, 2002), a violence prevention program, was implemented in the Cleveland Heights-University Heights school district for 1,416 K through second grade students. Both process and outcome measures were used to evaluate program impact and examine issues related to the implementation and evaluation of…
Teacher Performance: Do We Know What We are Evaluating?
ERIC Educational Resources Information Center
Goldbas, Mervyn; And Others
This study was designed to provide the teacher trainers at State University College, Fredonia, New York with information to identify the actual criteria upon which student teachers were being evaluated and to provide a basis for altering the evaluation process so that it would measure more validly the degree to which objectives of the field…
Students' views on the block evaluation process: A descriptive analysis.
Pakkies, Ntefeleng E; Mtshali, Ntombifikile G
2016-03-30
Higher education institutions have implemented policies and practices intended to determine and promote good teaching. Students' evaluation of the teaching and learning process is seen as one measure of evaluating the quality and effectiveness of instruction and courses. Policies and procedures guiding this process are discernible in universities, but this is often not the case for nursing colleges. To analyse and describe the views of nursing students on block evaluation, and how feedback obtained from this process was managed. A quantitative descriptive study was conducted amongst nursing students (n = 177) in their second to fourth year of training from one nursing college in KwaZulu-Natal. A questionnaire was administered by the researcher and data were analysed using the Statistical Package for the Social Sciences Version 19.0. The response rate was 145 (81.9%). The participants perceived the aim of block evaluation as improving the quality of teaching and enhancing their experiences as students. They questioned the significance of their input as stakeholders given that they had never been consulted about the development or review of the evaluation tool, or the administration process; and they often did not receive feedback from the evaluation they participated in. The college management should develop a clear organisational structure with supporting policies and operational guidelines for administering the evaluation process. The administration, implementation procedures, reporting of results and follow-up mechanisms should be made transparent and communicated to all concerned. Reports and actions related to these evaluations should provide feedback into relevant courses or programmes.
NASA Astrophysics Data System (ADS)
Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao
Business process modeling (BPM) is gaining attention as a means of analysing and improving business processes. BPM analyses the current business process as an AS-IS model and solves problems to improve the current business; moreover, it aims to create a value-producing business process as a TO-BE model. However, research on techniques that seamlessly connect the business process improvement obtained by BPM to the implementation of the information system is rarely reported. If the business model obtained by BPM is converted into UML and the implementation is carried out with UML techniques, an improvement in the efficiency of information system implementation can be expected. In this paper, we describe a system development method that converts the process model obtained by BPM into UML; the method is evaluated by modeling a prototype of a parts procurement system. In the evaluation, a comparison is made with the case where the system is implemented by the conventional UML technique without going via BPM.
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-01-01
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-03-03
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Michael; Dietsch, Niko
2018-01-01
This guide describes frameworks for evaluation, measurement, and verification (EM&V) of utility customer–funded energy efficiency programs. The authors reviewed multiple frameworks across the United States and gathered input from experts to prepare this guide. This guide provides the reader with both the contents of an EM&V framework, along with the processes used to develop and update these frameworks.
Evaluation of fuzzy inference systems using fuzzy least squares
NASA Technical Reports Server (NTRS)
Barone, Joseph M.
1992-01-01
Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.
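As a rough illustration of the regression idea described above (not the author's implementation), the sketch below fits desired outputs to inferred outputs when both are represented as symmetric triangular fuzzy numbers given by a centre and a spread, using a Diamond-style fuzzy least-squares distance; the function names and the toy data are assumptions.

```python
# Illustrative sketch only: fuzzy least-squares regression of desired outputs onto
# inferred outputs, with symmetric triangular fuzzy numbers given as (center, spread).
# The distance metric and regression form are assumptions (Diamond-style), not
# necessarily the formulation used in the paper.
import numpy as np
from scipy.optimize import minimize

def ftn_distance_sq(c1, s1, c2, s2):
    """Squared distance between symmetric triangular fuzzy numbers (center, spread)."""
    return (c1 - c2) ** 2 + ((c1 - s1) - (c2 - s2)) ** 2 + ((c1 + s1) - (c2 + s2)) ** 2

def fuzzy_least_squares(x_c, x_s, y_c, y_s):
    """Fit y ~ a*x + b by minimising the summed fuzzy distance."""
    def loss(params):
        a, b = params
        return np.sum(ftn_distance_sq(y_c, y_s, a * x_c + b, abs(a) * x_s))
    res = minimize(loss, x0=[1.0, 0.0])
    return res.x, loss(res.x)

# Desired outputs (centers, spreads) regressed onto inferred outputs (toy values).
inferred_c = np.array([0.2, 0.5, 0.8, 1.1])
inferred_s = np.array([0.05, 0.05, 0.10, 0.10])
desired_c = np.array([0.25, 0.45, 0.85, 1.05])
desired_s = np.array([0.05, 0.05, 0.05, 0.05])

(a, b), total = fuzzy_least_squares(inferred_c, inferred_s, desired_c, desired_s)
local = ftn_distance_sq(desired_c, desired_s, a * inferred_c + b, abs(a) * inferred_s)
print(f"slope={a:.3f} intercept={b:.3f}")       # global: ideally a close to 1, b close to 0
print("per-point misfit:", np.round(local, 4))  # local: large values flag poorly fitted regions
```

Here the slope/intercept pair plays the role of a global measure, while the per-point misfit acts as the local measure that points at regions of poor fit.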
Rodriguez, Hayley; Kissell, Kellie; Lucas, Lloyd; Fisak, Brian
2017-11-01
Although negative beliefs have been found to be associated with worry symptoms and depressive rumination, negative beliefs have yet to be examined in relation to post-event processing and social anxiety symptoms. The purpose of the current study was to examine the psychometric properties of the Negative Beliefs about Post-Event Processing Questionnaire (NB-PEPQ). A large, non-referred undergraduate sample completed the NB-PEPQ along with validation measures, including a measure of post-event processing and social anxiety symptoms. Based on factor analysis, a single-factor model was obtained, and the NB-PEPQ was found to exhibit good validity, including positive associations with measures of post-event processing and social anxiety symptoms. These findings add to the literature on the metacognitive variables that may lead to the development and maintenance of post-event processing and social anxiety symptoms, and have relevant clinical applications.
[INVITED] Evaluation of process observation features for laser metal welding
NASA Astrophysics Data System (ADS)
Tenner, Felix; Klämpfl, Florian; Nagulin, Konstantin Yu.; Schmidt, Michael
2016-06-01
In the present study we show how fast the fluid dynamics change when the laser power is changed for different feed rates during laser metal welding. Using two high-speed cameras and a data acquisition system, we determine how fast the process must be imaged to measure the fluid dynamics with very high certainty. Our experiments show that not all process features that can be measured during laser welding represent the process behavior equally well. Despite the good visibility of the vapor plume, monitoring its movement is less suitable as an input signal for a closed-loop control. The features measured inside the keyhole show a good correlation with changes of the process parameters. Due to its low noise, the area of the keyhole opening is well suited as an input signal for a closed-loop control of the process.
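As a hedged sketch of how such a keyhole-opening signal might be extracted frame by frame (the authors' actual image-processing chain is not described in the abstract), the following thresholds a grayscale high-speed frame and reports the area of the largest bright region; the threshold rule and the synthetic frame are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' pipeline): estimate the keyhole opening
# area in a high-speed camera frame by simple thresholding and connected-component
# labelling. Assumes the keyhole opening appears as the brightest region.
import numpy as np
from scipy import ndimage

def keyhole_area(frame: np.ndarray, rel_threshold: float = 0.8) -> int:
    """Return the pixel area of the largest bright region in a grayscale frame."""
    mask = frame > rel_threshold * frame.max()      # keep the brightest pixels
    labels, n = ndimage.label(mask)                 # connected components
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(sizes.max())                         # area of the largest blob

# Example with a synthetic 64x64 frame containing one bright spot.
frame = np.zeros((64, 64))
frame[30:36, 28:35] = 1.0
print(keyhole_area(frame))   # -> 42 pixels; this signal would be tracked frame by frame
```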
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
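For orientation, a minimal unweighted version of pixel-based recall, precision, and F-measure is sketched below; the paper's contribution is the weighting scheme applied on top of these counts, which is not reproduced here, and the masks are synthetic.

```python
# Minimal sketch of plain pixel-based binarization evaluation. The paper's method adds
# a weighting scheme to recall and precision; this unweighted version is only the
# baseline on which such weights would be applied. Arrays are boolean masks where
# True marks text (foreground) pixels.
import numpy as np

def pixel_prf(ground_truth: np.ndarray, result: np.ndarray):
    tp = np.logical_and(ground_truth, result).sum()
    fp = np.logical_and(~ground_truth, result).sum()
    fn = np.logical_and(ground_truth, ~result).sum()
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_measure = (2 * recall * precision / (recall + precision)
                 if recall + precision else 0.0)
    return recall, precision, f_measure

gt = np.zeros((100, 100), dtype=bool); gt[40:60, 10:90] = True   # synthetic "text"
out = np.zeros_like(gt);               out[42:60, 12:90] = True  # imperfect binarization
print(pixel_prf(gt, out))
```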
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2005-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
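The abstract does not detail the photogrammetric computation, but a common way to obtain pitch, roll, and yaw once target points have been triangulated is a Kabsch-style rotation fit between a wind-off reference set and the current set; the sketch below is such an illustration, with the axis convention and data being assumptions rather than the NASA system's algorithm.

```python
# Illustrative sketch (not the NASA system's algorithm): recover model attitude from
# 3-D coordinates of tracked targets, assuming the multi-camera step has already
# produced target positions. A Kabsch-style SVD fit gives the rotation from the
# wind-off reference set to the current set, from which pitch/roll/yaw are extracted.
import numpy as np

def attitude_from_points(ref: np.ndarray, cur: np.ndarray):
    """ref, cur: (N, 3) target coordinates. Returns (pitch, roll, yaw) in degrees."""
    A = ref - ref.mean(axis=0)
    B = cur - cur.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T          # best-fit rotation, ref -> cur
    # Z-Y-X (yaw-pitch-roll) Euler extraction; the axis convention is an assumption.
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, roll, yaw

# Quick check with a synthetic 5-degree pitch rotation.
ref = np.array([[0.3, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.1], [0.1, 0.1, 0.0]])
a = np.radians(5.0)
Ry = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
print(np.round(attitude_from_points(ref, ref @ Ry.T), 2))   # approximately (5.0, 0.0, 0.0)
```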
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2004-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
Object-oriented software for evaluating measurement uncertainty
NASA Astrophysics Data System (ADS)
Hall, B. D.
2013-05-01
An earlier publication (Hall 2006 Metrologia 43 L56-61) introduced the notion of an uncertain number that can be used in data processing to represent quantity estimates with associated uncertainty. The approach can be automated, allowing data processing algorithms to be decomposed into convenient steps, so that complicated measurement procedures can be handled. This paper illustrates the uncertain-number approach using several simple measurement scenarios and two different software tools. One is an extension library for Microsoft Excel®. The other is a special-purpose calculator using the Python programming language.
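The software tools mentioned in the abstract are not reproduced here; the following toy class only illustrates the uncertain-number idea itself: carrying a value and a standard uncertainty through arithmetic with first-order (GUM-style) propagation, assuming independent inputs and ignoring the correlation tracking that a full implementation would provide.

```python
# Toy illustration of the uncertain-number idea (not the software described in the
# paper): propagate a value and its standard uncertainty through arithmetic using
# first-order GUM-style propagation, assuming independent inputs.
import math

class UncertainNumber:
    def __init__(self, value: float, u: float):
        self.value, self.u = value, u

    def __add__(self, other):
        return UncertainNumber(self.value + other.value,
                               math.hypot(self.u, other.u))

    def __mul__(self, other):
        v = self.value * other.value
        u = abs(v) * math.hypot(self.u / self.value, other.u / other.value)
        return UncertainNumber(v, u)

    def __repr__(self):
        return f"{self.value:.4g} +/- {self.u:.2g}"

# Example: electrical power from measured voltage and current, P = V * I.
V = UncertainNumber(5.000, 0.002)
I = UncertainNumber(0.1000, 0.0005)
print(V * I)   # power with propagated standard uncertainty
```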
Measuring Outcomes in Children's Services.
ERIC Educational Resources Information Center
Christner, Anne Marshall, Ed.
Outcomes evaluation can provide program managers and clinical directors in child welfare, juvenile justice, child mental health, and child protective services the necessary tools for program quality assurance and accountability. This guide describes the outcomes evaluation process and provides a summary of articles and reports detailing current…
Using an evidence-based approach to measure outcomes in clinical practice.
MacDermid, Joy C; Grewal, Ruby; MacIntyre, Norma J
2009-02-01
Evaluation of the outcome of evidence-based practice decisions in individual patients or patient groups is step five in the evidence-based practice approach. Outcome measures are any measures that reflect patient status. Status or outcome measures can be used to detect change over time (eg, treatment effects), to discriminate among clinical groups, or to predict future outcomes (eg, return to work). A variety of reliable and valid physical impairment and disability measures are available to assess treatment outcomes in hand surgery and therapy. Evidence from research studies that includes normative data, standard error of measurement, or comparative scores for important clinical subgroups can be used to set treatment goals, monitor recovery, and compare individual patient outcomes to those reported in the literature. Clinicians tend to rely on impairment measures, such as radiographic measures, grip strength, and range of motion, although self-report measures are known to be equally reliable and more related to global effects, such as return-to-work. The process of selecting and implementing outcome measures is crucial. This process works best when team members are involved and willing to trial new measures. In this way, the team can develop customized outcome assessment procedures that meet their needs for assessing individual patients and providing data for program evaluation.
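As a small worked illustration of how normative data and the standard error of measurement mentioned above can be used in practice (the numbers are hypothetical, not from the article), the snippet below converts a reliability coefficient and a normative standard deviation into a standard error of measurement and a 95% minimal detectable change for grip strength.

```python
# Hedged illustration with hypothetical numbers: turning reliability data into a
# standard error of measurement (SEM) and a minimal detectable change (MDC95) for
# monitoring an individual patient's grip strength.
import math

sd_norm = 8.0          # hypothetical between-subject SD of grip strength (kg)
reliability = 0.95     # hypothetical test-retest reliability (ICC)

sem = sd_norm * math.sqrt(1.0 - reliability)   # standard error of measurement
mdc95 = 1.96 * math.sqrt(2.0) * sem            # smallest change beyond measurement error
print(f"SEM = {sem:.2f} kg, MDC95 = {mdc95:.2f} kg")
# A follow-up change smaller than MDC95 is within measurement error.
```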
Diabetes Self-Management Care via Cell Phone: A Systematic Review
Krishna, Santosh; Boren, Suzanne Austin
2008-01-01
Background The objective of this study was to evaluate the evidence on the impact of cell phone interventions for persons with diabetes and/or obesity in improving health outcomes and/or processes of care for persons with diabetes and/or obesity. Methods We searched Medline (1966–2007) and reviewed reference lists from included studies and relevant reviews to identify additional studies. We extracted descriptions of the study design, sample size, patient age, duration of study, technology, educational content and delivery environment, intervention and control groups, process and outcome measures, and statistical significance. Results In this review, we included 20 articles, representing 18 studies, evaluating the use of a cell phone for health information for persons with diabetes or obesity. Thirteen of 18 studies measured health outcomes and the remaining 5 studies evaluated processes of care. Outcomes were grouped into learning, behavior change, clinical improvement, and improved health status. Nine out of 10 studies that measured hemoglobin A1c reported significant improvement among those receiving education and care support. Cell phone and text message interventions increased patient–provider and parent–child communication and satisfaction with care. Conclusions Providing care and support with cell phones and text message interventions can improve clinically relevant diabetes-related health outcomes by increasing knowledge and self-efficacy to carry out self-management behaviors. PMID:19885219
Diabetes self-management care via cell phone: a systematic review.
Krishna, Santosh; Boren, Suzanne Austin
2008-05-01
The objective of this study was to evaluate the evidence on the impact of cell phone interventions for persons with diabetes and/or obesity in improving health outcomes and/or processes of care for persons with diabetes and/or obesity. We searched Medline (1966-2007) and reviewed reference lists from included studies and relevant reviews to identify additional studies. We extracted descriptions of the study design, sample size, patient age, duration of study, technology, educational content and delivery environment, intervention and control groups, process and outcome measures, and statistical significance. In this review, we included 20 articles, representing 18 studies, evaluating the use of a cell phone for health information for persons with diabetes or obesity. Thirteen of 18 studies measured health outcomes and the remaining 5 studies evaluated processes of care. Outcomes were grouped into learning, behavior change, clinical improvement, and improved health status. Nine out of 10 studies that measured hemoglobin A1c reported significant improvement among those receiving education and care support. Cell phone and text message interventions increased patient-provider and parent-child communication and satisfaction with care. Providing care and support with cell phones and text message interventions can improve clinically relevant diabetes-related health outcomes by increasing knowledge and self-efficacy to carry out self-management behaviors.
Nielsen, Niels Peter; Wiig, Elisabeth H; Bäck, Svante; Gustafsson, Jan
2017-05-01
Treatment responses to methylphenidate by adults with ADHD are generally monitored against DSM-IV/DSM-V symptomatology, rating scales or interviews during reviews. To evaluate the use of single- and dual-dimension processing-speed and efficiency measures to monitor the effects of pharmacological treatment with methylphenidate after a short period off medication. A Quick Test of Cognitive Speed (AQT) monitored the effects of immediate-release methylphenidate in 40 previously diagnosed and medicated adults with ADHD. Processing speed was evaluated with prior prescription medication, without medication after a 2-day period off ADHD medication, and with low-dose (10/20 mg) and high-dose (20/40 mg) methylphenidate hydrochloride (Medikinet IR). Thirty-three participants responded to the experimental treatments. One-way ANOVA with post-hoc analysis (Scheffe) indicated significant main effects for single dimension colour and form and dual-dimension colour-form naming. Post-hoc analysis indicated statistical differences between the no- and high-dose medication conditions for colour and form, measures of perceptual speed. For colour-form naming, a measure of cognitive speed, there was a significant difference between no- and low-dose medication and between no- and high-dose medications, but not between low- and high-dose medications. Results indicated that the AQT tests effectively monitored incremental effects of the methylphenidate dose on processing speed after a 2-day period off medication. Thus, perceptual (colour and form) and cognitive speed (two-dimensional colour-form naming) and processing efficiency (lowered shift costs) increased measurably with high-dose medication. These preliminary findings warrant validation with added measures of associated behavioural and cognitive changes.
Continuous Evaluation in Ethics Education: A Case Study.
McIntosh, Tristan; Higgs, Cory; Mumford, Michael; Connelly, Shane; DuBois, James
2018-04-01
A great need for systematic evaluation of ethics training programs exists. Those tasked with developing an ethics training program may be quick to dismiss the value of training evaluation in continuous process improvement. In the present effort, we use a case study approach to delineate how to leverage formative and summative evaluation measures to create a high-quality ethics education program. With regard to formative evaluation, information bearing on trainee reactions, qualitative data from the comments of trainees, in addition to empirical findings, can ensure that the training program operates smoothly. Regarding summative evaluation, measures examining trainee cognition, behavior, and organization-level results provide information about how much trainees have changed as a result of taking the ethics training. The implications of effective training program evaluation are discussed.
1997-02-06
This retrospective study analyzes relationships of variables to adjudication and processing duration in the Army...Package for Social Scientists (SPSS), Standard Version 6.1, June 1994, to determine relationships among the dependent and independent variables... consanguinity between variables. Content and criterion validity is employed to determine the measure of scientific validity. Reliability is also...
Nondestructive evaluation of Bakwan paddy grains moisture content by means of spectrophotometry
NASA Astrophysics Data System (ADS)
Makky, M.; Putry, R. E.; Nakano, K.; Santosa
2018-03-01
Paddy grain moisture content (MC) is strongly correlated with the physical properties of rice after milling. Incorrect MC causes a higher percentage of broken rice and makes the grains more fragile. In general, paddy grains with 13–14% MC are ideal for post-harvest processing. The objective of this study was to measure the MC of intact paddy grains of cv. Bakwan by non-destructive evaluation using NIR spectral assessment. Paddy grain samples with identical MC were placed in 30 mm glass tubes and measured with an NIR spectrophotometer. Absorbance was recorded over the 1000–2500 nm range. The grains' actual MC was then measured by the primary method, i.e. the weight-based oven method. The spectral data were processed by Principal Component Analysis (PCA) before being correlated with MC by the Partial Least Squares (PLS) method. The calibration model obtained a correlation (r) of 0.983 and an RMSEC of 1.684. Model validation produced a correlation (r) of 0.973, an RMSEP of 2.095, and a bias of 0.2, indicating that the MC of paddy grains can be precisely determined by non-destructive evaluation using spectral analysis.
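A minimal sketch of the chemometric step (PCA inspection followed by PLS calibration against the oven-reference MC) is given below; it is not the authors' exact pipeline, the spectral preprocessing is omitted, and the spectra and moisture values are synthetic placeholders.

```python
# Minimal sketch of the chemometric step (not the authors' exact pipeline): relate NIR
# absorbance spectra (1000-2500 nm) to oven-reference moisture content with PCA for
# inspection and PLS regression for calibration. Data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))            # 60 samples x 300 spectral points (placeholder)
y = rng.uniform(11.0, 16.0, size=60)      # oven-method moisture content, % (placeholder)

X_scores = PCA(n_components=5).fit_transform(X)    # exploratory inspection of the spectra
pls = PLSRegression(n_components=5).fit(X, y)      # calibration model
y_hat = pls.predict(X).ravel()

r = np.corrcoef(y, y_hat)[0, 1]
rmsec = np.sqrt(mean_squared_error(y, y_hat))
print(f"calibration r = {r:.3f}, RMSEC = {rmsec:.3f}")
# In practice RMSEP and bias would come from an independent validation set.
```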
Brandl, Katharina; Mandel, Jess; Winegarden, Babbi
2017-02-01
Most medical schools use online systems to gather student feedback on the quality of their educational programmes and services. Online data may be limiting, however, as the course directors cannot question the students about written comments, nor can students engage in mutual problem-solving dialogue with course directors. We describe the implementation of a student evaluation team (SET) process to permit course directors and students to gather shortly after courses end to engage in feedback and problem solving regarding the course and course elements. Approximately 16 students were randomly selected to participate in each SET meeting, along with the course director, academic deans and other faculty members involved in the design and delivery of the course. An objective expert facilitates the SET meetings. SETs are scheduled for each of the core courses and threads that occur within the first 2 years of medical school, resulting in approximately 29 SETs annually. SET-specific satisfaction surveys submitted by students (n = 76) and course directors (n = 16) in 2015 were used to evaluate the SET process itself. Survey data were collected from 885 students (2010-2015), which measured student satisfaction with the overall evaluation process before and after the implementation of SETs. Students and course directors valued the SET process itself as a positive experience. Students felt that SETs allowed their voices to be heard, and that the SET increased the probability of suggested changes being implemented. Students' satisfaction with the overall evaluation process significantly improved after implementation of the SET process. Our data suggest that the SET process is a valuable way to supplement online evaluation systems and to increase students' and faculty members' satisfaction with the evaluation process. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Alves, Paula Cristina Gomes; Sales, Célia Maria Dias; Ashworth, Mark
2016-07-19
The involvement of service users in health care provision in general, and specifically in substance use disorder treatment, is of growing importance. This paper explores the views of patients in a therapeutic community for alcohol dependence about clinical assessment, including general aspects about the evaluation process, and the specific characteristics of four measures: two individualised and two standardised. A focus group was conducted and data were analysed using a framework synthesis approach. Service users welcomed the experience of clinical assessment, particularly when conducted by therapists. The duration of the evaluation process was seen as satisfactory and most of its contents were regarded as relevant for their population. Regarding the evaluation measures, patients diverged in their preferences for delivery formats (self-report vs. interview). Service users enjoyed the freedom given by individualised measures to discuss topics of their own choosing. However, they felt that part of the standardised questions were difficult to answer, inadequate (e.g. quantification of health status in 0-20 points) and sensitive (e.g. suicide-related issues), particularly for pre-treatment assessments. Patients perceived clinical assessment as helpful for their therapeutic journey, including the opportunity to reflect about their problems, either related or unrelated to alcohol use. Our study suggests that patients prefer to have evaluation protocols administered by therapists, and that measures should ideally be flexible in their formats to accommodate for patient preferences and needs during the evaluation.
Community outreach: from measuring the difference to making a difference with health information*
Ottoson, Judith M.; Green, Lawrence W.
2005-01-01
Background: Community-based outreach seeks to move libraries beyond their traditional institutional boundaries to improve both access to and effectiveness of health information. The evaluation of such outreach needs to involve the community in assessing the program's process and outcomes. Purpose: Evaluation of community-based library outreach programs benefits from a participatory approach. To explain this premise of the paper, three components of evaluation theory are paired with relevant participatory strategies. Concepts: The first component of evaluation theory is also a standard of program evaluation: use. Evaluation is intended to be useful for stakeholders to make decisions. A useful evaluation is credible, timely, and of adequate scope. Participatory approaches to increase use of evaluation findings include engaging end users early in planning the program itself and in deciding on the outcomes of the evaluation. A second component of evaluation theory seeks to understand what is being evaluated, such as specific aspects of outreach programs. A transparent understanding of the ways outreach achieves intended goals, its activities and linkages, and the context in which it operates precedes any attempt to measure it. Participatory approaches to evaluating outreach include having end users, such as health practitioners in other community-based organizations, identify what components of the outreach program are most important to their work. A third component of evaluation theory is concerned with the process by which value is placed on outreach. What will count as outreach success or failure? Who decides? Participatory approaches to valuing include assuring end-user representation in the formulation of evaluation questions and in the interpretation of evaluation results. Conclusions: The evaluation of community-based outreach is a complex process that is not made easier by a participatory approach. Nevertheless, a participatory approach is more likely to make the evaluation findings useful, ensure that program knowledge is shared, and make outreach valuing transparent. PMID:16239958
Assessing the representativeness of wind data for wind turbine site evaluation
NASA Technical Reports Server (NTRS)
Renne, D. S.; Corotis, R. B.
1982-01-01
Once potential wind turbine sites (either for single installations or clusters) are identified through siting procedures, actual evaluation of the sites must commence. This evaluation is needed to obtain estimates of wind turbine performance and to identify hazards to the machine from the turbulence component of the atmosphere. These estimates allow for more detailed project planning and for preliminary financing arrangements to be secured. The site evaluation process can occur in two stages: (1) utilizing existing nearby data, and (2) establishing and monitoring an onsite measurement program. Since step (2) requires a period of at least 1 yr or more from the time a potential site has been identified, step (1) is often an essential stage in the preliminary evaluation process. Both the methods that have been developed and the unknowns that still exist in assessing the representativeness of available data to a nearby wind turbine site are discussed. How the assessment of the representativeness of available data can be used to develop a more effective onsite meteorological measurement program is also discussed.
Evaluation of telerobotic systems using an instrumented task board
NASA Technical Reports Server (NTRS)
Carroll, John D.; Gierow, Paul A.; Bryan, Thomas C.
1991-01-01
An instrumented task board was developed at NASA Marshall Space Flight Center (MSFC). An overview of the task board design, and current development status is presented. The task board was originally developed to evaluate operator performance using the Protoflight Manipulator Arm (PFMA) at MSFC. The task board evaluates tasks for Orbital Replacement Unit (ORU), fluid connect and transfers, electrical connect/disconnect, bolt running, and other basic tasks. The instrumented task board measures the 3-D forces and torques placed on the board, determines the robot arm's 3-D position relative to the task board using IR optics, and provides the information in real-time. The PFMA joint input signals can also be measured from a breakout box to evaluate the sensitivity or response of the arm operation to control commands. The data processing system provides the capability for post processing of time-history graphics and plots of the PFMA positions, the operator's actions, and the PFMA servo reactions in addition to real-time force/torque data presentation. The instrumented task board's most promising use is developing benchmarks for NASA centers for comparison and evaluation of telerobotic performance.
New Techniques to Evaluate the Incendiary Behavior of Insulators
NASA Technical Reports Server (NTRS)
Buhler, Charles; Calle, Carlos; Clements, Sid; Trigwell, Steve; Ritz, Mindy
2008-01-01
New techniques for evaluating the incendiary behavior of insulators are presented. The onset of incendive brush discharges in air is evaluated using standard spark probe techniques for the case simulating approaches of an electrically grounded sphere to a charged insulator in the presence of a flammable atmosphere. However, this standard technique is unsuitable for the case of brush discharges that may occur during the charging-separation process for two insulator materials. We present an experimental technique to evaluate this hazard in the presence of a flammable atmosphere that is ideally suited to measuring the incendiary nature of micro-discharges upon separation, a measurement never before performed. Other measurement techniques unique to this study include surface potential measurements of insulators before, during and after contact and separation, as well as methods to verify fieldmeter calibrations using a charged insulator surface as opposed to standard high-voltage plates. Key words: Kapton polyimide film, incendiary discharges, brush discharges, contact and frictional electrification, ignition hazards, insulators, contact angle, surface potential measurements.
21 CFR 315.5 - Evaluation of effectiveness.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 601.34 - Evaluation of effectiveness.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 315.5 - Evaluation of effectiveness.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 315.5 - Evaluation of effectiveness.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 601.34 - Evaluation of effectiveness.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 601.34 - Evaluation of effectiveness.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 315.5 - Evaluation of effectiveness.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 601.34 - Evaluation of effectiveness.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 601.34 - Evaluation of effectiveness.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
21 CFR 315.5 - Evaluation of effectiveness.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., physiological, or biochemical assessment is established by demonstrating in a defined clinical setting reliable measurement of function(s) or physiological, biochemical, or molecular process(es). (3) The claim of disease... demonstrating in a defined clinical setting that the test is useful in diagnostic or therapeutic patient...
Brain Oscillations during Semantic Evaluation of Speech
ERIC Educational Resources Information Center
Shahin, Antoine J.; Picton, Terence W.; Miller, Lee M.
2009-01-01
Changes in oscillatory brain activity have been related to perceptual and cognitive processes such as selective attention and memory matching. Here we examined brain oscillations, measured with electroencephalography (EEG), during a semantic speech processing task that required both lexically mediated memory matching and selective attention.…
Performance evaluation of PRIDE UNDA system with pyroprocessing feed material.
An, Su Jung; Seo, Hee; Lee, Chaehun; Ahn, Seong-Kyu; Park, Se-Hwan; Ku, Jeong-Hoe
2017-04-01
The PRIDE (PyRoprocessing Integrated inactive DEmonstration) is an engineering-scale pyroprocessing test-bed facility that utilizes depleted uranium (DU) instead of spent fuel as a process material. As part of the ongoing effort to enhance pyroprocessing safeguardability, UNDA (Unified Non-Destructive Assay), a system integrating three different non-destructive assay techniques, namely neutron, gamma-ray, and mass measurement, for nuclear material accountancy (NMA), was developed. In the present study, UNDA's NMA capability was evaluated by measurement of the weight, 238U mass, and U enrichment of oxide-reduction-process feed material (i.e., porous pellets). In the 238U mass determination, the total neutron counts for porous pellets of six different weights were measured. The U enrichment of the porous pellets, meanwhile, was determined from the gamma spectra acquired using UNDA's NaI-based enrichment measurement system. The results demonstrated that the UNDA system, after appropriate corrections, could be used in PRIDE NMA applications with reasonable uncertainty. It is expected that in the near future, the UNDA system will be tested with next-step materials such as the products of the oxide-reduction and electro-refining processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fauzi, Ilham; Muharram Hasby, Fariz; Irianto, Dradjad
2018-03-01
Although governments can issue mandatory standards that industry must obey, the respective industries themselves often have difficulty fulfilling the requirements described in those standards. This is especially true in many small and medium-sized enterprises that lack the capital to invest in standard-compliant equipment and machinery. This study aims to develop a set of measurement tools for evaluating the level of readiness of production technology with respect to the requirements of a product standard, based on the quality function deployment (QFD) method. By combining the QFD methodology, the UNESCAP Technometric model [9] and the Analytic Hierarchy Process (AHP), the model is used to measure a firm's capability to fulfill a government standard in the toy-making industry. Expert opinions from both the governmental officers responsible for setting and implementing standards and the industry practitioners responsible for managing manufacturing processes are collected and processed to identify the technological capabilities that the firm should improve to fulfill the existing standard. The study showed that the proposed model can successfully measure the gap between the requirements of the standard and the readiness of the technoware technological component in a particular firm.
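One of the building blocks combined in this model, the AHP weighting step, can be sketched as follows: priority weights are taken from the principal eigenvector of a pairwise-comparison matrix and checked with Saaty's consistency ratio. The judgement values below are hypothetical, not the study's data.

```python
# Sketch of the AHP building block used to weight criteria (hypothetical judgements,
# not the study's data): priorities from the principal eigenvector of a pairwise
# comparison matrix, with Saaty's consistency ratio as a sanity check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])           # hypothetical pairwise comparisons of 3 criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
ri = 0.58                                 # Saaty's random index for n = 3
print("weights:", np.round(w, 3), "CR =", round(ci / ri, 3))
```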
Peny-Dahlstrand, Marie; Gosman-Hedström, Gunilla; Krumlinde-Sundholm, Lena
2012-01-01
In many studies of self-care assessments for children, cultural differences in age-norm values have been shown. No study has evaluated whether there are cross-cultural differences in ADL motor and/or process skills in children when measured with the Assessment of Motor and Process Skills (AMPS). To investigate if there were systematic differences in ADL ability measured with the AMPS between children from the Nordic countries and North America and to evaluate the applicability of the existing international age-normative values for children from these two regions. Values from a total of 4 613 children, 3-15 years old, without known disabilities, from these geographical regions were compared with ANOVA. The difference in logits between each region and the mean values for each age group were calculated. No differences of relevance in age-related ADL ability measures between children from the two geographical regions were found, and the age-norm values are applicable to both regions. The AMPS may be considered free from cultural bias and useful in both clinical practice and research concerned with children in both the Nordic countries and North America.
Welding process modelling and control
NASA Technical Reports Server (NTRS)
Romine, Peter L.; Adenwala, Jinen A.
1993-01-01
The research and analysis performed, the software developed, and the hardware/software recommendations made during 1992 in the development of the PC-based data acquisition system supporting Welding Process Modeling and Control are reported. A need was identified by the Metals Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument, strictly for data acquisition and analysis. Although the WMS supports many of the functions associated with process control, it is not intended to be used for welding process control.
NASA Technical Reports Server (NTRS)
Powell, W. B.
1973-01-01
Thrust chamber performance is evaluated in terms of an analytical model incorporating all the loss processes that occur in a real rocket motor. The important loss processes in the real thrust chamber were identified, and a methodology and recommended procedure for predicting real thrust chamber vacuum specific impulse were developed. Simplified equations for the calculation of vacuum specific impulse are developed to relate the delivered performance (both vacuum specific impulse and characteristic velocity) to the ideal performance as degraded by the losses corresponding to a specified list of loss processes. These simplified equations enable the various performance loss components, and the corresponding efficiencies, to be quantified separately (except that interaction effects are arbitrarily assigned in the process). The loss and efficiency expressions presented can be used to evaluate experimentally measured thrust chamber performance, to direct development effort into the areas most likely to yield improvements in performance, and as a basis to predict performance of related thrust chamber configurations.
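The report's exact loss bookkeeping is not reproduced here, but simplified equations of this kind typically take the multiplicative-efficiency form sketched below, with the delivered vacuum specific impulse written as the ideal one-dimensional-equilibrium value degraded by efficiencies for individual loss processes (the loss categories named here are illustrative, not necessarily the report's list) and with the characteristic-velocity efficiency evaluated from measured data.

```latex
% Illustrative form only; the report's own loss list and notation may differ.
I_{sp,vac}^{del} = \eta_{div}\,\eta_{kin}\,\eta_{BL}\,\eta_{ER}\; I_{sp,vac}^{ODE},
\qquad
I_{sp,vac} = \frac{c^{*}\, C_{F,vac}}{g_0},
\qquad
\eta_{c^{*}} = \frac{c^{*}_{measured}}{c^{*}_{ideal}}
```

Here the η factors stand for illustrative divergence, finite-rate kinetics, boundary-layer, and energy-release (mixing) losses, c* is the characteristic velocity, and C_F,vac is the vacuum thrust coefficient.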
Spapé, M M; Harjunen, Ville; Ravaja, N
2017-03-01
Being touched is known to affect emotion, and even a casual touch can elicit positive feelings and affinity. Psychophysiological studies have recently shown that tactile primes affect visual evoked potentials to emotional stimuli, suggesting altered affective stimulus processing. As, however, these studies approached emotion from a purely unidimensional perspective, it remains unclear whether touch biases emotional evaluation or a more general feature such as salience. Here, we investigated how simple tactile primes modulate event related potentials (ERPs), facial EMG and cardiac response to pictures of facial expressions of emotion. All measures replicated known effects of emotional face processing: Disgust and fear modulated early ERPs, anger increased the cardiac orienting response, and expressions elicited emotion-congruent facial EMG activity. Tactile primes also affected these measures, but priming never interacted with the type of emotional expression. Thus, touch may additively affect general stimulus processing, but it does not bias or modulate immediate affective evaluation. Copyright © 2017. Published by Elsevier B.V.
Evaluating building performance in healthcare facilities: an organizational perspective.
Steinke, Claudia; Webster, Lynn; Fontaine, Marie
2010-01-01
Using the environment as a strategic tool is one of the most cost-effective and enduring approaches for improving public health; however, it is one that requires multiple perspectives. The purpose of this article is to highlight an innovative methodology that has been developed for conducting comprehensive performance evaluations in public sector health facilities in Canada. The building performance evaluation methodology described in this paper is a government initiative. The project team developed a comprehensive building evaluation process for all new capital health projects that would respond to the aforementioned need for stakeholders to be more accountable and to better integrate the larger organizational strategy of facilities. The Balanced Scorecard, which is a multiparadigmatic, performance-based business framework, serves as the underlying theoretical framework for this initiative. It was applied in the development of the conceptual model entitled the Building Performance Evaluation Scorecard, which provides the following benefits: (1) It illustrates a process to link facilities more effectively to the overall mission and goals of an organization; (2) It is both a measurement and a management system that has the ability to link regional facilities to measures of success and larger business goals; (3) It provides a standardized methodology that ensures consistency in assessing building performance; and (4) It is more comprehensive than traditional building evaluations. The methodology presented in this paper is both a measurement and management system that integrates the principles of evidence-based design with the practices of pre- and post-occupancy evaluation. It promotes accountability and continues throughout the life cycle of a project. The advantage of applying this framework is that it engages health organizations in clarifying a vision and strategy for their facilities and helps translate those strategies into action and measurable performance outcomes.
Evaluation of parameters of color profile models of LCD and LED screens
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on the Grassmann’s Law of additive color mixture. Mathematically the problem is to evaluate unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods of evaluation of these screen profile coefficients were developed. These methods are based either on processing of some colorimetric measurements or on processing of technical documentation data.
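Because Grassmann additivity implies a linear relation between (linearised) device drive values and tristimulus values, one natural parametric-identification step is an ordinary least-squares fit of the 3x3 matrix from measured test patches; the sketch below illustrates this with placeholder colorimeter readings and is not the authors' specific method.

```python
# Sketch of the parametric-identification step implied by Grassmann additivity (not the
# authors' exact procedure): estimate the 3x3 matrix mapping device RGB to CIE XYZ by
# least squares over measured test patches. Numbers below are placeholders.
import numpy as np

rgb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0],
                [0.5, 0.5, 0.0]])          # displayed (linearised) RGB stimuli
xyz_meas = np.array([[41.2, 21.3,  1.9],
                     [35.8, 71.5, 11.9],
                     [18.0,  7.2, 95.0],
                     [95.0, 99.9, 108.9],
                     [38.6, 46.4,  6.9]])  # colorimeter readings (placeholder values)

# Solve xyz = rgb @ M.T for M in the least-squares sense.
M, *_ = np.linalg.lstsq(rgb, xyz_meas, rcond=None)
M = M.T                                    # rows map R,G,B contributions to X,Y,Z
print(np.round(M, 2))
```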
Schön, Ulla-Karin; Svedberg, Petra; Rosenberg, David
2015-05-01
Recovery is understood to be an individual process that cannot be controlled, but can be supported and facilitated at the individual, organizational and system levels. Standardized measures of recovery may play a critical role in contributing to the development of a recovery-oriented system. The INSPIRE measure is a 28-item service user-rated measure of recovery support. INSPIRE assesses both the individual preferences of the user in the recovery process and their experience of support from staff. The aim of this study was to evaluate the psychometric properties of the Swedish version of the INSPIRE measure, for potential use in Swedish mental health services and in order to promote recovery in mental illness. The sample consisted of 85 participants from six community mental health services targeting people with a diagnosis of psychosis in a municipality in Sweden. For the test-retest evaluation, 78 participants completed the questionnaire 2 weeks later. The results in the present study indicate that the Swedish version of the INSPIRE measure had good face and content validity, satisfactory internal consistency and some level of instability in test-retest reliability. While further studies that test the instrument in a larger and more diverse clinical context are needed, INSPIRE can be considered a relevant and feasible instrument to utilize in supporting the development of a recovery-oriented system in Sweden.
Improving Reliability of a Residency Interview Process
Serres, Michelle L.; Gundrum, Todd E.
2013-01-01
Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually-evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phase 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while confirmatory phase 3 was 0.922. Process reliability improved with more stations despite fewer interviewers per station—impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity. PMID:24159209
Szulczyński, Bartosz; Wasilewski, Tomasz; Wojnowski, Wojciech; Majchrzak, Tomasz; Dymerski, Tomasz; Namieśnik, Jacek; Gębicki, Jacek
2017-01-01
This review paper presents different ways to apply a measurement instrument of e-nose type to evaluate ambient air with respect to detection of the odorants characterized by unpleasant odour in a vicinity of municipal processing plants. An emphasis was put on the following applications of the electronic nose instruments: monitoring networks, remote controlled robots and drones as well as portable devices. Moreover, this paper presents commercially available sensors utilized in the electronic noses and characterized by the limit of quantification below 1 ppm v/v, which is close to the odour threshold of some odorants. Additionally, information about bioelectronic noses being a possible alternative to electronic noses and their principle of operation and application potential in the field of air evaluation with respect to detection of the odorants characterized by unpleasant odour was provided. PMID:29156597
Szulczyński, Bartosz; Wasilewski, Tomasz; Wojnowski, Wojciech; Majchrzak, Tomasz; Dymerski, Tomasz; Namieśnik, Jacek; Gębicki, Jacek
2017-11-19
This review paper presents different ways to apply a measurement instrument of e-nose type to evaluate ambient air with respect to detection of the odorants characterized by unpleasant odour in a vicinity of municipal processing plants. An emphasis was put on the following applications of the electronic nose instruments: monitoring networks, remote controlled robots and drones as well as portable devices. Moreover, this paper presents commercially available sensors utilized in the electronic noses and characterized by the limit of quantification below 1 ppm v / v , which is close to the odour threshold of some odorants. Additionally, information about bioelectronic noses being a possible alternative to electronic noses and their principle of operation and application potential in the field of air evaluation with respect to detection of the odorants characterized by unpleasant odour was provided.
Benitez-Rosario, Miguel Angel; Caceres-Miranda, Raquel; Aguirre-Jaime, Armando
2016-03-01
A reliable and valid measure of the structure and process of end-of-life care is important for improving the outcomes of care. This study evaluated the validity and reliability of the Spanish adaptation of a satisfaction tool of the Care Evaluation Scale (CES), which was developed in Japan to evaluate palliative care structure and process from the perspective of family members. Standard forward-backward translation and a pilot test were conducted. A multicenter survey was conducted with the relatives of patients admitted to palliative care units for symptom control. The dimensional structure was assessed using confirmatory factor analyses. Concurrent and discriminant validity were tested by correlation with the SERQVHOS, a Spanish hospital care satisfaction scale and with an 11-point rating scale on satisfaction with care. The reliability of the CES was tested by Cronbach α and by test-retest correlation. A total of 284 primary caregivers completed the CES, with low missing response rates. The results of the factor analysis suggested a six-factor solution explaining 69% of the total variance. The CES moderately correlated with the SERQVHOS and with the overall satisfaction scale (intraclass correlation coefficients of 0.66 and 0.44, respectively; P = 0.001). Cronbach α was 0.90 overall and ranged from 0.85 to 0.89 for subdomains. Intraclass correlation coefficient was 0.88 (P = 0.001) for test-retest analysis. The Spanish CES was found to be a reliable and valid measure of the satisfaction with end-of-life care structure and process from family members' perspectives. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
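For reference, the internal-consistency statistic reported above, Cronbach's alpha, can be computed from a respondents-by-items score matrix as sketched below; the data are synthetic and unrelated to the CES sample.

```python
# Small sketch of the internal-consistency statistic reported in the study: Cronbach's
# alpha from a respondents x items score matrix (synthetic data, not the CES dataset).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + 0.6 * rng.normal(size=(200, 10))   # 10 items driven by one latent trait
print(round(cronbach_alpha(items), 2))
```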
NASA Astrophysics Data System (ADS)
Field, Robert; Kim, Daehyun; Kelley, Max; LeGrande, Allegra; Worden, John; Schmidt, Gavin
2014-05-01
Observational and theoretical arguments suggest that satellite retrievals of the stable isotope composition of water vapor could be useful for climate model evaluation. The isotopic composition of water vapor is controlled by the same processes that control water vapor amount, but the observed distribution of isotopic composition is distinct from amount itself. This is due to the fractionation that occurs between the abundant H₂¹⁶O isotopes (isotopologues) and the rare and heavy H₂¹⁸O and HDO isotopes during evaporation and condensation. The fractionation physics are much simpler than the underlying moist physics; discrepancies between observed and modeled isotopic fields are more likely due to problems in the latter. Isotopic measurements therefore have the potential for identifying problems that might not be apparent from more conventional measurements. Isotopic tracers have existed in climate models since the 1980s but it is only since the mid 2000s that there have been enough data for meaningful model evaluation in this sense, in the troposphere at least. We have evaluated the NASA GISS ModelE2 general circulation model over the tropics against water isotope (HDO/H2O) retrievals from the Aura Tropospheric Emission Spectrometer (TES), alongside more conventional measurements. A small ensemble of experiments was performed with physics perturbations to the cumulus and planetary boundary layer schemes, done in the context of the normal model development process. We examined the degree to which model-data agreement could be used to constrain a select group of internal processes in the model, namely condensate evaporation, entrainment strength, and moist convective air mass flux. All are difficult to parameterize, but exert strong influence over model performance. We found that the water isotope composition was significantly more sensitive to physics changes than precipitation, temperature or relative humidity through the depth of the tropical troposphere. Among the processes considered, this was most closely, and fairly exclusively, related to mid-tropospheric entrainment strength. This demonstrates that water isotope retrievals have considerable potential alongside more conventional measurements for climate model evaluation and development.
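A minimal illustration of the "simple fractionation physics" referred to above (not the ModelE2 isotope scheme) is Rayleigh distillation of HDO in a condensing air parcel, expressed in δD notation; the fractionation factor and starting composition below are illustrative values.

```python
# Illustration of the simple fractionation physics the abstract refers to (not the
# GISS ModelE2 scheme): Rayleigh distillation of HDO in a cooling, condensing air
# parcel, expressed in delta notation. Alpha is held constant here for simplicity.
import numpy as np

R_VSMOW = 155.76e-6          # HDO/H2O ratio of the VSMOW standard
alpha = 1.08                 # liquid-vapour equilibrium fractionation factor (illustrative)

def delta_d(R):
    return (R / R_VSMOW - 1.0) * 1000.0   # per mil

f = np.linspace(1.0, 0.1, 10)             # fraction of initial vapour remaining
R0 = R_VSMOW * (1.0 - 80.0 / 1000.0)      # start at deltaD = -80 per mil
R_vapour = R0 * f ** (alpha - 1.0)        # Rayleigh: R = R0 * f^(alpha - 1)
print(np.round(delta_d(R_vapour), 1))     # vapour becomes progressively more depleted
```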
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.; Smart, A. E.
1979-01-01
A laser transit anemometer measured a two-dimensional vector velocity, using the transit time of scattering particles between two focused and parallel laser beams. The objectives were: (1) the determination of the concentration levels and light scattering efficiencies of naturally occurring, submicron particles in the NASA/Ames unitary wind tunnel and (2) the evaluation based on these measured data of a laser transit anemometer with digital correlation processing for nonintrusive velocity measurement in this facility. The evaluation criteria were the speeds at which point velocity measurements could be realized with this technique (as determined from computer simulations) for given accuracy requirements.
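The correlation-processing idea can be sketched as follows (this is an illustration, not the 1979 instrument's processor): cross-correlate the two photodetector signals, read the transit time from the lag of the correlation peak, and divide the known beam separation by that time; the sample rate, beam separation, and synthetic signals are assumptions.

```python
# Sketch of the correlation-processing idea behind a laser transit anemometer (not the
# original hardware correlator): the lag of the cross-correlation peak between the two
# photodetector signals gives the particle transit time between the two beams.
import numpy as np

fs = 1.0e6                      # sample rate, Hz (assumed)
beam_sep = 500e-6               # beam separation, m (assumed)
t = np.arange(5000) / fs

rng = np.random.default_rng(2)
bursts = np.zeros_like(t)
bursts[rng.integers(0, len(t) - 100, size=30)] = 1.0      # random particle arrivals
delay = int(25e-6 * fs)                                    # true transit time: 25 us
sig1 = bursts + 0.05 * rng.normal(size=t.size)
sig2 = np.roll(bursts, delay) + 0.05 * rng.normal(size=t.size)

xcorr = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), mode="full")
lag = np.argmax(xcorr) - (len(t) - 1)                      # delay in samples
transit = lag / fs
print(f"speed ~ {beam_sep / transit:.1f} m/s")             # 500e-6 / 25e-6 = 20 m/s
```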
NASA Astrophysics Data System (ADS)
Tsuchida, Yuji; Enokizono, Masato
2018-04-01
The iron loss of industrial motors is increased by residual stress introduced during manufacturing processes. Clarifying the distribution of residual stress in the motor cores is therefore important for reducing iron loss in the motors. Barkhausen signals, which occur in electrical steel sheets, can be used to evaluate residual stress because they are very sensitive to material properties. Generally, a B-sensor is used to measure Barkhausen signals; here, we developed a new H-sensor to measure them and applied it to stress evaluation. Our stress-evaluation results suggest that Barkhausen signals measured with an H-sensor are highly sensitive to residual stress in electrical steel sheets. We evaluated the tensile stress of electrical steel sheets for high-efficiency electrical motors by measuring Barkhausen signals with the developed H-sensor.
[Supply services at health facilities: measuring performance].
Dacosta Claro, I
2001-01-01
Performance measurement, in its different meanings--either balanced scorecard or output measurement--has become an essential tool in today's organizations (World-Class organizations) to improve service quality and reduce costs. This paper presents a performance measurement system for the hospital supply chain. The system is organized in different levels and groups of indicators in order to show a hierarchical, coherent and integrated vision of the processes. Thus, supply services performance is measured according to (1) financial aspects, (2) customer satisfaction aspects and (3) internal aspects of the processes performed. Since the informational needs of the managers vary within the administrative structure, the performance measurement system is defined at three hierarchical levels: first, the whole supply chain, with the different interrelations of activities; second, the three main processes of the chain--physical management of products, purchasing and negotiation processes, and the local storage units; and finally, the performance measurement of each activity involved. The system and the indicators were evaluated with the participation of 17 health services in Quebec (Canada); given the similarity of operations, they could equally be implemented in Spanish hospitals.
Watts, Adreanna T M; Tootell, Anne V; Fix, Spencer T; Aviyente, Selin; Bernat, Edward M
2018-04-29
The neurophysiological mechanisms involved in the evaluation of performance feedback have been widely studied in the ERP literature over the past twenty years, but understanding has been limited by the use of traditional time-domain amplitude analytic approaches. Gambling outcome valence has been identified as an important factor modulating event-related potential (ERP) components, most notably the feedback negativity (FN). Recent work employing time-frequency analysis has shown that processes indexed by the FN are confounded in the time-domain and can be better represented as separable feedback-related processes in the theta (3-7 Hz) and delta (0-3 Hz) frequency bands. In addition to time-frequency amplitude analysis, phase synchrony measures have begun to further our understanding of performance evaluation by revealing how feedback information is processed within and between various brain regions. The current study aimed to provide an integrative assessment of time-frequency amplitude, inter-trial phase synchrony, and inter-channel phase synchrony changes following monetary feedback in a gambling task. Results revealed that time-frequency amplitude activity explained separable loss and gain processes confounded in the time-domain. Furthermore, phase synchrony measures explained unique variance above and beyond amplitude measures and demonstrated enhanced functional integration between medial prefrontal and bilateral frontal, motor, and occipital regions for loss relative to gain feedback. These findings demonstrate the utility of assessing time-frequency amplitude, inter-trial phase synchrony, and inter-channel phase synchrony together to better elucidate the neurophysiology of feedback processing. Copyright © 2017. Published by Elsevier B.V.
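The abstract does not show how inter-trial phase synchrony is computed. As an illustration of that class of measure only, here is a minimal Python sketch of inter-trial phase coherence (ITPC) on band-passed, feedback-locked epochs; the data are synthetic and the function name is hypothetical.

import numpy as np
from scipy.signal import hilbert

def inter_trial_phase_coherence(trials):
    """Inter-trial phase synchrony at each time point.
    trials: array (n_trials, n_samples) of band-passed single-trial EEG.
    Returns values in [0, 1]; 1 means identical phase across trials."""
    phases = np.angle(hilbert(trials, axis=1))      # instantaneous phase per trial
    return np.abs(np.exp(1j * phases).mean(axis=0))

# Synthetic theta-band (5 Hz) epochs with moderate phase jitter across 40 trials
rng = np.random.default_rng(1)
t = np.arange(0, 1.0, 1 / 250)
trials = np.sin(2 * np.pi * 5 * t + rng.normal(scale=0.4, size=(40, 1)))
print(inter_trial_phase_coherence(trials)[:5].round(2))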
Evaluation of complex community-based childhood obesity prevention interventions.
Karacabeyli, D; Allender, S; Pinkney, S; Amed, S
2018-05-16
Multi-setting, multi-component community-based interventions have shown promise in preventing childhood obesity; however, evaluation of these complex interventions remains a challenge. The objective of the study is to systematically review published methodological approaches to outcome evaluation for multi-setting community-based childhood obesity prevention interventions and synthesize a set of pragmatic recommendations. MEDLINE, CINAHL and PsycINFO were searched from inception to 6 July 2017. Papers were included if the intervention targeted children ≤18 years, engaged at least two community sectors and described their outcome evaluation methodology. A single reviewer conducted title and abstract scans, full article review and data abstraction. Directed content analysis was performed by three reviewers to identify prevailing themes. Thirty-three studies were included, and of these, 26 employed a quasi-experimental design; the remaining were randomized controlled trials. Body mass index was the most commonly measured outcome, followed by health behaviour change and psychosocial outcomes. Six themes emerged, highlighting advantages and disadvantages of active vs. passive consent, quasi-experimental vs. randomized controlled trials, longitudinal vs. repeat cross-sectional designs and the roles of process evaluation and methodological flexibility in evaluating complex interventions. Selection of study designs and outcome measures compatible with community infrastructure, accompanied by process evaluation, may facilitate successful outcome evaluation. © 2018 World Obesity Federation.
Evaluation, Instruction and Policy Making. IIEP Seminar Paper: 9.
ERIC Educational Resources Information Center
Bloom, Benjamin S.
Recently, educational evaluation has attempted to use the precision, objectivity, and mathematical rigor of the psychological measurement field as well as to find ways in which instrumentation and data utilization could more directly be related to educational institutions, educational processes, and educational purposes. The linkages between…
College Students' Instructional Expectations and Evaluations.
ERIC Educational Resources Information Center
Calista, Donald J.
Typical end-of-course faculty ratings were questioned for their inability to measure actual classroom interaction. Extending the concept of these evaluations to include the student instructional expectations dimension, the study proposed that the classroom experience be related to the process and systems approaches, more dependent upon monitoring…
Exploring Operational Test and Evaluation of Unmanned Aircraft Systems: A Qualitative Case Study
NASA Astrophysics Data System (ADS)
Saliceti, Jose A.
The purpose of this qualitative case study was to explore and identify strategies that may potentially remedy operational test and evaluation procedures used to evaluate Unmanned Aircraft Systems (UAS) technology. The sample for analysis consisted of organizations testing and evaluating UASs (e.g., U.S. Air Force, U.S. Navy, U.S. Army, U.S. Marine Corps, U.S. Coast Guard, and Customs and Border Protection). A purposeful sampling technique was used to select 15 subject matter experts in the field of operational test and evaluation of UASs. A questionnaire was administered to participants to support a descriptive and robust study. Analysis of responses revealed themes related to each research question. Findings revealed that operational testers utilized requirements documents to extrapolate measures for testing UAS technology and to develop critical operational issues. The requirements documents were (a) developed without the contribution of stakeholders and operational testers, (b) developed with vague or unrealistic measures, and (c) developed without a systematic method to derive requirements from mission tasks. Four approaches are recommended to develop testable operational requirements and assist operational testers: (a) use a mission task analysis tool to derive requirements for mission-essential tasks for the system, (b) exercise collaboration among stakeholders and testers to ensure testable operational requirements based on mission tasks, (c) ensure testable measures are used in requirements documents, and (d) create a repository list of critical operational issues by mission area. The preparation of operational test and evaluation processes for UAS technology is not uniform across testers; the processes in place are not standardized, so test plan preparation and reporting differ among participants. A standard method should therefore be adopted for preparing test plans and reporting on UAS technology. Using a systematic process, such as mission-based test design, resonated among participants as an analytical method to link UAS mission tasks and measures of performance to the capabilities of the system under test when developing operational test plans. Further research should examine systems engineering designs for a system requirements traceability matrix of mission tasks and subtasks while using an analysis tool that adequately evaluates UASs with an acceptable level of confidence in the results.
NASA Technical Reports Server (NTRS)
O'Connor, Brian; Hernandez, Deborah; Hornsby, Linda; Brown, Maria; Horton-Mullins, Kathryn
2017-01-01
Outline: Background of ISS (International Space Station) Material Science Research Rack; NASA SCA (Sample Cartridge Assembly) Design; GEDS (Gravitational Effects in Distortion in Sintering) Experiment Ampoule Design; Development Testing Summary; Thermal Modeling and Analysis. Summary: GEDS design development challenging (GEDS Ampoule design developed through MUGS (Microgravity) testing; Short duration transient sample processing; Unable to measure sample temperatures); MUGS Development testing used to gather data (Actual LGF (Low Gradient Furnace)-like furnace response; Provided sample for sintering evaluation); Transient thermal model integral to successful GEDS experiment (Development testing provided furnace response; PI (Performance Indicator) evaluation of sintering anchored model evaluation of processing durations; Thermal transient model used to determine flight SCA sample processing profiles).
Evaluating WRF Simulations of Urban Boundary Layer Processes during DISCOVER-AQ
NASA Astrophysics Data System (ADS)
Hegarty, J. D.; Henderson, J.; Lewis, J. R.; McGrath-Spangler, E. L.; Scarino, A. J.; Ferrare, R. A.; DeCola, P.; Welton, E. J.
2015-12-01
The accurate representation of processes in the planetary boundary layer (PBL) in meteorological models is of prime importance to air quality and greenhouse gas simulations as it governs the depth to which surface emissions are vertically mixed and influences the efficiency by which they are transported downwind. In this work we evaluate high resolution (~1 km) WRF simulations of PBL processes in the Washington DC - Baltimore and Houston urban areas during the respective DISCOVER-AQ 2011 and 2013 field campaigns using MPLNET micro-pulse lidar (MPL), mini-MPL, airborne high spectral resolution lidar (HSRL), Doppler wind profiler and CALIPSO satellite measurements along with complementary surface and aircraft measurements. We will discuss how well WRF simulates the spatiotemporal variability of the PBL height in the urban areas and the development of fine-scale meteorological features such as bay and sea breezes that influence the air quality of the urban areas studied.
Containerless high temperature property measurements
NASA Technical Reports Server (NTRS)
Nordine, Paul C.; Weber, J. K. Richard; Krishnan, Shankar; Anderson, Collin D.
1991-01-01
Containerless processing in the low gravity environment of space provides the opportunity to increase the temperature at which well-controlled processing of, and property measurements on, materials are possible. This project was directed towards advancing containerless processing and property measurement techniques for application to materials research at high temperatures in space. Containerless high temperature material property studies include measurements of the vapor pressure, melting temperature, optical properties, and spectral emissivities of solid boron. The reaction of boron with nitrogen was also studied by laser polarimetric measurement of boron nitride film growth. The optical properties and spectral emissivities were measured for solid and liquid silicon, niobium, and zirconium; liquid aluminum and titanium; and liquid Ti-Al alloys of 5 to 60 atomic pct. titanium. Alternative means for noncontact temperature measurement in the absence of material emissivity data were evaluated. Also, the application of laser induced fluorescence for component activity measurements in electromagnetically levitated liquids was studied, along with the feasibility of a hybrid aerodynamic electromagnetic levitation technique.
Bower, Peter; Roberts, Chris; O'Leary, Neil; Callaghan, Patrick; Bee, Penny; Fraser, Claire; Gibbons, Chris; Olleveant, Nicola; Rogers, Anne; Davies, Linda; Drake, Richard; Sanders, Caroline; Meade, Oonagh; Grundy, Andrew; Walker, Lauren; Cree, Lindsey; Berzins, Kathryn; Brooks, Helen; Beatty, Susan; Cahoon, Patrick; Rolfe, Anita; Lovell, Karina
2015-08-13
Involving service users in planning their care is at the centre of policy initiatives to improve mental health care quality in England. Whilst users value care planning and want to be more involved in their own care, there is substantial empirical evidence that the majority of users are not fully involved in the care planning process. Our aim is to evaluate the effectiveness and cost-effectiveness of training for mental health professionals in improving user involvement with the care planning processes. This is a cluster randomised controlled trial of community mental health teams in NHS Trusts in England allocated either to a training intervention to improve user and carer involvement in care planning or control (no training and care planning as usual). We will evaluate the effectiveness of the training intervention using a mixed design, including a 'cluster cohort' sample, a 'cluster cross-sectional' sample and process evaluation. Service users will be recruited from the caseloads of care co-ordinators. The primary outcome will be change in self-reported involvement in care planning as measured by the validated Health Care Climate Questionnaire. Secondary outcomes include involvement in care planning, satisfaction with services, medication side-effects, recovery and hope, mental health symptoms, alliance/engagement, well-being and quality of life. Cost-effectiveness will also be measured. A process evaluation informed by implementation theory will be undertaken to assess the extent to which the training was implemented and to gauge sustainability beyond the time-frame of the trial. It is hoped that the trial will generate data to inform mental health care policy and practice on care planning. ISRCTN16488358 (14 May 2014).
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
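As a concrete illustration of the Shewhart control chart machinery applied to check-standard data (the constant 2.66 = 3/d2 is the standard individuals-chart factor; the measurement series below is hypothetical, not Langley data), a minimal Python sketch:

import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (X) chart limits from a check-standard series,
    using the average moving range; 2.66 = 3/d2 with d2 = 1.128 for n = 2."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x)).mean()
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

# Hypothetical repeat check-standard drag-coefficient measurements
cd = [0.0252, 0.0249, 0.0251, 0.0254, 0.0250, 0.0248, 0.0253]
lcl, cl, ucl = individuals_chart_limits(cd)
print(f"LCL={lcl:.4f}  CL={cl:.4f}  UCL={ucl:.4f}")

A new check-standard result falling outside these limits would flag the measurement process as out of statistical control.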
Information Theory Broadens the Spectrum of Molecular Ecology and Evolution.
Sherwin, W B; Chao, A; Jost, L; Smouse, P E
2017-12-01
Information or entropy analysis of diversity is used extensively in community ecology, and has recently been exploited for prediction and analysis in molecular ecology and evolution. Information measures belong to a spectrum (or q profile) of measures whose contrasting properties provide a rich summary of diversity, including allelic richness (q=0), Shannon information (q=1), and heterozygosity (q=2). We present the merits of information measures for describing and forecasting molecular variation within and among groups, comparing forecasts with data, and evaluating underlying processes such as dispersal. Importantly, information measures directly link causal processes and divergence outcomes, have a straightforward relationship to allele frequency differences (including monotonicity that q=2 lacks), and show additivity across hierarchical layers such as ecology, behaviour, cellular processes, and nongenetic inheritance. Copyright © 2017 Elsevier Ltd. All rights reserved.
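The q profile described above can be illustrated with Hill numbers (effective numbers of alleles). This sketch is illustrative only: the allele frequencies are hypothetical and the function name is not from the paper.

import numpy as np

def hill_number(p, q):
    """Hill number (effective number of alleles) of order q for frequencies p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                 # q -> 1 limit: exp(Shannon entropy)
        return float(np.exp(-(p * np.log(p)).sum()))
    return float((p ** q).sum() ** (1.0 / (1.0 - q)))

freqs = [0.5, 0.3, 0.15, 0.05]             # hypothetical allele frequencies
for q in (0, 1, 2):
    print(q, round(hill_number(freqs, q), 3))
# q=0 counts alleles (richness); q=1 is exp(Shannon information); q=2 is the
# inverse Simpson index 1 / sum(p_i^2), the effective-number counterpart of
# expected heterozygosity 1 - sum(p_i^2).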
Methods of Measurement for Semiconductor Materials, Process Control, and Devices
NASA Technical Reports Server (NTRS)
Bullis, W. M. (Editor)
1973-01-01
The development of methods of measurement for semiconductor materials, process control, and devices is reported. Significant accomplishments include: (1) Completion of an initial identification of the more important problems in process control for integrated circuit fabrication and assembly; (2) preparations for making silicon bulk resistivity wafer standards available to the industry; and (3) establishment of the relationship between carrier mobility and impurity density in silicon. Work is continuing on measurement of resistivity of semiconductor crystals; characterization of generation-recombination-trapping centers, including gold, in silicon; evaluation of wire bonds and die attachment; study of scanning electron microscopy for wafer inspection and test; measurement of thermal properties of semiconductor devices; determination of S-parameters and delay time in junction devices; and characterization of noise and conversion loss of microwave detector diodes.
Personality and Attitude Determinants of Voting Behavior
ERIC Educational Resources Information Center
Brigham, John C.; Severy, Lawrence J.
1976-01-01
Measures of racial attitude, conceptual style, commitment to candidate and electoral process, social-political evaluation, and voting intentions, were administered to white college students (N=320) before the 1972 Presidential election. Prediction of behavioral intentions becomes more powerful as attitudinal measures are made more directly…
Hessel, F P; Wittmann, M; Petro, W; Wasem, J
2000-07-01
Studies in health economics, especially economic evaluations of health care technologies and programmes, are becoming increasingly important. However, in Germany there are no established, validated and commonly used instruments for the costing process. For the economic evaluation of a rehabilitation programme for patients with chronic lung diseases such as asthma and chronic bronchitis, we developed methods for identification, measurement and validation of resource use during the inpatient rehabilitation programme and during the outpatient follow-up period. These methods are based on methodological considerations as well as on practical experience from conducting a pilot study. With regard to the inpatient setting, all relevant diagnostic and therapeutic resource uses could be measured based on routine clinical documentation and validated using the cost accounting of the clinic. For measuring the use of resources during the follow-up period in an outpatient setting, no reliable administrative data are accessible. Hence, we compared a standardised retrospective patient questionnaire used in a 20-minute interview (n = 50) and a cost diary for continuing documentation by the patient over a period of 4 weeks (n = 50). Both tools were useful for measuring all relevant resource uses in sufficient detail, but because of higher participation rates and lower dropout the structured interview appears to be more suitable. Average total costs per month were 1591 DM (interview) and 1867 DM (cost diary). Besides productivity loss, costs for medication and GP visits accounted for the largest share of resource use. Practicable instruments were developed for the costing process as part of an economic evaluation in a German rehabilitation setting for pulmonary diseases. After individual modification, these could also be used for different indications and in other institutional settings.
Use of Unmanned Aerial Systems to Study Atmospheric Processes During Sea Ice Freeze Up
NASA Astrophysics Data System (ADS)
de Boer, G.; Lawrence, D.; Weibel, D.; Borenstein, S.; Bendure, A.; Solomon, A.; Intrieri, J. M.
2017-12-01
In October 2016, a team of scientists deployed to Oliktok Point, Alaska to make atmospheric measurements as part of the Evaluation of Routine Atmospheric Sounding measurements using Unmanned Systems (ERASMUS) and Inaugural Campaigns for ARM Research using Unmanned Systems (ICARUS) campaigns. The deployment included operations using the University of Colorado DataHawk2 UAS. The DataHawk2 was configured to make measurements of atmospheric thermodynamics, wind and surface temperature, providing information on lower tropospheric thermodynamic structure, turbulent surface fluxes, and surface temperature. During this campaign, the team experienced a variety of weather regimes and witnessed the development of near shore sea ice. In this presentation, we will give an overview of the measurements obtained during this time and how they were used to better understand freeze up processes in this coastal environment. Additionally, we will provide insight into how these platforms are being used for evaluation of a fully-coupled sea ice forecast model operated by NOAA's Physical Sciences Division.
PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.
Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A
2016-03-01
The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-V criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.
2008-11-01
[Only fragments of this report were extracted. They reference use of Model 4 for MANPRINT evaluation of the M1A2 in the IOTE; the All Source Analysis System (ASAS) Block II Initial Operational Test and Evaluation (IOTE); noise and temperature measurements in and around the HETS; and planning of a LUTE, IOTE, or Follow-on Test and Evaluation (FOTE).]
Rheology as a tool for evaluation of melt processability of innovative dosage forms.
Aho, Johanna; Boetker, Johan P; Baldursdottir, Stefania; Rantanen, Jukka
2015-10-30
Future manufacturing of pharmaceuticals will involve innovative use of polymeric excipients. Hot melt extrusion (HME) is an already established manufacturing technique and several products based on HME are on the market. Additionally, processing based on, e.g., HME or three-dimensional (3D) printing will have an increasingly important role when designing products for flexible dosing, since dosage forms based on compacting of a given powder mixture do not enable manufacturing of optimal pharmaceutical products for personalized treatments. The melt processability of polymers and API-polymer mixtures is highly dependent on the rheological properties of these systems, and rheological measurements should be considered a more central part of the material characterization tool box when selecting suitable candidates for melt processing by, e.g., HME or 3D printing. The polymer processing industry offers established platforms, methods, and models for rheological characterization, and they can often be readily applied in the field of pharmaceutical manufacturing. Thoroughly measured and calculated rheological parameters, together with thermal and mechanical material data, are needed for the process simulations which are also becoming increasingly important. The authors aim to give an overview of the basics of rheology and summarize examples of studies where rheology has been utilized in setting up or evaluating extrusion processes. Furthermore, examples of different experimental set-ups available for rheological measurements are presented, discussing their typical application areas, advantages and limitations. Copyright © 2015 Elsevier B.V. All rights reserved.
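The abstract advocates rheological characterization as input to process simulation without naming particular models. As one common example, here is a minimal Python sketch fitting the power-law (Ostwald-de Waele) viscosity model to melt-viscosity data; the data points and function names are hypothetical illustrations.

import numpy as np

def fit_power_law(shear_rate, viscosity):
    """Fit eta = K * gamma_dot**(n - 1) by linear regression in log-log space;
    returns (K, n)."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    return float(np.exp(intercept)), float(slope + 1.0)

# Hypothetical melt viscosity of a polymer-API blend (Pa*s versus 1/s)
gamma_dot = np.array([1.0, 10.0, 100.0, 1000.0])
eta = np.array([5000.0, 1800.0, 650.0, 230.0])
K, n = fit_power_law(gamma_dot, eta)
print(f"K = {K:.0f} Pa*s^n, n = {n:.2f}")   # n < 1 indicates shear thinning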
Murphy, Simon; Raisanen, Larry; Moore, Graham; Edwards, Rhiannon Tudor; Linck, Pat; Williams, Nefyn; Ud Din, Nafees; Hale, Janine; Roberts, Chris; McNaish, Elaine; Moore, Laurence
2010-06-18
The benefits to health of a physically active lifestyle are well established and there is evidence that a sedentary lifestyle plays a significant role in the onset and progression of chronic disease. Despite a recognised need for effective public health interventions encouraging sedentary people with a medical condition to become more active, there are few rigorous evaluations of their effectiveness. Following NICE guidance, the Welsh national exercise referral scheme was implemented within the context of a pragmatic randomised controlled trial. The randomised controlled trial, with nested economic and process evaluations, recruited 2,104 inactive men and women aged 16+ with coronary heart disease (CHD) risk factors and/or mild to moderate depression, anxiety or stress. Participants were recruited from 12 local health boards in Wales and referred directly by health professionals working in a range of health care settings. Consenting participants were randomised to either a 16 week tailored exercise programme run by qualified exercise professionals at community sports centres (intervention), or received an information booklet on physical activity (control). A range of validated measures assessing physical activity, mental health, psycho-social processes and health economics were administered at 6 and 12 months, with the primary 12 month outcome measure being 7 day Physical Activity Recall. The process evaluation explored factors determining the effectiveness or otherwise of the scheme, whilst the economic evaluation determined the relative cost-effectiveness of the scheme in terms of public spending. Evaluation of such a large scale national public health intervention presents methodological challenges in terms of trial design and implementation. This study was facilitated by early collaboration with social research and policy colleagues to develop a rigorous design which included an innovative approach to patient referral and trial recruitment, a comprehensive process evaluation examining intervention delivery and an integrated economic evaluation. This will allow a unique insight into the feasibility, effectiveness and cost effectiveness of a national exercise referral scheme for participants with CHD risk factors or mild to moderate anxiety, depression, or stress and provides a potential model for future policy evaluations. Current Controlled Trials ISRCTN47680448.
Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions
NASA Astrophysics Data System (ADS)
Jiang, W.; Giroux, E.; Roth, H.; Yin, D.
2004-05-01
Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results. On the measurement side, using different assumed factors to convert measured organic carbon to organic aerosol concentrations causes substantial variations in the processed ambient data themselves, which are normally used as performance targets for model evaluations. The combination of uncertainties in the modeling results and in the moving performance targets causes major uncertainties in the final conclusion about the model performance. Without further information, the best thing that a modeler can do is to choose a combination of the assumed values from the sensible parameter ranges available in the literature, based on the best match of the modeling results with the processed measurement data. However, the best match of the modeling results with the processed measurement data may not necessarily guarantee that the model itself is rigorous and the model performance is robust. Conclusions on the model performance can only be reached with sufficient understanding of the uncertainties and their impact.
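The two embedded assumptions can be made concrete with a small sensitivity sketch. The temperature adjustment of the partitioning coefficient below uses the Clausius-Clapeyron form commonly applied in SOA partitioning modules; all numbers (reference coefficient, temperatures, enthalpy values, OM/OC factors) are hypothetical illustrations, not the paper's actual CMAQ configuration.

import numpy as np

R = 8.314  # J mol-1 K-1

def kp_at_temperature(kp_ref, t, t_ref=298.0, dh_vap=42e3):
    """Kp(T) = Kp(Tref) * (T/Tref) * exp[(dHvap/R) * (1/T - 1/Tref)];
    dh_vap is the assumed enthalpy of vaporization in J/mol."""
    return kp_ref * (t / t_ref) * np.exp(dh_vap / R * (1.0 / t - 1.0 / t_ref))

kp_298 = 0.02  # m3/ug at 298 K, hypothetical
for dh in (42e3, 79e3, 156e3):   # span of enthalpy values found in the literature
    print(f"{dh/1e3:.0f} kJ/mol -> Kp(283 K) = {kp_at_temperature(kp_298, 283.0, dh_vap=dh):.4f}")

# Measurement-side assumption: organic aerosol = (OM/OC factor) * organic carbon
oc = 3.0  # ug/m3, hypothetical measured organic carbon
print([round(f * oc, 1) for f in (1.4, 1.6, 2.1)])  # the performance target moves with the factor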
ERIC Educational Resources Information Center
Filby, Nikola N.
The development and refinement of the measures of student achievement in reading and mathematics for the Beginning Teacher Evaluation Study are described. The concept of reactivity to instruction is introduced: the tests used to evaluate instructional processes must be sensitive indicators of classroom learning overtime. Data collection activities…
The Development of a Practical and Reliable Assessment Measure for Atopic Dermatitis (ADAM).
ERIC Educational Resources Information Center
Charman, Denise; Varigos, George; Horne, David J. de L.; Oberklaid, Frank
1999-01-01
A study was conducted in Australia to develop a reliable, valid, and practical measure of atopic dermatitis. The test development process and validity evaluation with two doctors and 51 patients are discussed. Results suggest that operational definitions of the scales need to be defined more clearly. The measure satisfies assumptions for a partial…
Evaluation of mixed hardwood studs manufactured by the Saw-Dry-Rip (SDR) process
R. R. Maeglin; R. S. Boone
1985-01-01
This paper describes increment cores (a useful tool in forestry and wood technology) and their uses which include age determination, growth increment, specific gravity determination, fiber length measurements, fibril angle measurements, cell measurements, and pathological investigations. Also described is the use and care of the increment borer which is essential in...
Koskinen, Heli I
2010-01-01
The Faculty of Veterinary Medicine at the University of Helsinki recognized the lack of systems to measure the quality of education. At the department level, this meant lack of systems to measure the quality of students' outcomes. The aim of this article was to compare the quality of outcomes of a final examination in veterinary radiology by calculating the correlations between traditional (quantitative scores traditionally given by veterinary teachers) and nontraditional (qualitative Structure of the Observed Learning Outcome, or SOLO, method) grading results. Evaluation of the quality of the questions is also included. The results indicate that SOLO offers criteria for quality evaluation, especially for questions. A correlation of 0.60 (p<0.01) existed between qualitative and quantitative estimations, and a correlation of 0.79 (p<0.01) existed between evaluators, both using traditional scores. Two suggestions for a better system to evaluate quality in the future: First, development of problem-solving skills during the learning process should also be assessed. Second, both the scoring of factual correctness of answers (knowledge) and the grammatical structure of an answer and the quality of presentation should be included in the quality evaluation process.
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
The purpose of this program is to demonstrate the technical readiness of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the 1986 price goal of $0.70 or less per peak watt. Program efforts included: preliminary design review, preliminary cell fabrication using the proposed process sequence, verification of sandblasting back cleanup, study of resist parameters, evaluation of pull strength of the proposed metallization, measurement of contact resistance of electroless Ni contacts, optimization of process parameters, design of the MEPSDU module, identification and testing of insulator tapes, development of a lamination process sequence, identification, discussions, demonstrations and visits with candidate equipment vendors, and evaluation of proposals for a tabbing and stringing machine.
ERIC Educational Resources Information Center
Badger, Elizabeth
1992-01-01
Explains a set of processes that teachers might use to structure their evaluation of students' learning and understanding. Illustrates the processes of setting goals, deciding what to assess, gathering information, and using the results through a measurement task requiring students to estimate the number of popcorn kernels in a container. (MDH)
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
Kahn, Jeremy M; Gould, Michael K; Krishnan, Jerry A; Wilson, Kevin C; Au, David H; Cooke, Colin R; Douglas, Ivor S; Feemster, Laura C; Mularski, Richard A; Slatore, Christopher G; Wiener, Renda Soylemez
2014-05-01
Many health care performance measures are either not based on high-quality clinical evidence or not tightly linked to patient-centered outcomes, limiting their usefulness in quality improvement. In this report we summarize the proceedings of an American Thoracic Society workshop convened to address this problem by reviewing current approaches to performance measure development and creating a framework for developing high-quality performance measures by basing them directly on recommendations from well-constructed clinical practice guidelines. Workshop participants concluded that ideally performance measures addressing care processes should be linked to clinical practice guidelines that explicitly rate the quality of evidence and the strength of recommendations, such as the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) process. Under this framework, process-based performance measures would only be developed from strong recommendations based on high- or moderate-quality evidence. This approach would help ensure that clinical processes specified in performance measures are both of clear benefit to patients and supported by strong evidence. Although this approach may result in fewer performance measures, it would substantially increase the likelihood that quality-improvement programs based on these measures actually improve patient care.
Moore, Lynne; Lavoie, André; Bourgeois, Gilles; Lapointe, Jean
2015-06-01
According to Donabedian's health care quality model, improvements in the structure of care should lead to improvements in clinical processes that should in turn improve patient outcome. This model has been widely adopted by the trauma community but has not yet been validated in a trauma system. The objective of this study was to assess the performance of an integrated trauma system in terms of structure, process, and outcome and evaluate the correlation between quality domains. Quality of care was evaluated for patients treated in a Canadian provincial trauma system (2005-2010; 57 centers, n = 63,971) using quality indicators (QIs) developed and validated previously. Structural performance was measured by transposing on-site accreditation visit reports onto an evaluation grid according to American College of Surgeons criteria. The composite process QI was calculated as the average sum of proportions of conformity to 15 process QIs derived from literature review and expert opinion. Outcome performance was measured using risk-adjusted rates of mortality, complications, and readmission as well as hospital length of stay (LOS). Correlation was assessed with Pearson's correlation coefficients. Statistically significant correlations were observed between structure and process QIs (r = 0.33), and process and outcome QIs (r = -0.33 for readmission, r = -0.27 for LOS). Significant positive correlations were also observed between outcome QIs (r = 0.37 for mortality-readmission; r = 0.39 for mortality-LOS and readmission-LOS; r = 0.45 for mortality-complications; r = 0.34 for readmission-complications; 0.63 for complications-LOS). Significant correlations between quality domains observed in this study suggest that Donabedian's structure-process-outcome model is a valid model for evaluating trauma care. Trauma centers that perform well in terms of structure also tend to perform well in terms of clinical processes, which in turn has a favorable influence on patient outcomes. Prognostic study, level III.
Inconclusive quantum measurements and decisions under uncertainty
NASA Astrophysics Data System (ADS)
Yukalov, Vyacheslav; Sornette, Didier
2016-04-01
We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.
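The decomposition described above is usually written as follows in Quantum Decision Theory; the normalization conditions and the quarter-law estimate of the attraction factor under a non-informative prior are standard in that literature rather than quoted from this abstract.

p(\pi_n) \;=\; f(\pi_n) + q(\pi_n), \qquad
\sum_n f(\pi_n) = 1, \qquad \sum_n q(\pi_n) = 0, \qquad
\overline{|q(\pi_n)|} \;\approx\; \tfrac{1}{4}

Here f is the rational utility factor, q is the attraction factor capturing subconscious attitudes, and the 1/4 value is the typical attraction magnitude implied by a non-informative prior.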
NASA Astrophysics Data System (ADS)
Yiran, P.; Li, J.; von Salzen, K.; Dai, T.; Liu, D.
2014-12-01
Mineral dust is a significant contributor to the global and Asian aerosol burden. Currently, large uncertainties still exist in simulated aerosol processes in global climate models (GCMs), which lead to a diversity in dust mass loading and spatial distribution among GCM projections. In this study, satellite measurements from CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) and observed aerosol data from Asian stations are compared with modelled aerosol in the Canadian Atmospheric Global Climate Model (CanAM4.2). Both seasonal and annual variations in Asian dust distribution are investigated. The vertical profile of simulated aerosol in the troposphere is evaluated against CALIOP Level 3 products and locally observed extinction for dust and total aerosols. Physical processes in the GCM such as horizontal advection, vertical mixing, and dry and wet removal are analyzed using model simulations and available aerosol measurements. This work aims to improve current understanding of Asian dust transport and vertical exchange on a large scale, which may help to increase the accuracy of GCM aerosol simulations.
Quality of Care Measures for the Management of Unhealthy Alcohol Use
Hepner, Kimberly A.; Watkins, Katherine E.; Farmer, Carrie M.; Rubenstein, Lisa; Pedersen, Eric R.; Pincus, Harold Alan
2017-01-01
There is a paucity of quality measures to assess the care for the range of unhealthy alcohol use, ranging from risky drinking to alcohol use disorders. Using a two-phase expert panel review process, we sought to develop an expanded set of quality of care measures for unhealthy alcohol use, focusing on outpatient care delivered in both primary care and specialty care settings. This process generated 25 candidate measures. Eight measures address screening and assessment, 11 address aspects of treatment, and six address follow-up. These quality measures represent high priority targets for future development, including creating detailed technical specifications and pilot testing them to evaluate their utility in terms of feasibility, reliability, and validity. PMID:28340902
1982-06-01
[Only fragments of this report's abstract were extracted.] ... Time or cost factors sometimes preclude the use of product measures, leaving measures of task process as the only ... Keywords: training effectiveness evaluation; product evaluation; air defense training; training requirements. The TE system described in this report incorporates the principles of instructional system development and provides for both product evaluation ...
1977-08-01
[Only fragments of this report's abstract were extracted.] ... major emphasis was on the self-understanding of one's interpersonal behavior and attitudes, and how they impacted on interpersonal relationships. ... in behavior and attitudes related to increased interpersonal effectiveness? The second part of the study focuses on the relationship of specific ... process measures are discussed below. Outcome Measurement: two basic instruments were used to assess change from pre-laboratory attitudes and behavior ...
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
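The abstract ranks pipelines by prediction accuracy and statistical parametric image (SPI) reproducibility. In the spirit of those two metrics (not the NPAIRS code itself), a minimal Python sketch on synthetic data:

import numpy as np

def classification_accuracy(y_true, y_pred):
    """Fraction of test scans whose experimental condition is predicted correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def spi_reproducibility(spi_a, spi_b):
    """Pearson correlation of two SPIs computed on independent split-halves."""
    return float(np.corrcoef(np.ravel(spi_a), np.ravel(spi_b))[0, 1])

print(classification_accuracy([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))   # 0.8
rng = np.random.default_rng(0)
spi1 = rng.normal(size=1000)                    # synthetic voxel-wise statistics
spi2 = spi1 + rng.normal(scale=0.5, size=1000)  # noisy second split-half
print(round(spi_reproducibility(spi1, spi2), 2))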
Development of the Clinical Teaching Effectiveness Questionnaire in the United States.
Wormley, Michelle E; Romney, Wendy; Greer, Anna E
2017-01-01
The purpose of this study was to develop a valid measure for assessing clinical teaching effectiveness within the field of physical therapy. The Clinical Teaching Effectiveness Questionnaire (CTEQ) was developed via a 4-stage process, including (1) initial content development, (2) content analysis with 8 clinical instructors with over 5 years of clinical teaching experience, (3) pilot testing with 205 clinical instructors from 2 universities in the Northeast of the United States, and (4) psychometric evaluation, including principal component analysis. The scale development process resulted in a 30-item questionnaire with 4 sections that relate to clinical teaching: learning experiences, learning environment, communication, and evaluation. The CTEQ provides a preliminary valid measure for assessing clinical teaching effectiveness in physical therapy practice.
Comprehensive and Highly Accurate Measurements of Crane Runways, Profiles and Fastenings
Dennig, Dirk; Bureick, Johannes; Link, Johannes; Diener, Dmitri; Hesse, Christian; Neumann, Ingo
2017-01-01
The process of surveying crane runways has been continually refined due to the competitive situation, modern surveying instruments, additional sensors, accessories and evaluation procedures. Guidelines, such as the International Organization for Standardization (ISO) 12488-1, define target values that must be determined by survey. For a crane runway these are, for example, the span and the position and height of the rails. The process has to be objective and reproducible. However, common processes of surveying crane runways do not meet these requirements sufficiently. The evaluation of the protocols, ideally by an expert, requires many years of experience. Additionally, the recording of crucial parameters, e.g., the wear of the rail, or the condition of the rail fastening and rail joints, is not regulated and for that reason these are often not considered during the measurement. To address this deficit the Advanced Rail Track Inspection System (ARTIS) was developed. ARTIS is used to measure the 3D position of crane rails, the cross-section of the crane rails, joints and, for the first time, the (crane-rail) fastenings. The system consists of a monitoring vehicle and an external tracking sensor. It makes kinematic observations possible, with the tracking sensor located outside the rail run, e.g., on the floor of an overhead crane runway. In this paper we present stages of the development process of ARTIS, new target values, calibration of sensors and results of a test measurement. PMID:28505076
Park, Heeyoung; Lombardino, Linda J
2013-09-01
Processing speed deficits along with phonological awareness deficits have been identified as risk factors for dyslexia. This study was designed to examine the behavioral profiles of two groups, a younger (6-8 years) and an older (10-15 years) group of dyslexic children, for the purposes of (1) evaluating the degree to which phonological awareness and processing speed deficits occur in the two developmental cohorts; (2) determining the strength of relationships between the groups' respective mean scores on cognitive tasks of phonological awareness and processing speed and their scores on component skills of reading; and (3) evaluating the degree to which phonological awareness and processing speed serve as concurrent predictors of component reading skills for each group. The mean scaled scores for both groups were similar on all but one processing speed task. The older group was significantly more depressed on a visual matching test of attention, scanning, and speed. Correlations between reading skills and the cognitive constructs were very similar for both age-groups. Neither of the two phonological awareness tasks correlated with either of the two processing speed tasks or with any of the three measures of reading. One of the two processing speed measures served as a concurrent predictor of word- and text-level reading in the younger group; however, only the rapid naming measure functioned as a concurrent predictor of word reading in the older group. Conversely, phonological processing measures did not serve as concurrent predictors for word-level or text-level reading in either of the groups. Descriptive analyses of individual subjects' deficits in the domains of phonological awareness and processing speed revealed that (1) both linguistic and nonlinguistic processing speed deficits in the younger dyslexic children occurred at higher rates than deficits in phonological awareness and (2) cognitive deficits within and across these two domains were greater in the older dyslexic children. Our findings underscore the importance of using rapid naming measures when testing school-age children suspected of having a reading disability and suggest that processing speed measures that do not rely on verbal responses may serve as predictors of reading disability in young children prior to their development of naming automaticity. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Kupersmidt, Janis B.; Stelter, Rebecca; Dodge, Kenneth A.
2011-01-01
The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and…
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. Comparison of the evaluated tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Cognitive processing of visual images in migraine populations in between headache attacks.
Mickleborough, Marla J S; Chapman, Christine M; Toma, Andreea S; Handy, Todd C
2014-09-25
People with migraine headache have altered interictal visual sensory-level processing in between headache attacks. Here we examined the extent to which these migraine abnormalities may extend into higher visual processing such as implicit evaluative analysis of visual images in between migraine events. Specifically, we asked two groups of participants--migraineurs (N=29) and non-migraine controls (N=29)--to view a set of unfamiliar commercial logos in the context of a target identification task as the brain electrical responses to these objects were recorded via event-related potentials (ERPs). Following this task, participants individually identified those logos that they most liked or disliked. We applied a between-groups comparison of how ERP responses to logos varied as a function of hedonic evaluation. Our results suggest migraineurs have abnormal implicit evaluative processing of visual stimuli. Specifically, migraineurs lacked a bias for disliked logos found in control subjects, as measured via a late positive potential (LPP) ERP component. These results suggest post-sensory consequences of migraine in between headache events, specifically abnormal cognitive evaluative processing with a lack of normal categorical hedonic evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
Evaluation of security algorithms used for security processing on DICOM images
NASA Astrophysics Data System (ADS)
Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.
2005-04-01
In this paper, we developed a security approach to provide security measures and features in PACS image acquisition and tele-radiology image transmission. The security processing of medical images was based on public key infrastructure (PKI) and included digital signatures and data encryption to achieve the security features of confidentiality, privacy, authenticity, integrity, and non-repudiation. There are many algorithms which can be used in PKI for data encryption and digital signatures. In this research, we select several algorithms to perform security processing on different DICOM images in a PACS environment, evaluate the security processing performance of these algorithms, and examine the relationship between performance and image type, size, and implementation method.
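The abstract does not state which algorithms were benchmarked; purely to illustrate the two operations involved (digital signature for authenticity/integrity/non-repudiation, symmetric encryption for confidentiality), here is a minimal sketch using the Python cryptography package. The file path image.dcm is a hypothetical placeholder, and the algorithm choices (RSA-PSS/SHA-256, Fernet) are illustrative, not the paper's selection.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

dicom_bytes = open("image.dcm", "rb").read()   # hypothetical DICOM file

# Digital signature: sign with the private key, verify with the public key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(dicom_bytes, pss, hashes.SHA256())
private_key.public_key().verify(signature, dicom_bytes, pss, hashes.SHA256())  # raises if tampered

# Symmetric encryption of the image payload
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(dicom_bytes)
assert Fernet(key).decrypt(ciphertext) == dicom_bytes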
Kaluzny, Bartlomiej J.; Szkulmowski, Maciej; Bukowska, Danuta M.; Wojtkowski, Maciej
2014-01-01
We evaluate Spectral OCT (SOCT) with a speckle contrast reduction technique using a resonant scanner for assessment of corneal surface changes after excimer laser photorefractive keratectomy (PRK), and we compare the healing process between conventional PRK and transepithelial PRK (TransPRK). The measurements were performed before and after the surgery. The obtained results show that SOCT with resonant-scanner speckle contrast reduction is capable of providing information regarding the healing process after PRK. The main difference between the healing processes of PRK and TransPRK, assessed by SOCT, was the time to cover the stroma with epithelium, which was shorter in the TransPRK group. PMID:24761291
Adult Learners and Distance Education Evaluation: Implications for Success.
ERIC Educational Resources Information Center
Weigand, Kathy
The telecourse education program created by Barry University in Florida was evaluated by use of a questionnaire designed to measure students' overall satisfaction or dissatisfaction with the telecourse process. The questionnaire contains questions about how students heard about the telecourses, whether services were accommodating, what they liked…
The Impact of In-Prison Therapeutic Community Programs on Prison Management.
ERIC Educational Resources Information Center
Prendergast, Michael; Farabee, David; Cartier, Jerome
2001-01-01
Presents findings of a process evaluation of the California Substance Abuse Treatment Facility. Measures from the evaluation suggest that the presence of a therapeutic community within a prison is associated with significant advantages for management of the institution, including lower rates of infractions, reduced absenteeism among correctional…
PROCEEDINGS: SEMINAR ON IN-STACK PARTICLE SIZING FOR PARTICULATE CONTROL DEVICE EVALUATION
The proceedings document discussions during an EPA/IERL-RTP-sponsored seminar on In-stack Particle Sizing for Particulate Control Device Evaluation. The seminar, organized by IERL-RTP's Process Measurements Branch, was held at IERL-RTP in North Carolina on December 3 and 4, 1975....
The report describes the development and evaluation of a large chamber test method for measuring emissions from dry-process photocopiers. The test method was developed in two phases. Phase 1 was a single-laboratory evaluation at Research Triangle Institute (RTI) using four, mid-r...
Evaluating Cooperative Education Programs.
ERIC Educational Resources Information Center
Alvir, Howard P.
This document defines cooperative education as any form of occupational or professional activity that requires the cooperation of both school and the labor market. In some cases, this might be the school and industry or business. In this process, evaluation is defined as the improvement of learner success through measurement of program components.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-02
... Federal Agency Comments for Collaboration on Evaluating the World Health Organization (WHO) International... the business processes of other Federal agencies and researchers throughout the world. We invite other...-CY reflect WHO's framework for measuring health and disability at both individual and population...
Follow Through Classroom Process Measurement.
ERIC Educational Resources Information Center
Soar, Robert S.
This report presents a portion of the evaluation of the planned variation of Project Follow Through, in which increased understanding of education which is functional for disadvantaged children has been a major concern. This segment of the evaluation has the following two objectives: (1) to describe in behavioral terms the differences among…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald Boring; Roger Lew; Thomas Ulrich
2014-03-01
As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate the operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what is satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.
Screening programme to select a resin for Gravity Probe-B composites
NASA Technical Reports Server (NTRS)
Will, E. T.
1992-01-01
The Gravity Probe-B (GP-B) program undertook a screening program to select a possible replacement resin for the E-787 resin currently used in composite neck tubes and support struts. The goal was to find a resin with good cryogenic and structural properties, low helium permeation, and an easily repeatable fabrication process. Cycom 92, SCI REZ 081 and RS-3 were selected for comparison with E-787. Identical composite tubes made from each resin and gamma-alumina fiber (85 percent Al2O3, 15 percent SiO2) were evaluated for cryogenic and structural performance and for processability. Cryogenic performance was evaluated by measuring low-temperature permeation and leaks to determine cryogenic strain behavior. Structural performance was evaluated by comparing the resin-dominated shear strength of the composites. Processability was evaluated from fabrication comments and GP-B's own experience. SCI REZ 081 was selected as the best overall resin with superior strength and cryogenic performance and consistent processability.
ERIC Educational Resources Information Center
Kentucky State Dept. of Libraries, Frankfort.
This document is the beginning of a process. The objectives of the process are to improve decisions between alternative choices in the development of statewide library services. Secondary functions are to develop the tools for providing information relevant to decisions, to measure and monitor services, and to aid in the communication process. The…
Kazis, Lewis E; Sheridan, Robert L; Shapiro, Gabriel D; Lee, Austin F; Liang, Matthew H; Ryan, Colleen M; Schneider, Jeffrey C; Lydon, Martha; Soley-Bori, Marina; Sonis, Lily A; Dore, Emily C; Palmieri, Tina; Herndon, David; Meyer, Walter; Warner, Petra; Kagan, Richard; Stoddard, Frederick J; Murphy, Michael; Tompkins, Ronald G
2018-04-01
There has been little systematic examination of variation in pediatric burn care clinical practices and its effect on outcomes. As a first step, current clinical care processes need to be operationally defined. The highly specialized burn care units of the Shriners Hospitals for Children system present an opportunity to describe the processes of care. The aim of this study was to develop a set of process-based measures for pediatric burn care and examine adherence to them by providers in a cohort of pediatric burn patients. We conducted a systematic literature review to compile a set of process-based indicators. These measures were refined by an expert panel of burn care providers, yielding 36 process-based indicators in four clinical areas: initial evaluation and resuscitation, acute excisional surgery and critical care, psychosocial and pain control, and reconstruction and aftercare. We assessed variability in adherence to the indicators in a cohort of 1,076 children with burns at four regional pediatric burn programs in the Shriners Hospital system. The percentages of the cohort at each of the four sites were as follows: Boston, 20.8%; Cincinnati, 21.1%; Galveston, 36.0%; and Sacramento, 22.1%. The cohort included children who received care between 2006 and 2010. Adherence to the process indicators varied both across sites and by clinical area. Adherence was lowest for the clinical areas of acute excisional surgery and critical care, with a range of 35% to 48% across sites, followed by initial evaluation and resuscitation (range, 34%-60%). In contrast, the clinical areas of psychosocial and pain control and reconstruction and aftercare had relatively high adherence across sites, with ranges of 62% to 93% and 71% to 87%, respectively. Of the 36 process indicators, 89% differed significantly in adherence between clinical sites (p < 0.05). Acute excisional surgery and critical care exhibited the most variability. The development of this set of process-based measures represents an important step in the assessment of clinical practice in pediatric burn care. Substantial variation was observed in practices of pediatric burn care. However, further research is needed to link these process-based measures to clinical outcomes. Therapeutic/care management, level IV.
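As a rough illustration of how adherence and its variation across sites might be tabulated, the sketch below aggregates hypothetical per-patient indicator records with pandas and applies a chi-square test per indicator. The column names, the tiny sample, and the choice of chi-square testing are assumptions for illustration, not the study's dataset or its statistical methodology.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical layout: one row per patient per process indicator.
records = pd.DataFrame({
    "site":          ["Boston", "Boston", "Galveston", "Sacramento", "Cincinnati", "Galveston"],
    "clinical_area": ["acute_surgery"] * 3 + ["psychosocial_pain"] * 3,
    "indicator_id":  ["S01", "S01", "S01", "P04", "P04", "P04"],
    "adherent":      [1, 0, 1, 1, 1, 0],
})

# Percent adherence by clinical area and site (the quantity reported per site).
adherence = (records.groupby(["clinical_area", "site"])["adherent"]
                    .mean().mul(100).round(1))
print(adherence)

# Per-indicator test of whether adherence differs across sites,
# using a chi-square test on the site x adherent contingency table.
for ind, grp in records.groupby("indicator_id"):
    table = pd.crosstab(grp["site"], grp["adherent"])
    if table.shape[0] > 1 and table.shape[1] > 1:
        chi2, p, _, _ = chi2_contingency(table)
        print(f"{ind}: chi2={chi2:.2f}, p={p:.3f}")
```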
The Collaboration Readiness of Transdisciplinary Research Teams and Centers
Hall, Kara L.; Stokols, Daniel; Moser, Richard P.; Taylor, Brandie K.; Thornquist, Mark D.; Nebeling, Linda C.; Ehret, Carolyn C.; Barnett, Matthew J.; McTiernan, Anne; Berger, Nathan A.; Goran, Michael I.; Jeffery, Robert W.
2009-01-01
Growing interest in promoting cross-disciplinary collaboration among health scientists has prompted several federal agencies, including the NIH, to establish large, multicenter initiatives intended to foster collaborative research and training. In order to assess whether these initiatives are effective in promoting scientific collaboration that ultimately results in public health improvements, it is necessary to develop new strategies for evaluating research processes and products as well as the longer-term societal outcomes associated with these programs. Ideally, evaluative measures should be administered over the entire course of large initiatives, including their near-term and later phases. The present study focuses on the development of new tools for assessing the readiness for collaboration among health scientists at the outset (during Year One) of their participation in the National Cancer Institute’s Transdisciplinary Research on Energetics and Cancer (TREC) initiative. Indexes of collaborative readiness, along with additional measures of near-term collaborative processes, were administered as part of the TREC Year-One evaluation survey. Additionally, early progress toward scientific collaboration and integration was assessed, using a protocol for evaluating written research products. Results from the Year-One survey and the ratings of written products provide evidence of cross-disciplinary collaboration among participants during the first year of the initiative, and also reveal opportunities for enhancing collaborative processes and outcomes during subsequent phases of the project. The implications of these findings for future evaluations of team science initiatives are discussed. PMID:18619396
NASA Astrophysics Data System (ADS)
Raeva, P. L.; Filipova, S. L.; Filipov, D. G.
2016-06-01
This paper aims to test and evaluate the accuracy of UAV data for volumetric measurements against conventional GNSS techniques. For this purpose, an appropriate open pit quarry was chosen. Two sets of measurements were performed. First, a stockpile was measured by GNSS technologies, and additional terrestrial GNSS measurements were later taken to model the berms of the quarry. Second, the area of the whole quarry, including the stockpile site, was mapped by a UAV flight. Given how dynamic the field is, new techniques and methods need to be introduced in numerous areas. For instance, the management of an open pit quarry requires acquiring, processing, and storing a large amount of information that is constantly changing with time. Fast and precise acquisition of measurements of the processes taking place in a quarry is the key to effective and stable maintenance. In other words, this means obtaining objective evaluations of the processes, using up-to-date technologies, with reliable accuracy of the results. Legislation concerning mine engineering often requires volumetric calculations to be accurate to within ±3% of the total amount. On the one hand, extremely precise measurements can be performed by GNSS technologies; however, this can be very time consuming. On the other hand, UAV photogrammetry offers a fast, accurate method for mapping large areas and calculating stockpile volumes. The case study was performed as part of a master's thesis.
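As a sketch of the volumetric comparison described, the following computes a stockpile volume from a gridded surface model by summing cell area times height above a base surface, and checks the relative difference against the ±3% criterion. The synthetic DEM, cell size, and the GNSS reference value are placeholders, not data from the thesis.

```python
import numpy as np

def stockpile_volume(surface_dem, base_dem, cell_size_m):
    """Volume between a measured surface DEM and a base/reference DEM
    (equal-shaped 2D elevation grids, metres): each grid cell contributes
    cell_area * height_difference, with cells below the base ignored."""
    heights = np.clip(np.asarray(surface_dem) - np.asarray(base_dem), 0.0, None)
    return float(heights.sum() * cell_size_m ** 2)

# Illustrative comparison against an independently surveyed reference volume.
rng = np.random.default_rng(0)
base = np.zeros((200, 200))                                            # flat quarry floor
uav_dem = np.full((200, 200), 2.0) + rng.normal(0, 0.03, (200, 200))   # ~2 m pile + noise

v_uav = stockpile_volume(uav_dem, base, cell_size_m=0.5)
v_gnss = 0.985 * v_uav   # placeholder for the volume obtained from the GNSS survey
rel_diff = abs(v_uav - v_gnss) / v_gnss * 100
print(f"UAV volume {v_uav:.0f} m^3, relative difference from GNSS {rel_diff:.2f}% (limit 3%)")
```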
Vincha, Kellem Regina Rosendo; Vieira, Viviane Laudelino; Guerra, Lúcia Dias da Silva; Botelho, Fernanda Cangussu; Pava-Cárdenas, Alexandra; Cervato-Mancuso, Ana Maria
2017-09-28
The study analyzed the social representations held by primary health care professionals of the evaluative processes of groups that work with food and nutrition, and described the educational strategies used in this care. This was a qualitative study conducted from 2012 to 2014 in the city of São Paulo, Brazil, in which 48 interviews were analyzed. In the analysis of the interviews, Bogdan & Biklen and Zabala were used to classify the educational strategies into learning categories and contents, respectively. The evaluative processes were analyzed using the collective subject discourse technique, based on Jodelet's social representations. Three learning contents were found in the educational strategies, along with four social representations of the evaluative processes, which together revealed a conflict between a practice directed by the work process toward quantitative, individual evaluative criteria and a health-promoting practice that used inclusive approaches and participant evaluation. In the latter practice, the study implicitly identified the presence of autonomy in health. The study revealed the need to acknowledge and systematize group planning as an educational tool that qualifies and empowers comprehensive care.
Semiconductor technology program: Progress briefs
NASA Technical Reports Server (NTRS)
Galloway, K. F.; Scace, R. I.; Walters, E. J.
1981-01-01
Measurement technology for semiconductor materials, process control, and devices is discussed. Silicon and silicon-based devices are emphasized. Highlighted activities include semi-insulating GaAs characterization, an automatic scanning spectroscopic ellipsometer, linewidth measurement and coherence, bandgap narrowing effects in silicon, the evaluation of electrical linewidth uniformity, and arsenic-implanted profiles in silicon.
Mobile Air Monitoring Data Processing Strategies and Effects on Spatial Air Pollution Trends
The collection of real-time air quality measurements while in motion (i.e., mobile monitoring) is currently conducted worldwide to evaluate in situ emissions, local air quality trends, and air pollutant exposure. This measurement strategy pushes the limits of traditional data an...
Determining and Communicating the Value of the Special Library.
ERIC Educational Resources Information Center
Matthews, Joseph R.
2003-01-01
Discusses performance measures for libraries that will indicate the goodness of the library and its services. Highlights include a general evaluation model that includes input, process, output, and outcome measures; balanced scorecard approach that includes financial perspectives; focusing on strategy; strategies for change; user criteria for…
Silicon solar cell process. Development, fabrication and analysis
NASA Technical Reports Server (NTRS)
Yoo, H. I.; Iles, P. A.; Tanner, D. P.
1978-01-01
Solar cells were fabricated from unconventional silicon sheets, and their performance was characterized with an emphasis on statistical evaluation. A number of solar cell fabrication processes were used, and conversion efficiency was measured under AM0 conditions at 25 °C. Silso solar cells using standard processing showed an average efficiency of about 9.6%. Solar cells with a back surface field process showed about the same efficiency as cells from the standard process. Solar cells from the grain boundary passivation process did not show any improvements in solar cell performance.
Future of Assurance: Ensuring that a System is Trustworthy
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Verbauwhede, Ingrid; Vishik, Claire
Significant efforts are put into defining and implementing strong security measures for all components of the computing environment. It is equally important to be able to evaluate the strength and robustness of these measures and establish trust among the components of the computing environment based on parameters and attributes of these elements and best practices associated with their production and deployment. Today the inventory of techniques used for security assurance and to establish trust -- audit, security-conscious development process, cryptographic components, external evaluation -- is somewhat limited. These methods have their indisputable strengths and have contributed significantly to the advancement in the area of security assurance. However, shorter product and technology development cycles and the sheer complexity of modern digital systems and processes have begun to decrease the efficiency of these techniques. Moreover, these approaches and technologies address only some aspects of security assurance and, for the most part, evaluate assurance in a general design rather than an instance of a product. Additionally, various components of the computing environment participating in the same processes enjoy different levels of security assurance, making it difficult to ensure adequate levels of protection end-to-end. Finally, most evaluation methodologies rely on the knowledge and skill of the evaluators, making reliable assessments of trustworthiness of a system even harder to achieve. The paper outlines some issues in security assurance that apply across the board, with a focus on the trustworthiness and authenticity of hardware components, and evaluates current approaches to assurance.
Quantifying the process and outcomes of person-centered planning.
Holburn, S; Jacobson, J W; Vietze, P M; Schwartz, A A; Sersen, E
2000-09-01
Although person-centered planning is a popular approach in the field of developmental disabilities, there has been little systematic assessment of its process and outcomes. To measure person-centered planning, we developed three instruments designed to assess its various aspects. We then constructed variables comprising both a Process and an Outcome Index using a combined rational-empirical method. Test-retest reliability and measures of internal consistency appeared adequate. Variable correlations and factor analysis were generally consistent with our conceptualization and resulting item and variable classifications. Practical implications for intervention integrity, program evaluation, and organizational performance are discussed.
Benefits of adaptive FM systems on speech recognition in noise for listeners who use hearing aids.
Thibodeau, Linda
2010-06-01
To compare the benefits of adaptive FM and fixed FM systems through measurement of speech recognition in noise with adults and students in clinical and real-world settings. Five adults and 5 students with moderate-to-severe hearing loss completed objective and subjective speech recognition in noise measures with the 2 types of FM processing. Sentence recognition was evaluated in a classroom for 5 competing noise levels ranging from 54 to 80 dBA while the FM microphone was positioned 6 in. from the signal loudspeaker to receive input at 84 dB SPL. The subjective measures included 2 classroom activities and 6 auditory lessons in a noisy, public aquarium. On the objective measures, adaptive FM processing resulted in significantly better speech recognition in noise than fixed FM processing for 68- and 73-dBA noise levels. On the subjective measures, all individuals preferred adaptive over fixed processing for half of the activities. Adaptive processing was also preferred by most (8-9) individuals for the remaining 4 activities. The adaptive FM processing resulted in significant improvements at the higher noise levels and was preferred by the majority of participants in most of the conditions.
Facilities | Transportation Research | NREL
Detailed chemical characterization, performance property measurements, and stability research. This off-network data center provides secure management, storage, and processing
NASA Astrophysics Data System (ADS)
Guo, X.; Wu, Z.; Lv, C.
2017-12-01
Water utilization benefits are formed by the material flow, energy flow, information flow, and value stream in the whole water cycle process, and are reflected in the material circulation within the system. However, most traditional evaluations of water utilization benefits operate at the macro level: they consider only overall material input and output and energy conversion, and lack a characterization, grounded in the formation mechanism, of the benefits that accompany the water cycle process. In addition, most studies take an economic perspective, attending only to overall economic output and the economic investment in sewage treatment while neglecting the ecological function benefits of the water cycle. Therefore, from the perspective of internal material circulation within the whole system, and treating the water cycle as a process of material circulation and energy flow, this study describes the circulation and flow of water together with the ecological, environmental, and socioeconomic elements, explores the composition of positive and negative water utilization benefits in the water-ecological-economic system, and analyzes the performance of each benefit. On this basis, an emergy calculation method for each benefit is proposed using emergy quantitative analysis, which enables unified measurement and evaluation of water utilization benefits in the water-ecological-economic system. Taking Zhengzhou city as an example, the benefits of the different water cycle links were then calculated quantitatively by the emergy method. The results show that the emergy evaluation method for water utilization benefits can unify the ecosystem and the economic system, achieve uniform quantitative analysis, and comprehensively measure the true value of natural resources and human economic activities.
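A minimal sketch of the unified emergy accounting idea: each positive or negative benefit flow is converted to a common solar-emjoule basis by multiplying the raw flow by a transformity (unit emergy value), and the converted flows are summed. The flows and coefficients below are placeholders, not those used for the Zhengzhou case study.

```python
# Illustrative emergy aggregation: each benefit flow (water supplied, economic output,
# untreated wastewater, ...) is converted to solar emjoules (sej) by multiplying the
# raw annual flow by a transformity. All figures are placeholders.
FLOWS = {
    # name:                 (annual amount, unit,  transformity in sej per unit)
    "municipal water use":  (1.2e9,         "m3",  6.6e5),
    "industrial output":    (4.0e10,        "CNY", 4.5e12),
    "wastewater discharge": (3.5e8,         "m3", -1.2e6),   # negative (dis)benefit
}

def total_emergy(flows):
    """Sum positive and negative water-utilization benefits on a common emergy basis."""
    positive = sum(amount * t for amount, _, t in flows.values() if t > 0)
    negative = sum(amount * t for amount, _, t in flows.values() if t < 0)
    return positive, negative, positive + negative

pos, neg, net = total_emergy(FLOWS)
print(f"positive benefits {pos:.2e} sej, negative {neg:.2e} sej, net {net:.2e} sej")
```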
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lah, J; Shin, D; Kim, G
Purpose: To show how tolerance design and tolerancing approaches can be used to predict and improve the site-specific range in the patient QA process when implementing Six Sigma. Methods: In this study, patient QA plans were selected according to 6 site-treatment groups: head & neck (94 cases), spine (76 cases), lung (89 cases), liver (53 cases), pancreas (55 cases), and prostate (121 cases), treated between 2007 and 2013. We evaluated a Six Sigma model that determines allowable deviations in design parameters and process variables in patient-specific QA; where possible, tolerance may be loosened and then customized if necessary to meet the functional requirements. The Six Sigma problem-solving methodology is known as DMAIC, whose phases stand for: Define a problem or improvement opportunity, Measure process performance, Analyze the process to determine the root causes of poor performance, Improve the process by fixing root causes, and Control the improved process to hold the gains. Results: The process capability for patient-specific range QA is 0.65 with a tolerance criterion of only ±1 mm. Our results suggest a tolerance level of ±2–3 mm for prostate and liver cases and ±5 mm for lung cases. We found that a customized tolerance between calculated and measured range reduces patient QA plan failures, and almost all sites had failure rates less than 1%. The average QA time also improved from 2 hr to less than 1 hr for the full workflow, including the planning and conversion process, depth-dose measurement, and evaluation. Conclusion: The objective of tolerance design is to achieve optimization beyond that obtained through QA process improvement and statistical analysis, detailing how to implement a Six Sigma-capable design.
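The capability figures quoted above follow from the standard process-capability definitions, Cp = (USL - LSL) / (6σ) and Cpk = min(USL - μ, μ - LSL) / (3σ). The sketch below applies these formulas to synthetic range deviations to show how capability changes as the tolerance is widened; the deviation data are illustrative, not the measured ranges from the study.

```python
import numpy as np

def process_capability(deviations_mm, lsl, usl):
    """Cp and Cpk for range QA deviations (measured minus calculated range, mm)
    against a tolerance interval [lsl, usl]."""
    mu, sigma = np.mean(deviations_mm), np.std(deviations_mm, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative deviations for one site group (synthetic values, not study data).
rng = np.random.default_rng(1)
deviations = rng.normal(loc=0.2, scale=0.5, size=120)

for tol in (1.0, 2.0, 3.0):          # candidate tolerances of ±1, ±2, ±3 mm
    cp, cpk = process_capability(deviations, -tol, tol)
    print(f"tolerance ±{tol} mm: Cp={cp:.2f}, Cpk={cpk:.2f}")
```

Widening the tolerance from ±1 mm toward ±2–3 mm raises the capability indices for the same deviation spread, which is the logic behind customizing tolerances per treatment site.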
NASA Technical Reports Server (NTRS)
Tilmes, Curt A.; Fleig, Albert J.
2008-01-01
NASA's traditional science data processing systems have focused on specific missions, providing data access, processing, and services to the funded science teams of those specific missions. Recently NASA has been modifying this stance, changing the focus from Missions to Measurements. Where a specific Mission has a discrete beginning and end, the Measurement considers long-term data continuity across multiple missions. Total Column Ozone, a critical measurement of atmospheric composition, has been monitored for decades on a series of Total Ozone Mapping Spectrometer (TOMS) instruments. Some important European missions also monitor ozone, including the Global Ozone Monitoring Experiment (GOME) and SCIAMACHY. With the U.S./European cooperative launch of the Dutch Ozone Monitoring Instrument (OMI) on NASA's Aura satellite, and the GOME-2 instrument on MetOp, the ozone monitoring record has been further extended. In conjunction with the U.S. Department of Defense (DoD) and the National Oceanic and Atmospheric Administration (NOAA), NASA is now preparing to evaluate data and algorithms for the next-generation Ozone Mapping and Profiler Suite (OMPS), which will launch on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) in 2010. NASA is constructing the Science Data Segment (SDS), which comprises several elements to evaluate the various NPP data products and algorithms. The NPP SDS Ozone Product Evaluation and Test Element (PEATE) will build on the heritage of the TOMS and OMI mission-based processing systems. The overall measurement-based system that will encompass these efforts is the Atmospheric Composition Processing System (ACPS). We have extended the system to include access to publicly available data sets from other instruments where feasible, including non-NASA missions as appropriate. The heritage system was largely monolithic, providing a very controlled processing flow from ingest of satellite data to the ultimate archive of specific operational data products. The ACPS allows more open access with standard protocols including HTTP, SOAP/XML, RSS, and various REST incarnations. External entities can be granted access to various modules within the system, including an extended data archive, metadata searching, production planning, and processing. Data access is provided with very fine-grained access control. It is possible to easily designate certain datasets as being available to the public, or restricted to groups of researchers, or limited strictly to the originator. This can be used, for example, to release one's best validated data to the public, but restrict the "new version" of data processed with a new, unproven algorithm until it is ready. Similarly, the system can provide access to algorithms, both as modifiable source code (where possible) and fully integrated executable Algorithm Plugin Packages (APPs). This enables researchers to download publicly released versions of the processing algorithms and easily reproduce the processing remotely, while interacting with the ACPS. The algorithms can be modified, allowing better experimentation and rapid improvement. The modified algorithms can be easily integrated back into the production system for large-scale bulk processing to evaluate improvements. The system includes complete provenance tracking of algorithms, data, and the entire processing environment.
The origin of any data or algorithms is recorded, and the entire history of the processing chains is stored such that a researcher can understand the entire data flow. Provenance is captured in a form suitable for the system to guarantee scientific reproducibility of any data product it distributes, even in cases where the physical data products themselves have been deleted due to space constraints. We are currently working on Semantic Web ontologies for representing the various provenance information. A new web site focusing on consolidating information about the measurement, processing system, and data access has been established to encourage interaction with the overall scientific community. We will describe the system, its data processing capabilities, and the methods the community can use to interact with the standard interfaces of the system.
Validity and Reliability Study of the Turkish Version of Ego Identity Process Questionairre
ERIC Educational Resources Information Center
Morsünbül, Ümit; Atak, Hasan
2013-01-01
The main developmental task is identity development in adolescence period. Marcia defined four identity statuses based on exploration and commitment process: Achievement, moratorium, foreclosure and diffusion. Certain scales were developed to measure identity development. Another questionnaire that evaluates both four identity statuses and the…
New Developments in Developmental Research on Social Information Processing and Antisocial Behavior
ERIC Educational Resources Information Center
Fontaine, Reid Griffith
2010-01-01
The Special Section on developmental research on social information processing (SIP) and antisocial behavior is here introduced. Following a brief history of SIP theory, comments on several themes--measurement and assessment, attributional and interpretational style, response evaluation and decision, and the relation between emotion and SIP--that…
Does a Regional Accent Perturb Speech Processing?
ERIC Educational Resources Information Center
Floccia, Caroline; Goslin, Jeremy; Girard, Frederique; Konopczynski, Gabrielle
2006-01-01
The processing costs involved in regional accent normalization were evaluated by measuring differences in lexical decision latencies for targets placed at the end of sentences with different French regional accents. Over a series of 6 experiments, the authors examined the time course of comprehension disruption by manipulating the duration and…
2006-09-01
Scully, M., Van Maanen, J., & Westney, D. (2005). Managing for the Future: Organizational Behavior & Processes. Mason, OH: South-Western College...equipment, facilities and, with increasing importance, the resources of information and expertise (Ancona, D., Kochan, T., Scully, M., Van Maanen, J
USDA-ARS?s Scientific Manuscript database
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the l...
Sentence Processing Factors in Adults with Specific Language Impairment
ERIC Educational Resources Information Center
Poll, Gerard H.
2012-01-01
Sentence imitation effectively discriminates between adults with and without specific language impairment (SLI). Little is known, however, about the factors that result in performance differences. This study evaluated the effects of working memory, processing speed, and argument status on sentence imitation. Working memory was measured by both a…
USDA-ARS?s Scientific Manuscript database
In this study, an innovative emulsion made from soybean and navy bean blends of different proportionalities was developed. In addition, two processing methods were evaluated: traditional cooking and jet-cooking. The physical attributes and storage stability were measured and compared. This study fou...
Using Teacher Effectiveness Data for Information-Rich Hiring
ERIC Educational Resources Information Center
Cannata, Marisa; Rubin, Mollie; Goldring, Ellen; Grissom, Jason A.; Neumerski, Christine M.; Drake, Timothy A.; Schuermann, Patrick
2017-01-01
Purpose: New teacher effectiveness measures have the potential to influence how principals hire teachers as they provide new and richer information about candidates to a traditionally information-poor process. This article examines how the hiring process is changing as a result of teacher evaluation reforms. Research Methods: Data come from…
[Quality assessment in anesthesia].
Kupperwasser, B
1996-01-01
Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods, and are closely related to processes and the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, which establishes an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, which, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, examines all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, are the third step of the evaluation cycle. The Hawthorne effect is an improvement in outcomes occurring before the implementation of any corrective actions. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.
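As an example of the quantitative methods mentioned above (control charts), the sketch below computes the centre line and 3-sigma limits for a proportion (p) chart of monthly incident rates. The monthly counts are hypothetical, and the p-chart is only one of the chart types the authors list.

```python
import numpy as np

def p_chart_limits(defect_counts, case_counts):
    """Centre line and 3-sigma control limits for a proportion (p) chart,
    e.g. critical-incident reports per anaesthesia case and month."""
    defects = np.asarray(defect_counts, float)
    cases = np.asarray(case_counts, float)
    p_bar = defects.sum() / cases.sum()
    sigma = np.sqrt(p_bar * (1 - p_bar) / cases)     # limits vary with monthly volume
    ucl = p_bar + 3 * sigma
    lcl = np.clip(p_bar - 3 * sigma, 0, None)
    return p_bar, lcl, ucl

# Hypothetical monthly data: incident reports and anaesthesia caseload.
incidents = [4, 6, 3, 9, 5, 4]
cases     = [410, 395, 430, 402, 388, 415]
p_bar, lcl, ucl = p_chart_limits(incidents, cases)
for month, (d, n, lo, hi) in enumerate(zip(incidents, cases, lcl, ucl), start=1):
    rate = d / n
    flag = "out of control" if rate > hi or rate < lo else "in control"
    print(f"month {month}: rate={rate:.3f}, limits=({lo:.3f}, {hi:.3f}) -> {flag}")
```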
Don't judge me: Psychophysiological evidence of gender differences to social evaluative feedback.
Vanderhasselt, Marie-Anne; De Raedt, Rudi; Nasso, Selene; Puttevils, Louise; Mueller, Sven C
2018-05-01
Human beings have a basic need for esteemed social connections, and receiving negative self-evaluative feedback induces emotional distress. The aim of the current study is to measure eye movements (a physiological marker of attention allocation) and pupillary responses (a physiological marker of cognitive and emotional processing) as online and objective indices of participants' reaction to positive/negative social evaluations from the same or opposite sex. Following the paradigm, subjective mood ratings and heart rate variability (HRV), as an objective index of regulatory effort, were measured. Results demonstrate clear gender-specific effects in all measures. Eye movements show that male participants respond more with other-focused attention (and specifically to male participants), whereas women respond more with self-focused attention following negative social evaluative feedback. Pupillary responses show that social evaluative feedback specifically elicits cognitive/affective processes in male participants to regulate emotional responses when the feedback is provided by the opposite gender. Finally, following the paradigm, female (as compared to male) participants were more subjectively reactive to the paradigm (i.e., self-reports), and were less able to engage contextual and goal-related regulatory control of emotional responses (reduced HRV). Although the current study focused on psychiatrically healthy young adults, results may contribute to our understanding of sex differences in internalizing mental problems, such as rumination. Copyright © 2018 Elsevier B.V. All rights reserved.
Nutbeam, D; Smith, C; Murphy, S; Catford, J
1993-01-01
STUDY OBJECTIVE--To examine the difficulties of developing and maintaining outcome evaluation designs in long term, community based health promotion programmes. DESIGN--Semistructured interviews of health promotion managers. SETTING--Wales and two reference health regions in England. PARTICIPANTS--Nine health promotion managers in Wales and 18 in England. MEASUREMENTS AND MAIN RESULTS--Information on selected heart health promotion activity undertaken or coordinated by health authorities from 1985-90 was collected. The Heartbeat Wales coronary heart disease prevention programme was set up in 1985, and a research and evaluation strategy was established to complement the intervention. A substantial increase in the budget occurred over the period. In the reference health regions in England this initiative was noted and rapidly taken up, thus compromising their use as control areas. CONCLUSION--Information on large scale, community based health promotion programmes can disseminate quickly and interfere with classic intervention/evaluation control designs through contamination. Alternative experimental designs for assessing the effectiveness of long term intervention programmes need to be considered. These should not rely solely on the use of reference populations, but should balance the measurement of outcome with an assessment of the process of change in communities. The development and use of intervention exposure measures together with well structured and comprehensive process evaluation in both the intervention and reference areas is recommended. PMID:8326270
Rorrer, Audrey S
2016-04-01
This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Sciences and Engineering Research Experiences for Undergraduates (CISE REU) programs to address the ongoing need for evaluation capacity among principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community with the purpose being to provide targeted instructional resources and tools for quality program evaluation. Challenges were to balance the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Resulting benefits from toolkit deployment were having cost effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts. Copyright © 2016. Published by Elsevier Ltd.
Bubble structure evaluation method of sponge cake by using image morphology
NASA Astrophysics Data System (ADS)
Kato, Kunihito; Yamamoto, Kazuhiko; Nonaka, Masahiko; Katsuta, Yukiyo; Kasamatsu, Chinatsu
2007-01-01
Nowadays, many evaluation methods using image processing have been proposed for the food industry. These methods are becoming a new form of evaluation alongside the sensory tests and solid-state measurements that have traditionally been used for quality evaluation. The goal of our research is structural evaluation of sponge cake using image processing. In this paper, we propose a feature extraction method for the bubble structure in sponge cake. The bubble structure is one of the important properties for understanding the characteristics of the cake from an image. To acquire the cake images, we first cut the cakes and scanned their surfaces with a CIS scanner; because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features: the input image is binarized, and the bubble features are extracted by morphology analysis. To evaluate the result of the feature extraction, we computed its correlation with the "size of the bubble" score from the sensory test. The results show that bubble extraction using morphology analysis gives good correlation, indicating that our method agrees well with the subjective evaluation.
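The abstract does not detail the specific morphological operators used, so the sketch below is an assumed reconstruction of such a pipeline with scikit-image: Otsu binarization of the dark, blurred bubble regions, a morphological opening to suppress crumb texture, connected-component labelling, and correlation of a bubble-size statistic with sensory scores. The file names and sensory values are placeholders.

```python
import numpy as np
from skimage import io, filters, morphology, measure

def bubble_sizes(image_path):
    """Extract bubble regions from a scanned cake-surface image: bubbles appear as
    dark, blurred regions, so threshold the grayscale image with Otsu's method and
    clean the mask with a morphological opening before labelling regions."""
    gray = io.imread(image_path, as_gray=True)
    dark = gray < filters.threshold_otsu(gray)                    # low gray level -> bubble candidate
    opened = morphology.binary_opening(dark, morphology.disk(2))  # remove crumb texture
    labels = measure.label(opened)
    return np.array([r.area for r in measure.regionprops(labels)])

# Hypothetical evaluation: correlate mean bubble area with "size of the bubble" panel scores.
sample_paths  = ["cake_a.png", "cake_b.png", "cake_c.png"]   # placeholder file names
sensory_score = [2.1, 3.4, 4.0]                               # placeholder panel scores
mean_areas = [bubble_sizes(p).mean() for p in sample_paths]
r = np.corrcoef(mean_areas, sensory_score)[0, 1]
print(f"correlation with sensory bubble-size score: r={r:.2f}")
```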
NASA Technical Reports Server (NTRS)
Komar, Paul D.
1987-01-01
The concept of flow competence is generally employed to evaluate the velocities, discharges, and bottom stresses of river floods inferred from the size of the largest sediment particles transported. Flow competence has become an important tool for evaluating the hydraulics of exceptional floods on Earth, including those which eroded the Channeled Scabland of eastern Washington, and has potential for similar evaluations of the floods which carved the outflow channels on Mars. For the most part, flow-competence evaluations were empirical, based on data compiled from a variety of sources including major terrestrial floods caused by natural processes or dam failures. Such flow-competence relationships would appear to provide a straightforward assessment of flood-flow stresses and velocities based on the maximum size of gravel and boulders transported. However, a re-examination of the data base and comparisons with measurements of selective entrainment and transport of gravel in rivers call such evaluations into question. Analyses of the forces acting on a grain during entrainment by pivoting, rolling, or sliding offer an approach that focuses more on the physical processes than on purely empirical relationships. These derived equations require further testing by flume and field measurements before being applied to flow-competence evaluations. Such tests are now underway.
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, and then seamlessly input into the wargames for operational analysis. After theoretical design, prototype sensors can be measured by using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. The measurement-to-high-fidelity modeling and simulation process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after Depot Level Maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
The early effects of Medicare's mandatory hospital pay-for-performance program.
Ryan, Andrew M; Burgess, James F; Pesko, Michael F; Borden, William B; Dimick, Justin B
2015-02-01
To evaluate the impact of hospital value-based purchasing (HVBP) on clinical quality and patient experience during its initial implementation period (July 2011-March 2012). Hospital-level clinical quality and patient experience data from Hospital Compare from up to 5 years before and three quarters after HVBP was initiated. Acute care hospitals were exposed to HVBP by mandate while critical access hospitals and hospitals located in Maryland were not exposed. We performed a difference-in-differences analysis, comparing performance on 12 incentivized clinical process and 8 incentivized patient experience measures between hospitals exposed to the program and a matched comparison group of nonexposed hospitals. We also evaluated whether hospitals that were ultimately exposed to HVBP may have anticipated the program by improving quality in advance of its introduction. Difference-in-differences estimates indicated that hospitals that were exposed to HVBP did not show greater improvement for either the clinical process or patient experience measures during the program's first implementation period. Estimates from our preferred specification showed that HVBP was associated with a 0.51 percentage point reduction in composite quality for the clinical process measures (p > .10, 95 percent CI: -1.37, 0.34) and a 0.30 percentage point reduction in composite quality for the patient experience measures (p > .10, 95 percent CI: -0.79, 0.19). We found some evidence that hospitals improved performance on clinical process measures prior to the start of HVBP, but no evidence of this phenomenon for the patient experience measures. The timing of the financial incentives in HVBP was not associated with improved quality of care. It is unclear whether improvement for the clinical process measures prior to the start of HVBP was driven by the expectation of the program or was the result of other factors. © Health Research and Educational Trust.
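A minimal sketch of the difference-in-differences estimate used here: regressing a composite quality score on exposure, period, and their interaction, where the interaction coefficient is the program effect. The hospital-by-period panel below is synthetic illustrative data; the study's actual specification used matched comparison hospitals and additional controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: composite clinical-process quality (0-100), an exposure flag
# (acute care hospitals subject to HVBP vs. non-exposed comparison hospitals),
# and a post-implementation period flag. Values are illustrative only.
df = pd.DataFrame({
    "hospital": [1, 1, 2, 2, 3, 3, 4, 4],
    "exposed":  [1, 1, 1, 1, 0, 0, 0, 0],
    "post":     [0, 1, 0, 1, 0, 1, 0, 1],
    "quality":  [88.0, 89.1, 90.2, 90.9, 87.5, 88.9, 91.0, 92.3],
})

# Difference-in-differences: the exposed:post interaction estimates the change in
# quality attributable to the program, net of trends shared with the comparison group.
model = smf.ols("quality ~ exposed * post", data=df).fit()
print("DiD estimate:", model.params["exposed:post"],
      "p-value:", model.pvalues["exposed:post"])
```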
NASA Astrophysics Data System (ADS)
Matas, Richard; Syka, Tomáš; Hurda, Lukáš
2018-06-01
The article describes results from research and development of a radial compressor stage with 3D rotor blades. The experimental facility and the measurement and evaluation process are described briefly in the first part. The comparison of measured and computed characteristics can be found in the second part. The last part of this contribution evaluates the influence of the rotor blades' technological holes on the compressor stage characteristics.
Assessment and Control of Spacecraft Charging Risks on the International Space Station
NASA Technical Reports Server (NTRS)
Koontz, Steve; Valentine, Mark; Keeping, Thomas; Edeen, Marybeth; Spetch, William; Dalton, Penni
2004-01-01
The International Space Station (ISS) operates in the F2 region of Earth's ionosphere, orbiting at altitudes ranging from 350 to 450 km at an inclination of 51.6 degrees. The relatively dense, cool F2 ionospheric plasma suppresses surface charging processes much of the time, and the flux of relativistic electrons is low enough to preclude deep dielectric charging processes. The most important spacecraft charging processes in the ISS orbital environment are: 1) ISS electrical power system interactions with the F2 plasma, 2) magnetic induction processes resulting from flight through the geomagnetic field, and 3) charging processes that result from interaction with auroral electrons at high latitude. Recently, the continuing review and evaluation of putative ISS charging hazards required by the ISS Program Office revealed that ISS charging could produce an electrical shock hazard to the ISS crew during extravehicular activity (EVA). ISS charging risks are being evaluated in an ongoing measurement and analysis campaign. The results of ISS charging measurements are combined with a recently developed model of ISS charging (the Plasma Interaction Model) and an exhaustive analysis of historical ionospheric variability data (ISS Ionospheric Specification) to evaluate ISS charging risks using Probabilistic Risk Assessment (PRA) methods. The PRA combines estimates of the frequency of occurrence and severity of the charging hazards with estimates of the reliability of various hazard control systems, as required by NASA's safety and risk management programs, to enable design and selection of a hazard control approach that minimizes overall programmatic and personnel risk. The PRA provides a quantitative methodology for incorporating the results of the ISS charging measurement and analysis campaigns into the necessary hazard reports, EVA procedures, and ISS flight rules required for operating ISS in a safe and productive manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsui, S., E-mail: smatsui@gpi.ac.jp; Mori, Y.; Nonaka, T.
2016-05-15
For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposit energy on the films and Monte Carlo simulations was within 15%. As far as this limitation, relative dose estimation using thin film dosimeters with response function obtained by high energy electron irradiation and simulation results is effective for ULEB irradiation processes management.
Matsui, S; Mori, Y; Nonaka, T; Hattori, T; Kasamatsu, Y; Haraguchi, D; Watanabe, Y; Uchiyama, K; Ishikawa, M
2016-05-01
For evaluation of on-site dosimetry and process design in industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with uncertainty of 11% at coverage factor 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposit energy on the films and Monte Carlo simulations was within 15%. As far as this limitation, relative dose estimation using thin film dosimeters with response function obtained by high energy electron irradiation and simulation results is effective for ULEB irradiation processes management.
Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS.
Zaidan, A A; Zaidan, B B; Al-Haiqi, Ahmed; Kiah, M L M; Hussain, Muzammil; Abdulnabi, Mohamed
2015-02-01
Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed, and a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the basis for evaluation, and the systems were ranked based on a set of metric outcomes using an integrated Analytic Hierarchy Process (AHP) and TOPSIS approach. The experimental results showed that GNUmed and OpenEMR achieved better ranking scores than the other open-source EMR software packages. Copyright © 2014 Elsevier Inc. All rights reserved.
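TOPSIS itself is a standard, well-documented procedure; the sketch below implements it in a few lines, taking criteria weights (for example, priorities derived from AHP pairwise comparisons) and a decision matrix of package scores. The criteria, weights, and scores shown are invented placeholders, not the metrics or results of the study.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives with TOPSIS.
    decision_matrix: alternatives x criteria scores
    weights: criteria weights (e.g., AHP priorities), summing to 1
    benefit_mask: True where larger is better, False for cost criteria."""
    X = np.asarray(decision_matrix, float)
    norm = X / np.sqrt((X ** 2).sum(axis=0))             # vector normalisation
    v = norm * np.asarray(weights, float)                # weighted normalised matrix
    ideal = np.where(benefit_mask, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit_mask, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                       # closeness coefficient, higher is better

# Hypothetical scores for four EMR packages on three criteria
# (usability, interoperability, implementation cost); weights are illustrative.
packages = ["GNUmed", "OpenEMR", "PackageC", "PackageD"]
scores = [[8, 7, 3],
          [9, 8, 4],
          [6, 5, 2],
          [7, 6, 5]]
closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit_mask=[True, True, False])
for name, c in sorted(zip(packages, closeness), key=lambda x: -x[1]):
    print(f"{name}: {c:.3f}")
```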
Clinical peer review program self-evaluation for US hospitals.
Edwards, Marc T
2010-01-01
Prior research has shown wide variation in clinical peer review program structure, process, governance, and perceived effectiveness. This study sought to validate the utility of a Peer Review Program Self-Evaluation Tool as a potential guide to physician and hospital leaders seeking greater program value. Data from 330 hospitals show that the total score from the self-evaluation tool is strongly associated with perceived quality impact. Organizational culture also plays a significant role. When controlling for these factors, there was no evidence of benefit from a multispecialty review process. Physicians do not generally use reliable methods to measure clinical performance. A high rate of change since 2007 has not produced much improvement. The Peer Review Program Self-Evaluation Tool reliably differentiates hospitals along a continuum of perceived program performance. The full potential of peer review as a process to improve the quality and safety of care has yet to be realized.
Evaluating treatment process redesign by applying the EFQM Excellence Model.
Nabitz, Udo; Schramade, Mark; Schippers, Gerard
2006-10-01
To evaluate a treatment process redesign programme implementing evidence-based treatment as part of a total quality management in a Dutch addiction treatment centre. Quality management was monitored over a period of more than 10 years in an addiction treatment centre with 550 professionals. Changes are evaluated, comparing the scores on the nine criteria of the European Foundation for Quality Management (EFQM) Excellence Model before and after a major redesign of treatment processes and ISO certification. In the course of 10 years, most intake, care, and cure processes were reorganized, the support processes were restructured and ISO certified, 29 evidence-based treatment protocols were developed and implemented, and patient follow-up measuring was established to make clinical outcomes transparent. Comparing the situation before and after the changes shows that the client satisfaction scores are stable, that the evaluation by personnel and society is inconsistent, and that clinical, production, and financial outcomes are positive. The overall EFQM assessment by external assessors in 2004 shows much higher scores on the nine criteria than the assessment in 1994. Evidence-based treatment can successfully be implemented in addiction treatment centres through treatment process redesign as part of a total quality management strategy, but not all results are positive.
ERIC Educational Resources Information Center
Harmon, Marcel; Larroque, Andre; Maniktala, Nate
2012-01-01
The New Mexico Public School Facilities Authority (NMPSFA) is the agency responsible for administering state-funded capital projects for schools statewide. Post occupancy evaluation (POE) is the tool selected by NMPSFA for measuring project outcomes. The basic POE process for V. Sue Cleveland High School (VSCHS) consisted of a series of field…
Evaluating process in child and family interventions: aggression prevention as an example.
Tolan, Patrick H; Hanish, Laura D; McKay, Mary M; Dickey, Mitchell H
2002-06-01
This article reports on 2 studies designed to develop and validate a set of measures for use in evaluating processes of child and family interventions. In Study 1 responses from 187 families attending an outpatient clinic for child behavior problems were factor analyzed to identify scales, consistent across sources: Alliance (Satisfactory Relationship with Interventionist and Program Satisfaction), Parenting Skill Attainment, Child Cooperation During Session, Child Prosocial Behavior, and Child Aggressive Behavior. Study 2 focused on patterns of scale scores among 78 families taking part in a 22-week preventive intervention designed to affect family relationships, parenting, and child antisocial and prosocial behaviors. The factor structure identified in Study 1 was replicated. Scale construct validity was demonstrated through across-source convergence, sensitivity to intervention change, and ability to discriminate individual differences. Path analysis validated the scales' utility in explaining key aspects of the intervention process. Implications for evaluating processes in family interventions are discussed.
Adapting Nielsen’s Design Heuristics to Dual Processing for Clinical Decision Support
Taft, Teresa; Staes, Catherine; Slager, Stacey; Weir, Charlene
2016-01-01
The study objective was to improve the applicability of Nielsen’s standard design heuristics for evaluating electronic health record (EHR) alerts and linked ordering support by integrating them with Dual Process theory. Through initial heuristic evaluation and a user study of 7 physicians, usability problems were identified. Through independent mapping of specific usability criteria to support for each of the Dual Cognitive processes (S1 and S2) and deliberation, agreement was reached on mapping criteria. Finally, usability errors from the heuristic and user study were mapped to S1 and S2. Adding a dual process perspective to specific heuristic analysis increases the applicability and relevance of computerized health information design evaluations. This mapping enables designers to assess whether their systems are tailored to support attention allocation. System 1 will be supported by improving pattern recognition and saliency, and system 2 through efficiency and control of information access. PMID:28269915
Adapting Nielsen's Design Heuristics to Dual Processing for Clinical Decision Support.
Taft, Teresa; Staes, Catherine; Slager, Stacey; Weir, Charlene
2016-01-01
The study objective was to improve the applicability of Nielsen's standard design heuristics for evaluating electronic health record (EHR) alerts and linked ordering support by integrating them with Dual Process theory. Through initial heuristic evaluation and a user study of 7 physicians, usability problems were identified. Through independent mapping of specific usability criteria to support for each of the Dual Cognitive processes (S1 and S2) and deliberation, agreement was reached on mapping criteria. Finally, usability errors from the heuristic and user study were mapped to S1 and S2. Adding a dual process perspective to specific heuristic analysis increases the applicability and relevance of computerized health information design evaluations. This mapping enables designers to assess whether their systems are tailored to support attention allocation. System 1 will be supported by improving pattern recognition and saliency, and system 2 through efficiency and control of information access.
NASA Astrophysics Data System (ADS)
Profumieri, A.; Bonell, C.; Catalfamo, P.; Cherniz, A.
2016-04-01
Virtual reality has been proposed for different applications, including the evaluation of new control strategies and training protocols for upper limb prostheses and for the study of new rehabilitation programs. In this study, a lower limb simulation environment commanded by surface electromyography signals is evaluated. The time delays generated by the acquisition and processing stages for the signals that would command the knee joint were measured, and different acquisition windows were analysed. The subjective perception of the quality of simulation was also evaluated when extra delays were added to the process. The results showed that the acquisition window is responsible for the longest delay. Also, the basic implemented processes allowed for the acquisition of three signal channels for commanding the simulation. Finally, the communication between different applications is arguably efficient, although it depends on the amount of data to be sent.
The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis
Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580
NASA Technical Reports Server (NTRS)
Wolford, David S.; Myers, Matthew G.; Prokop, Norman F.; Krasowski, Michael J.; Parker, David S.; Cassidy, Justin C.; Davies, William E.; Vorreiter, Janelle O.; Piszczor, Michael F.; McNatt, Jeremiah S.
2015-01-01
Measurement is essential for the evaluation of new photovoltaic (PV) technology for space solar cells. NASA Glenn Research Center (GRC) is in the process of measuring several solar cells in a supplemental experiment on NASA Goddard Space Flight Center's (GSFC) Robotic Refueling Mission's (RRM) Task Board 4 (TB4). Four industry and government partners have provided advanced PV devices for measurement and orbital environment testing. The experiment will be on-orbit for approximately 18 months. It is completely self-contained and will provide its own power and internal data storage. Several new cell technologies including four- junction (4J) Inverted Metamorphic Multijunction (IMM) cells will be evaluated and the results compared to ground-based measurements.
NASA Technical Reports Server (NTRS)
Roth, Donald J (Inventor)
2011-01-01
A computer implemented process for simultaneously measuring the velocity of terahertz electromagnetic radiation in a dielectric material sample without prior knowledge of the thickness of the sample and for measuring the thickness of a material sample using terahertz electromagnetic radiation in a material sample without prior knowledge of the velocity of the terahertz electromagnetic radiation in the sample is disclosed and claimed. Utilizing interactive software the process evaluates, in a plurality of locations, the sample for microstructural variations and for thickness variations and maps the microstructural and thickness variations by location. A thin sheet of dielectric material may be used on top of the sample to create a dielectric mismatch. The approximate focal point of the radiation source (transceiver) is initially determined for good measurements.
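The patent abstract above does not state the underlying relations. One standard self-calibrating pulse-echo idea (not necessarily the claimed process) recovers both thickness and velocity from two time delays: the spacing between the front- and back-surface echoes, and the shift of a reference echo caused by inserting the sample. The sketch below is a minimal illustration under those assumed relations; the numerical delays are fabricated.

```python
# Minimal sketch (assumed relations, not the patented process): recover sample
# thickness d and internal wave velocity v from two pulse-echo time delays.
#   dt_surface = 2*d/v                # spacing between front- and back-surface echoes
#   dt_shift   = 2*d*(1/v - 1/c_air)  # delay of a reference echo caused by the sample
C_AIR = 2.998e8  # propagation speed in air (m/s), approximately the vacuum speed of light


def thickness_and_velocity(dt_surface: float, dt_shift: float, c: float = C_AIR):
    """Solve the two assumed time-of-flight relations for (d, v)."""
    v = c * (dt_surface - dt_shift) / dt_surface   # velocity inside the sample
    d = v * dt_surface / 2.0                       # one-way thickness
    return d, v


if __name__ == "__main__":
    # Hypothetical delays for a 5 mm sample with v = 1.5e8 m/s
    d_true, v_true = 5e-3, 1.5e8
    dt_surface = 2 * d_true / v_true
    dt_shift = 2 * d_true * (1 / v_true - 1 / C_AIR)
    print(thickness_and_velocity(dt_surface, dt_shift))  # ~ (0.005, 1.5e8)
```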
Studies of soundings and imaging measurements from geostationary satellites
NASA Technical Reports Server (NTRS)
Suomi, V. E.
1973-01-01
Soundings and imaging measurements from geostationary satellites are presented. The subjects discussed are: (1) meteorological data processing techniques, (2) sun glitter, (3) cloud growth rate study, (4) satellite stability characteristics, and (5) high resolution optics. The use of a perturbation technique to obtain the motion of sensors aboard a satellite is described, along with measurement conditions and errors. Several performance evaluation parameters are proposed.
Choice and Change of Measures in Performance-Measurement Models
2005-05-01
Measures of progress for collaboration: case study of the Applegate Partnership.
Su. Rolle
2002-01-01
Using the Applegate Partnership as a case study, this paper proposes a number of ways to measure the success of collaborative groups. These measures allow for providing evaluation and feedback, engaging needed participants, and responding to groups critical of the collaborative process. Arguing for the concept of progress in place of success, this paper points out that...
Health Care Merged With Senior Housing: Description and Evaluation of a Successful Program.
Barry, Theresa Teta
2017-01-01
Objective: This article describes and evaluates a successful partnership between a large health care organization and housing for seniors. The program provides on-site, primary care visits by a physician and a nurse in addition to intensive social services to residents in an affordable senior housing apartment building located in Pennsylvania. Per Donabedian's "Structure-Process-Outcome" model, the program demonstrated positive health care outcomes for its participants via a prescribed structure. To provide guidance for replication in similar settings, we qualitatively evaluated the processes by which successful outcomes were obtained. Methods: With program structures in place and outcomes measured, this case study collected and analyzed qualitative information taken from key informant interviews on care processes involved in the program. Themes were extracted from semistructured interviews and used to describe the processes that helped and hindered the program. Results and Discussion: Common processes were identified across respondents; however, the nuanced processes that lead to successful outcomes suggest that defined structures and processes may not be sufficient to produce similar outcomes in other settings. Further research is needed to determine the program's replicability and policy implications.
Modified Universal Design Survey: Enhancing Operability of Launch Vehicle Ground Crew Worksites
NASA Technical Reports Server (NTRS)
Blume, Jennifer L.
2010-01-01
Operability is a driving requirement for next generation space launch vehicles. Launch site ground operations include numerous operator tasks to prepare the vehicle for launch or to perform preflight maintenance. Ensuring that components requiring operator interaction at the launch site are designed for optimal human use is a high priority for operability. To promote operability, a Design Quality Evaluation Survey based on the Universal Design framework was developed to support Human Factors Engineering (HFE) evaluation for NASA's launch vehicles. Universal Design per se is not a priority for launch vehicle processing; however, applying principles of Universal Design will increase the probability of an error-free and efficient design which promotes operability. The Design Quality Evaluation Survey incorporates and tailors the seven Universal Design Principles and adds new measures for Safety and Efficiency. Adapting an approach proven to measure Universal Design performance in products, each principle is associated with multiple performance measures which are rated with the degree to which the statement is true. The Design Quality Evaluation Survey was employed for several launch vehicle ground processing worksite analyses. The tool was found to be most useful for comparative judgments as opposed to an assessment of a single design option. It provided a useful piece of additional data when assessing possible operator interfaces or worksites for operability.
Wysham, Nicholas G; Mularski, Richard A; Schmidt, David M; Nord, Shirley C; Louis, Deborah L; Shuster, Elizabeth; Curtis, J Randall; Mosen, David M
2014-06-01
Communication in the intensive care unit (ICU) is an important component of quality ICU care. In this report, we evaluate the long-term effects of a quality improvement (QI) initiative, based on the VALUE communication strategy, designed to improve communication with family members of critically ill patients. We implemented a multifaceted intervention to improve communication in the ICU and measured processes of care. Quality improvement components included posted VALUE placards, a templated progress note inclusive of communication documentation, and a daily rounding checklist prompt. We evaluated care for all patients cared for by the intensivists during three separate 3-week periods, pre, post, and 3 years following the initial intervention. Care delivery was assessed in 38 patients and their families in the pre-intervention sample, 27 in the post-intervention period, and 41 in follow-up. Process measures of communication showed improvement across the evaluation periods; for example, daily updates increased from 62% of opportunities pre-intervention to 76% post-intervention and 84% at follow-up. Our evaluation of this quality improvement project suggests persistence and continued improvements in the delivery of measured aspects of ICU family communication. Maintenance with point-of-care tools may account for some of the persistence and continued improvements. Copyright © 2014 Elsevier Inc. All rights reserved.
Space Fortress game training and executive control in older adults: A pilot intervention
Stern, Yaakov; Blumen, Helena M.; Rich, Leigh W.; Richards, Alexis; Herzberg, Gray; Gopher, Daniel
2012-01-01
We investigated the feasibility of using the Space Fortress (SF) game, a complex video game originally developed to study complex skill acquisition in young adults, to improve executive control processes in cognitively healthy older adults. The study protocol consisted of 36 one-hour game play sessions over 3 months with cognitive evaluations before and after, and a follow-up evaluation at 6 months. Sixty participants were randomized to one of three conditions: Emphasis Change (EC) – elders were instructed to concentrate on playing the entire game but place particular emphasis on a specific aspect of game play in each particular game; Active Control (AC) – game play with standard instructions; Passive Control (PC) – evaluation sessions without game play. Primary outcome measures were obtained from five tasks, presumably tapping executive control processes. A total of 54 older adults completed the study protocol. One measure of executive control, WAIS-III letter–number sequencing, showed improvement in performance from pre- to post-evaluations in the EC condition, but not in the other two conditions. These initial findings are modest but encouraging. Future SF interventions need to carefully consider increasing the duration and or the intensity of the intervention by providing at-home game training, reducing the motor demands of the game, and selecting appropriate outcome measures. PMID:21988726
Assessing Significance in Continuing Education: A Needed Addition to Productivity.
ERIC Educational Resources Information Center
Burnham, Byron R.
1986-01-01
A process whereby continuing education administrators can quantitatively measure FTEs (full-time equivalency), courses, registration, costs, and student hours and relate these measurements to staff productivity is evaluated and its limitations explored. The author also presents a framework of accounting for adult education productivity and values.…
Advanced in-situ measurement of soil carbon content using inelastic neutron scattering
USDA-ARS?s Scientific Manuscript database
Measurement and mapping of natural and anthropogenic variations in soil carbon stores is a critical component of any soil resource evaluation process. Emerging modalities for soil carbon analysis in the field is the registration of gamma rays from soil under neutron irradiation. The inelastic neutro...
The Response Dynamics of Recognition Memory: Sensitivity and Bias
ERIC Educational Resources Information Center
Koop, Gregory J.; Criss, Amy H.
2016-01-01
Advances in theories of memory are hampered by insufficient metrics for measuring memory. The goal of this paper is to further the development of model-independent, sensitive empirical measures of the recognition decision process. We evaluate whether metrics from continuous mouse tracking, or response dynamics, uniquely identify response bias and…
Comparative fiber evaluation of the mesdan aqualab microwave moisture measurement instrument
USDA-ARS?s Scientific Manuscript database
Moisture is a key cotton fiber parameter, as it can impact the fiber quality and the processing of cotton fiber. The Mesdan Aqualab is a microwave-based fiber moisture measurement instrument for samples with moderate sample size. A program was implemented to determine the capabilities of the Aqual...
Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.
Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian
2015-10-01
Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 < 6 and mm < cm; incompatible: 3 cm_6 mm with 3 < 6 but cm > mm) as well as string length congruity (congruent: 1 m_2 km with m < km and 2 < 3 characters; incongruent: 2 mm_1 m with mm < m, but 3 > 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.
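For readers unfamiliar with the stimulus categories, the compatibility and string-length congruity definitions quoted above can be made concrete in a few lines of code. The unit ordering, conversion factors, and example pairs below are illustrative assumptions, not the study's actual materials.

```python
# Illustrative sketch: classify a pair of physical quantities as number-unit
# compatible/incompatible and as string-length congruent/incongruent.
FACTOR = {"mm": 1e-3, "cm": 1e-2, "m": 1.0, "km": 1e3}   # conversion to metres
UNIT_RANK = {"mm": 0, "cm": 1, "m": 2, "km": 3}          # ascending unit magnitude


def classify(q1, q2):
    (n1, u1), (n2, u2) = q1, q2
    # Compatible: the number comparison and the unit comparison point the same way.
    compatible = (n1 < n2) == (UNIT_RANK[u1] < UNIT_RANK[u2])
    # Congruent: the physically larger quantity is also the longer string.
    longer_is_second = len(f"{n2} {u2}") > len(f"{n1} {u1}")
    larger_is_second = n2 * FACTOR[u2] > n1 * FACTOR[u1]
    length_congruent = (longer_is_second == larger_is_second)
    return compatible, length_congruent


print(classify((3, "mm"), (6, "cm")))  # compatible: 3 < 6 and mm < cm
print(classify((3, "cm"), (6, "mm")))  # incompatible: 3 < 6 but cm > mm
print(classify((1, "m"), (2, "km")))   # string-length congruent
print(classify((2, "mm"), (1, "m")))   # string-length incongruent
```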
Recent advances in phase shifted time averaging and stroboscopic interferometry
NASA Astrophysics Data System (ADS)
Styk, Adam; Józwik, Michał
2016-08-01
Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the information on maximum amplitude at a given load of a vibrating object. In this paper, modified strategies of data processing in both techniques are introduced. These modifications allow for fast and reliable calculation of the searched value, without additional complication of the measurement systems. Throughout the paper, both approaches are discussed and experimentally verified.
Irwin, P; Rudd, A
1998-01-01
The emphasis on outcomes measurement requires that casemix is considered in any comparative studies. In 1996 the Intercollegiate Working Party for Stroke agreed a minimum data set to measure the severity of casemix in stroke. The reasons for its development, the evidence base supporting the items included and the possible uses of the data set are described. It is currently being evaluated in national outcome and process audits to be reported at a later date.
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
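The abstract describes a modular design in which common processing elements are shared across metric-specific sequences. One minimal way to express that idea is to cache the outputs of shared stages so each metric computation reuses rather than recomputes them; the stage and metric names below are hypothetical stand-ins, not actual IPATS modules.

```python
from functools import lru_cache

# Minimal sketch of a modular metric pipeline with shared, cached stages.
# Stage and metric names are hypothetical, not actual IPATS components.


@lru_cache(maxsize=None)
def load_image(image_id: str):
    print(f"loading {image_id}")           # expensive step, executed once per image
    return {"id": image_id, "pixels": [0] * 16}


@lru_cache(maxsize=None)
def extract_landmarks(image_id: str):
    img = load_image(image_id)              # shared element reused by several metrics
    return [(i, i) for i in range(len(img["pixels"]) // 4)]


def navigation_error(image_id: str) -> float:
    pts = extract_landmarks(image_id)
    return sum(abs(x - y) for x, y in pts) / len(pts)


def frame_to_frame_registration(image_id_a: str, image_id_b: str) -> float:
    a, b = extract_landmarks(image_id_a), extract_landmarks(image_id_b)
    return sum(abs(xa - xb) for (xa, _), (xb, _) in zip(a, b)) / len(a)


if __name__ == "__main__":
    print(navigation_error("ABI_0001"))
    print(frame_to_frame_registration("ABI_0001", "ABI_0002"))  # "ABI_0001" is loaded only once
```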
Niquini, Roberta Pereira; Bittencourt, Sonia Azevedo; Leal, Maria do Carmo
2013-09-01
To assess the conformity of the weight measurement process in the pre-gestational care offered in the city of Rio de Janeiro by primary units and hospitals of the National Health System, as well as to verify the agreement between the anthropometric data reported by pregnant women and those recorded in prenatal cards. A cross-sectional study was conducted in 2007-2008 with two cluster samples: one to obtain a sample of pregnant women to be interviewed and another one for the weight measurement procedures to be observed. The conformity of the weight measurement process was evaluated according to the Ministry of Health standards, and the agreement between the two sources of anthropometric data was evaluated using mean differences, the Bland-Altman method, the intraclass correlation coefficient (ICC) and weighted Kappa. Out of the twelve criteria for weight measurement evaluation (n = 159 observations), three were not in conformity (< 50% conformity), two of which only need to be assessed when the scale is mechanical. For the interviewed pregnant women (n = 2,148) who had the two sources of anthropometric data, there was a tendency to overestimate self-reported height and to underestimate pre-gestational and current weight and Body Mass Index. Agreement between the two sources of anthropometric information, according to ICC and weighted Kappa, was high (> 0.80). Studies may use weight and height information reported by pregnant women, in the absence of prenatal card records, when this represents an important economy for their execution, although improvement of both sources of information through a better anthropometric measurement process is necessary.
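The agreement statistics named above can be illustrated briefly. The sketch below computes the mean difference and the 95% Bland-Altman limits of agreement for paired self-reported and card-recorded weights; the paired values are fabricated for the example.

```python
import statistics

# Toy Bland-Altman comparison of self-reported vs. prenatal-card weights (kg).
# The paired values are fabricated for illustration only.
reported = [58.0, 62.5, 70.0, 55.0, 80.5, 66.0]
recorded = [59.2, 63.0, 71.5, 55.8, 82.0, 66.4]

diffs = [a - b for a, b in zip(reported, recorded)]
bias = statistics.mean(diffs)                 # mean difference (systematic under/over-estimation)
sd = statistics.stdev(diffs)
limits = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f} kg, limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f}) kg")
```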
Terrestrial photovoltaic cell process testing
NASA Technical Reports Server (NTRS)
Burger, D. R.
1985-01-01
The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.
Terrestrial photovoltaic cell process testing
NASA Astrophysics Data System (ADS)
Burger, D. R.
The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.
Juzwa, W; Duber, A; Myszka, K; Białas, W; Czaczyk, K
2016-09-01
In this study the design of a flow cytometry-based procedure to facilitate the detection of adherent bacteria from food-processing surfaces was evaluated. The measurement of the cellular redox potential (CRP) of microbial cells was combined with cell sorting for the identification of microorganisms. The procedure enhanced live/dead cell discrimination owing to the measurement of the cell physiology. The microbial contamination of the surface of a stainless steel conveyor used to process button mushrooms was evaluated in three independent experiments. The flow cytometry procedure provided a step towards monitoring of contamination and enabled the assessment of microbial food safety hazards by the discrimination of active, mid-active and non-active bacterial sub-populations based on determination of their cellular vitality and subsequently single cell sorting to isolate microbial strains from discriminated sub-populations. There was a significant correlation (r = 0.97; p < 0.05) between the bacterial cell count estimated by the pour plate method and flow cytometry, despite there being differences in the absolute number of cells detected. The combined approach of flow cytometric CRP measurement and cell sorting allowed an in situ analysis of microbial cell vitality and the identification of species from defined sub-populations, although the identified microbes were limited to culturable cells.
DISTA: a portable software solution for 3D compilation of photogrammetric image blocks
NASA Astrophysics Data System (ADS)
Boochs, Frank; Mueller, Hartmut; Neifer, Markus
2001-04-01
A photogrammetric evaluation system used for the precise determination of 3D coordinates from blocks of large metric images will be presented. First, the motivation for the development is shown, which is placed in the field of processing tools for photogrammetric evaluation tasks. As the use and availability of metric images of digital type rapidly increases, corresponding equipment for the measuring process is needed. Systems which have been developed up to now are either very specialized ones, founded on high-end graphics workstations with correspondingly high prices, or simple ones with restricted measuring functionality. A new conception will be shown, avoiding special high-end graphics hardware but providing a complete processing chain for all elementary photogrammetric tasks, ranging from preparatory steps through the formation of image blocks to automatic and interactive 3D evaluation within digital stereo models. The presented system is based on PC hardware equipped with off-the-shelf graphics boards and uses an object-oriented design. The specific needs of a flexible measuring system and the corresponding requirements which have to be met by the system are shown. Important aspects such as modularity and hardware independence, and their value for the solution, are shown. The design of the software will be presented and first results with a prototype realised on a powerful PC hardware configuration will be featured.
Heras-Mosteiro, Julio; Otero-García, Laura; Sanz-Barbero, Belén; Aranaz-Andrés, Jesús María
2016-01-01
To address the current economic crisis, governments have promoted austerity measures that have affected the taxpayer-funded health system. We report the findings of a study exploring the perceptions of primary care physicians in Madrid (Spain) on measures implemented in the Spanish health system. We carried out a qualitative study in two primary health care centres located in two neighbourhoods with unemployment and migrant population rates above the average of those in Madrid. Interviews were conducted with 12 primary health care physicians. Interview data were analysed by using thematic analysis and by adopting some elements of the grounded theory approach. Two categories were identified: evaluation of austerity measures and evaluation of decision-making in this process. Respondents believed there was a need to promote measures to improve the taxpayer-funded health system, but expressed their disagreement with the measures implemented. They considered that the measures were not evidence-based and responded to the need to decrease public health care expenditure in the short term. Respondents believed that they had not been properly informed about the measures and that there had not been adequate professional participation in the prioritization, selection and implementation of measures. They considered physician participation to be essential in the decision-making process because physicians have a more patient-centred view and have first-hand knowledge of areas requiring improvement in the system. It is essential that public authorities actively involve health care professionals in decision-making processes to ensure the implementation of evidence-based measures with strong professional support, thus maintaining the quality of care. Copyright © 2016 SESPAS. Published by Elsevier Espana. All rights reserved.
Developing image processing meta-algorithms with data mining of multiple metrics.
Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
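As a rough sketch of the meta-algorithm idea, each candidate image processing (e.g., registration) result can be scored with a battery of metrics and the result that ranks best overall selected. The two metrics and the rank-sum aggregation below are simple stand-ins chosen for illustration, not the measures mined in the paper.

```python
import numpy as np

# Sketch: evaluate candidate registration results with several image metrics and
# select the best by average rank. Metrics and aggregation are illustrative stand-ins.


def mse(a, b):
    return float(np.mean((a - b) ** 2))


def neg_correlation(a, b):
    # Negated so that, like MSE, lower values indicate a better match.
    return -float(np.corrcoef(a.ravel(), b.ravel())[0, 1])


METRICS = {"mse": mse, "neg_corr": neg_correlation}


def select_best(reference, candidates):
    scores = {name: [m(reference, c) for c in candidates] for name, m in METRICS.items()}
    ranks = np.zeros(len(candidates))
    for vals in scores.values():
        ranks += np.argsort(np.argsort(vals))  # rank 0 = best (lowest) for each metric
    return int(np.argmin(ranks)), scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((32, 32))
    candidates = [ref + rng.normal(0, s, ref.shape) for s in (0.30, 0.05, 0.15)]
    best, _ = select_best(ref, candidates)
    print("best candidate index:", best)  # expected: 1 (least distorted candidate)
```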
A numerical study of zone-melting process for the thermoelectric material of Bi2Te3
NASA Astrophysics Data System (ADS)
Chen, W. C.; Wu, Y. C.; Hwang, W. S.; Hsieh, H. L.; Huang, J. Y.; Huang, T. K.
2015-06-01
In this study, a numerical model has been established by employing the commercial software ProCAST to simulate the variation/distribution of temperature and the subsequent microstructure of Bi2Te3 fabricated by the zone-melting technique. An experiment is then conducted to measure the temperature variation/distribution during the zone-melting process to validate the numerical system. Also, the effects of processing parameters such as heater moving speed and temperature on the crystallization microstructure are numerically evaluated. In the experiment, the Bi2Te3 powder is filled into a 30 mm diameter quartz cylinder and the heater is set to 800°C with a moving speed of 12.5 mm/hr. A thermocouple is inserted in the Bi2Te3 powder to measure the temperature variation/distribution of the zone-melting process. The temperature variation/distribution measured by experiment is compared to the results of the numerical simulation. The results show that the model and the experiment are well matched. The model is then used to evaluate the crystal formation for the 30 mm diameter Bi2Te3 process. It is found that when the moving speed is slower than 17.5 mm/hr, columnar crystal is obtained. Finally, we use this model to predict the crystal formation of the zone-melting process for Bi2Te3 with a 45 mm diameter. The results show that it is difficult to grow columnar crystal when the diameter reaches 45 mm.
Latomme, Julie; Cardon, Greet; De Bourdeaudhuij, Ilse; Iotova, Violeta; Koletzko, Berthold; Socha, Piotr; Moreno, Luis; Androutsos, Odysseas; Manios, Yannis; De Craemer, Marieke
2017-01-01
The aim of the present study evaluated the effect and process of the ToyBox-intervention on proxy-reported sedentary behaviours in 4- to 6-year-old preschoolers from six European countries. In total, 2434 preschoolers' parents/primary caregivers (mean age: 4.7±0.4 years, 52.2% boys) filled out a questionnaire, assessing preschoolers' sedentary behaviours (TV/DVD/video viewing, computer/video games use and quiet play) on weekdays and weekend days. Multilevel repeated measures analyses were conducted to measure the intervention effects. Additionally, process evaluation data were included to better understand the intervention effects. Positive intervention effects were found for computer/video games use. In the total sample, the intervention group showed a smaller increase in computer/video games use on weekdays (ß = -3.40, p = 0.06; intervention: +5.48 min/day, control: +8.89 min/day) and on weekend days (ß = -5.97, p = 0.05; intervention: +9.46 min/day, control: +15.43 min/day) from baseline to follow-up, compared to the control group. Country-specific analyses showed similar effects in Belgium and Bulgaria, while no significant intervention effects were found in the other countries. Process evaluation data showed relatively low teachers' and low parents' process evaluation scores for the sedentary behaviour component of the intervention (mean: 15.6/24, range: 2.5-23.5 and mean: 8.7/17, range: 0-17, respectively). Higher parents' process evaluation scores were related to a larger intervention effect, but higher teachers' process evaluation scores were not. The ToyBox-intervention had a small, positive effect on European preschoolers' computer/video games use on both weekdays and weekend days, but not on TV/DVD/video viewing or quiet play. The lack of larger effects can possibly be due to the fact that parents were only passively involved in the intervention and to the fact that the intervention was too demanding for the teachers. Future interventions targeting preschoolers' behaviours should involve parents more actively in both the development and the implementation of the intervention and, when involving schools, less demanding activities for teachers should be developed. clinicaltrials.gov NCT02116296.
Latomme, Julie; Cardon, Greet; De Bourdeaudhuij, Ilse; Iotova, Violeta; Koletzko, Berthold; Socha, Piotr; Moreno, Luis; Androutsos, Odysseas; Manios, Yannis; De Craemer, Marieke
2017-01-01
Background The aim of the present study evaluated the effect and process of the ToyBox-intervention on proxy-reported sedentary behaviours in 4- to 6-year-old preschoolers from six European countries. Methods In total, 2434 preschoolers’ parents/primary caregivers (mean age: 4.7±0.4 years, 52.2% boys) filled out a questionnaire, assessing preschoolers’ sedentary behaviours (TV/DVD/video viewing, computer/video games use and quiet play) on weekdays and weekend days. Multilevel repeated measures analyses were conducted to measure the intervention effects. Additionally, process evaluation data were included to better understand the intervention effects. Results Positive intervention effects were found for computer/video games use. In the total sample, the intervention group showed a smaller increase in computer/video games use on weekdays (ß = -3.40, p = 0.06; intervention: +5.48 min/day, control: +8.89 min/day) and on weekend days (ß = -5.97, p = 0.05; intervention: +9.46 min/day, control: +15.43 min/day) from baseline to follow-up, compared to the control group. Country-specific analyses showed similar effects in Belgium and Bulgaria, while no significant intervention effects were found in the other countries. Process evaluation data showed relatively low teachers’ and low parents’ process evaluation scores for the sedentary behaviour component of the intervention (mean: 15.6/24, range: 2.5–23.5 and mean: 8.7/17, range: 0–17, respectively). Higher parents’ process evaluation scores were related to a larger intervention effect, but higher teachers’ process evaluation scores were not. Conclusions The ToyBox-intervention had a small, positive effect on European preschoolers’ computer/video games use on both weekdays and weekend days, but not on TV/DVD/video viewing or quiet play. The lack of larger effects can possibly be due to the fact that parents were only passively involved in the intervention and to the fact that the intervention was too demanding for the teachers. Future interventions targeting preschoolers' behaviours should involve parents more actively in both the development and the implementation of the intervention and, when involving schools, less demanding activities for teachers should be developed. Trial registration clinicaltrials.gov NCT02116296 PMID:28380053
NASA Astrophysics Data System (ADS)
Meiland, Franka; Dröes, Rose-Marie; Sävenstedt, Stefan
Assistive technologies to support persons with dementia and their carers are used increasingly often. However, little is known about the effectiveness of most assistive devices. Much technology is put on the market without having been properly tested with potential end-users. To increase the chance that an assistive device is well accepted and useful for the target group, it is important, especially in the case of disabled persons, to involve potential users in the development process and to evaluate the impact of using the device on them before implementing it in the daily care and support. When evaluating the impact, decisions have to be made regarding the selection of measuring instruments. Important considerations in the selection process are the underlying domains to be addressed by the assistive technology, the target group and the availability of standardized instruments with good psychometric properties. In this chapter the COGKNOW project is used as a case example to explain how the impact of cognitive prosthetics on the daily lives of people with dementia and their carers can be measured. In COGKNOW a cognitive prosthetic device is being developed to improve the quality of life and autonomy of persons with dementia and to help them to remember and remind, to have social contact, to perform daily activities and to enhance feelings of safety. For all these areas, potential measuring instruments are described. Besides (standardized) measuring instruments, other data collection methods are used as well, such as semi-structured interviews and observations, diaries and in situ measurement. Within the COGKNOW project a first uncontrolled small-scale impact measurement takes place during the development process of the assistive device. However, it is recommended to perform a larger randomized controlled study as soon as the final product is ready to evaluate the impact of the device on persons with dementia and carers before it is released on the market.
Development and evaluation of the INSPIRE measure of staff support for personal recovery.
Williams, Julie; Leamy, Mary; Bird, Victoria; Le Boutillier, Clair; Norton, Sam; Pesola, Francesca; Slade, Mike
2015-05-01
No individualised standardised measure of staff support for mental health recovery exists. To develop and evaluate a measure of staff support for recovery. An initial draft of the measure was based on a systematic review of recovery processes, consultation (n = 61), and piloting (n = 20). Psychometric evaluation comprised three rounds of data collection from mental health service users (n = 92). INSPIRE has two sub-scales. The 20-item Support sub-scale has convergent validity (0.60) and adequate sensitivity to change. Exploratory factor analysis (variance 71.4-85.1 %, Kaiser-Meyer-Olkin 0.65-0.78) and internal consistency (range 0.82-0.85) indicate each recovery domain is adequately assessed. The 7-item Relationship sub-scale has convergent validity 0.69, test-retest reliability 0.75, internal consistency 0.89, a one-factor solution (variance 70.5 %, KMO 0.84) and adequate sensitivity to change. A 5-item Brief INSPIRE was also evaluated. INSPIRE and Brief INSPIRE demonstrate adequate psychometric properties, and can be recommended for research and clinical use.
Esposito, Dominick; Taylor, Erin Fries; Gold, Marsha
2009-02-01
Interest in disease management programs continues to grow as managed care plans, the federal and state governments, and other organizations consider such efforts as a means to improve health care quality and reduce costs. These efforts vary in size, scope, and target population. While large-scale programs provide the means to measure impacts, evaluation of smaller interventions remains valuable as they often represent the early planning stages of larger initiatives. This paper describes a multi-method approach for evaluating small interventions that sought to improve the quality of care for Medicaid beneficiaries with multiple chronic conditions. Our approach relied on quantitative and qualitative methods to develop a complete understanding of each intervention. Quantitative data in the form of both process measures, such as case manager contacts, and outcome measures, such as hospital use, were reported and analyzed. Qualitative information was collected through interviews and the development of logic models to document the flow of intervention activities and how they were intended to affect outcomes. The logic models helped us to understand the underlying reasons for the success or lack thereof of each intervention. The analysis provides useful information on several fronts. First, qualitative data provided valuable information about implementation. Second, process measures helped determine whether implementation occurred as anticipated. Third, outcome measures indicated the potential for favorable results later, possibly suggesting further study. Finally, the evaluation of qualitative and quantitative data in combination helped us assess the potential promise of each intervention and identify common themes and challenges across all interventions.
Occupational Noise Reduction in CNC Striping Process
NASA Astrophysics Data System (ADS)
Mahmad Khairai, Kamarulzaman; Shamime Salleh, Nurul; Razlan Yusoff, Ahmad
2018-03-01
Occupational noise-induced hearing loss from high exposure levels is a common occupational hazard. In the CNC striping process, employees exposed to high noise levels for periods as long as 8 hours are at risk of hearing loss and of physical and psychological stress that reduce productivity. In this paper, the high noise levels of the CNC striping process are measured and reduced to within the permissible noise exposure. The first condition is with all machines shut down, and the second condition is with all CNC machines in operation. For both conditions, noise exposures were measured to evaluate the noise problems and sources. After improvements were made, the noise exposures were measured again to evaluate the effectiveness of the reduction. The initial average noise level in the first condition was 95.797 dB(A). After the leakage in the pneumatic system was fixed, the noise was reduced to 55.517 dB(A). The average noise level in the second condition was 109.340 dB(A). After six machines were gathered in one area and that area was covered with a plastic curtain, the noise was reduced to 95.209 dB(A). In conclusion, the noise exposure from the CNC striping machines is high, exceeds the permissible noise exposure, and can be reduced to acceptable levels. The reduction of noise levels in the CNC striping process enhances productivity in the industry.
Modeling biogechemical reactive transport in a fracture zone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molinero, Jorge; Samper, Javier; Yang, Chan Bing; Zhang, Guoxiang
2005-01-14
A coupled model of groundwater flow, reactive solute transport and microbial processes for a fracture zone of the Aspo site in Sweden is presented. This is the model of the so-called Redox Zone Experiment aimed at evaluating the effects of tunnel construction on the geochemical conditions prevailing in fractured granite. It is found that a model accounting for microbially-mediated geochemical processes is able to reproduce the unexpected measured increasing trends of dissolved sulfate and bicarbonate. The model is also useful for testing hypotheses regarding the role of microbial processes and evaluating the sensitivity of model results to changes in biochemical parameters.
Social network supported process recommender system.
Ye, Yanming; Yin, Jianwei; Xu, Yueshen
2014-01-01
Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as a means of assisting process modeling. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on their feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are presented.
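The paper's three matching-degree measurements are not spelled out in the abstract, so the sketch below only illustrates the general idea with a single Jaccard-style similarity over process feature sets; the repository and feature sets are invented for the example.

```python
# Illustrative only: one simple matching-degree measure (Jaccard similarity over
# process feature sets) used to recommend the most similar existing process.
# The actual paper defines three measurements, which are not reproduced here.


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0


repository = {
    "order_fulfilment": {"receive_order", "check_stock", "ship", "invoice"},
    "procurement":      {"create_request", "approve", "order_goods", "invoice"},
}

draft = {"receive_order", "check_stock", "invoice"}  # partially modelled process

best = max(repository, key=lambda name: jaccard(draft, repository[name]))
print(best, jaccard(draft, repository[best]))   # order_fulfilment 0.75
```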
Measuring cognition in teams: a cross-domain review.
Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R
2014-08-01
The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.
Study on processing immiscible materials in zero gravity
NASA Technical Reports Server (NTRS)
Reger, J. L.; Mendelson, R. A.
1975-01-01
An experimental investigation was conducted to evaluate mixing immiscible metal combinations under several process conditions. Under one gravity, these included thermal processing, thermal plus electromagnetic mixing, and thermal plus acoustic mixing. The same process methods were applied during free fall in the MSFC drop tower facility. The design of the drop tower apparatus providing the electromagnetic and acoustic mixing equipment is included, and a thermal model was prepared to design the specimen and cooling procedure. Materials systems studied were Ca-La, Cd-Ga and Al-Bi; evaluation of the processed samples included morphology and electronic property measurements. The morphology was developed using optical and scanning electron microscopy and microprobe analyses. Electronic property characterization of the superconducting transition temperatures was made using an impedance change-tuned coil method.
Georgakis, D. Christine; Trace, David A.; Naeymi-Rad, Frank; Evens, Martha
1990-01-01
Medical expert systems require comprehensive evaluation of their diagnostic accuracy. The usefulness of these systems is limited without established evaluation methods. We propose a new methodology for evaluating the diagnostic accuracy and the predictive capacity of a medical expert system. We have adapted to the medical domain measures that have been used in the social sciences to examine the performance of human experts in the decision making process. Thus, in addition to the standard summary measures, we use measures of agreement and disagreement, and Goodman and Kruskal's λ and τ measures of predictive association. This methodology is illustrated by a detailed retrospective evaluation of the diagnostic accuracy of the MEDAS system. In a study using 270 patients admitted to the North Chicago Veterans Administration Hospital, diagnoses produced by MEDAS are compared with the discharge diagnoses of the attending physicians. The results of the analysis confirm the high diagnostic accuracy and predictive capacity of the MEDAS system. Overall, the agreement of the MEDAS system with the “gold standard” diagnosis of the attending physician has reached a 90% level.
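Goodman and Kruskal's λ, mentioned above, quantifies how much knowing one categorical variable (for example, the system diagnosis) reduces the error in predicting another (the physician's discharge diagnosis). A short sketch with a fabricated contingency table:

```python
import numpy as np

# Goodman-Kruskal lambda: proportional reduction in error when predicting the row
# variable (e.g., physician diagnosis) given the column variable (system diagnosis).
# The 3x3 contingency table below is fabricated for illustration.


def goodman_kruskal_lambda(table: np.ndarray) -> float:
    n = table.sum()
    e1 = n - table.sum(axis=1).max()   # errors predicting the overall modal row
    e2 = n - table.max(axis=0).sum()   # errors predicting the modal row within each column
    return (e1 - e2) / e1 if e1 else 0.0


table = np.array([[40,  3,  2],
                  [ 4, 35,  6],
                  [ 1,  5, 30]])
print(round(goodman_kruskal_lambda(table), 3))  # ~0.741 for this made-up table
```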
ERIC Educational Resources Information Center
Appenzellar, Anne B.; Kelley, H. Paul
The Measurement and Evaluation Center of the University of Texas (Austin) conducted a validity study to assist the Department of Management Science and Information (DMSI) at the College of Business Administration in establishing a program of credit by examination for an introductory course in electronic data processing--Data Processing Analysis…
Test Item Analysis: An Educator Professionalism Approach
ERIC Educational Resources Information Center
Hamzah, Mohd Sahandri Gani; Abdullah, Saifuddin Kumar
2011-01-01
The evaluation of learning is a systematic process involving testing, measuring and evaluation. In the testing step, a teacher needs to choose the best instrument that can test the minds of students. Testing will produce scores or marks with many variations either in homogeneous or heterogeneous forms that will be used to categorize the scores…
ERIC Educational Resources Information Center
Jensen, Shelly M.
2017-01-01
As a response to accountability and pressure from the No Child Left Behind Act of 2002, the South Dakota Department of Education informed school administrators of a state-wide transformation of the way districts measure teacher effectiveness in their schools. In 2011, "The Framework for Teaching Evaluation Instrument" (Danielson, 2002)…
PAL[R] Services Being Measured through Scientifically-Based Evaluation Process
ERIC Educational Resources Information Center
Perspectives in Peer Programs, 2007
2007-01-01
In January 2006, PAL[R] Peer Assistance and Leadership, a Promising Prevention Program of Workers Assistance Program, Inc. (WAP), received a $30,000 grant from the Center for Substance Abuse Prevention (CSAP) in order to be scientifically-evaluated on the outcomes and effectiveness of its programs and services. According to the grant, the…
49 CFR 611.203 - New Starts project justification criteria.
Code of Federal Regulations, 2014 CFR
2014-10-01
... project justification, FTA will evaluate information developed locally through the planning and NEPA processes. (1) The method used by FTA to evaluate and rate projects will be a multiple measure approach by... each of the criteria in § 611.203(b)(1) through (6) will be expressed in terms of descriptive...
49 CFR 611.203 - New Starts project justification criteria.
Code of Federal Regulations, 2013 CFR
2013-10-01
... project justification, FTA will evaluate information developed locally through the planning and NEPA processes. (1) The method used by FTA to evaluate and rate projects will be a multiple measure approach by... each of the criteria in § 611.203(b)(1) through (6) will be expressed in terms of descriptive...
Measuring Music Education: Music Teacher Evaluation in Pennsylvania
ERIC Educational Resources Information Center
Emert, Dennis; Sheehan, Scott; Deitz, O. David
2013-01-01
A major challenge currently facing Pennsylvania music educators (and many music educators across the country) is change to the evaluation process of teachers in Non-Tested Grades and Subjects (NTGS). The law directing this change is known as Act 82 and comes from the Pennsylvania legislature, authorized through House Bill 1901. The Pennsylvania…
The work processes and products of the environment are now recognized by scientists and economists alike for the contributions that they make to human and natural systems. Thus, the nature and evaluation of nonmarket goods and services are the subject of much current research in...
Systematic review of HIV prevention interventions in China: a health communication perspective.
Xiao, Zhiwen; Noar, Seth M; Zeng, Lily
2014-02-01
To examine whether communication strategies and principles have been utilized in the HIV prevention intervention programs conducted in China. Comprehensive literature searches were conducted using PsycINFO, Medline, and Academic Search Complete with combinations of a number of keywords. Studies were included if they (1) were conducted in China and published prior to October 2011; (2) tested interventions promoting HIV/sexual risk reduction; and (3) reported empirical outcome evaluations on HIV knowledge, condom use and other condom-related variables. Data on 11 dimensions were extracted and analyzed, including formative research, theory, message targeting, messenger and channels, process evaluation, evaluation design, outcome measures. The majority of the 45 intervention studies were not theory-based, did not report conducting formative research or process evaluation, used pretest-posttest control group designs, combined nonmedia channels, printed and visual materials, and employed HIV knowledge and condom use as outcome measures. Many HIV prevention interventions in China have been successful in reducing HIV risk-related outcomes. This literature has its weaknesses; however, the current review illuminates gaps in the literature and points to important future directions for research.
Human skin surface evaluation by image processing
NASA Astrophysics Data System (ADS)
Zhu, Liangen; Zhan, Xuemin; Xie, Fengying
2003-12-01
Human skin gradually loses its tension and becomes very dry with the passage of time. Use of cosmetics is effective in preventing skin aging, and many cosmetic products are now available. To show their effects, it is desirable to develop a way to evaluate skin surface condition quantitatively. In this paper, an automatic skin evaluation method is proposed. The skin surface has a pattern called grid texture. This pattern is composed of the valleys that spread vertically, horizontally, and obliquely and the hills separated by them. Changes in the grid are closely linked to the skin surface condition and can serve as a good indicator of that condition. By measuring the skin grid using digital image processing technologies, we can evaluate the skin surface with respect to its aging, health, and alimentary status. In this method, the skin grid is first detected to form a closed net. Then, skin parameters such as roughness, tension, scale and gloss can be calculated from statistical measurements of the net. By analyzing these parameters, the condition of the skin can be monitored.
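The paper's own grid-detection pipeline is not reproduced here; as a hedged starting point, the sketch below segments dark pixels as candidate valleys and reports two crude roughness statistics on a synthetic image. The threshold and the statistics are assumptions made purely for illustration.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): treat dark pixels as skin "valleys",
# then summarise roughness as intensity variation plus valley density.


def skin_roughness(gray: np.ndarray, valley_threshold: float = 0.4):
    valleys = gray < valley_threshold        # crude valley segmentation
    roughness = float(gray.std())            # intensity variation as a roughness proxy
    valley_density = float(valleys.mean())   # fraction of pixels lying in valleys
    return roughness, valley_density


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    synthetic_skin = rng.random((64, 64))    # stand-in for a skin image scaled to [0, 1]
    print(skin_roughness(synthetic_skin))
```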
Examining the Role of Patient Experience Surveys in Measuring Health Care Quality
Elliott, Marc N.; Zaslavsky, Alan M.; Hays, Ron D.; Lehrman, William G.; Rybowski, Lise; Edgman-Levitan, Susan; Cleary, Paul D.
2015-01-01
Patient care experience surveys evaluate the degree to which care is patient-centered. This article reviews the literature on the association between patient experiences and other measures of health care quality. Research indicates that better patient care experiences are associated with higher levels of adherence to recommended prevention and treatment processes, better clinical outcomes, better patient safety within hospitals, and less health care utilization. Patient experience measures that are collected using psychometrically sound instruments, employing recommended sample sizes and adjustment procedures, and implemented according to standard protocols are intrinsically meaningful and are appropriate complements for clinical process and outcome measures in public reporting and pay-for-performance programs. PMID:25027409
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1982-01-01
Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.
Dorn, Barry C; Savoia, Elena; Testa, Marcia A; Stoto, Michael A; Marcus, Leonard J
2007-01-01
Survey instruments for evaluating public health preparedness have focused on measuring the structure and capacity of local, state, and federal agencies, rather than linkages among structure, process, and outcomes. To focus evaluation on the latter, we evaluated the linkages among individuals, organizations, and systems using the construct of "connectivity" and developed a measurement instrument. Results from focus groups of emergency preparedness first responders generated 62 items used in the development sample of 187 respondents. Item reduction and factor analyses were conducted to confirm the scale's components. The 62 items were reduced to 28. Five scales explained 70% of the total variance (number of items, percent variance explained, Cronbach's alpha), including connectivity with the system (8, 45%, 0.94), coworkers (7, 7%, 0.91), organization (7, 12%, 0.93), and perceptions (6, 6%, 0.90). Discriminant validity was found to be consistent with the factor structure. We developed a Connectivity Measurement Tool for the public health workforce consisting of a 34-item questionnaire found to be a reliable measure of connectivity with preliminary evidence of construct validity.
NASA Astrophysics Data System (ADS)
Vuori, Tero; Olkkonen, Maria
2006-01-01
The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences among imaging technologies. A methodological aim is to find out whether eye movements can be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests, in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; e.g., saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users have. Results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perception processes in image quality perception and evaluation by showing differences between perceptual processes when the cognitive task varies.
Rajaram, Ravi; Saadat, Lily; Chung, Jeanette; Dahlke, Allison; Yang, Anthony D; Odell, David D; Bilimoria, Karl Y
2016-12-01
In 2011, the Accreditation Council for Graduate Medical Education (ACGME) expanded restrictions on resident duty hours. While studies have shown no association between these restrictions and improved outcomes, process-of-care and patient experience measures may be more sensitive to resident performance, and thus may be impacted by duty hour policies. The objective of this study was to evaluate the association between the 2011 resident duty hour reform and measures of processes of care and patient experience. Hospital Consumer Assessment of Healthcare Providers and Systems survey data and process-of-care scores were obtained from the Centers for Medicare and Medicaid Services Hospital Compare website for 1 year prior to (1 July 2010 to 30 June 2011) and 1 year after (1 July 2011 to 30 June 2012) duty hour reform implementation. Using a difference-in-differences model, non-teaching and teaching hospitals were compared before and after the 2011 reform to test the association of this policy with changes in process-of-care and patient experience measure scores. Duty hour reform was not associated with a change in the five patient experience measures evaluated, including patients rating a hospital 9 or 10 (coefficient -0.003, 95% CI -0.79 to 0.79) or stating they would 'definitely recommend' a hospital (coefficient -0.28, 95% CI -1.01 to 0.44). For all 10 process-of-care measures examined, such as antibiotic timing (coefficient -0.462, 95% CI -1.502 to 0.579) and discontinuation (0.188, 95% CI -0.529 to 0.904), duty hour reform was not associated with a change in scores. The 2011 ACGME duty hour reform was not associated with improvements in process-of-care or patient experience measures. These findings should be taken into account when considering further reform of resident duty hour policies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Systematic evaluation of atmospheric chemistry-transport model CHIMERE
NASA Astrophysics Data System (ADS)
Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene
2017-04-01
Regional-scale atmospheric chemistry-transport models (CTM) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving atmospheric composition. Model performance evaluation against measurement data is critical to understand model limits and the degree of confidence in model results. The CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecasting and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE CTM versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It makes it possible to compare overall model performance across subsequent model versions (operational evaluation), identify specific processes and/or model inputs that could be improved (diagnostic evaluation), and test the model's sensitivity to changes such as emission reductions and meteorological events (dynamic evaluation). The observation datasets currently used for the evaluation are EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.
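As a minimal illustration of what the operational-evaluation stage computes, the sketch below scores modelled surface concentrations against station observations with bias, RMSE, and correlation. The file names and column layout are assumptions for illustration, not part of the CHIMERE tool chain.

```python
# Hypothetical sketch of an "operational evaluation" step: score modelled surface
# concentrations against observations (e.g., EMEP-style station data) with the usual
# summary statistics. File names and column names are assumed, not CHIMERE outputs.
import numpy as np
import pandas as pd

obs = pd.read_csv("emep_o3_stations.csv", parse_dates=["time"])        # station, time, obs
mod = pd.read_csv("chimere_o3_at_stations.csv", parse_dates=["time"])  # station, time, mod
df = obs.merge(mod, on=["station", "time"]).dropna()

bias = (df["mod"] - df["obs"]).mean()
rmse = np.sqrt(((df["mod"] - df["obs"]) ** 2).mean())
corr = df["mod"].corr(df["obs"])
print(f"mean bias = {bias:.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")
```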
Clerkin, Elise M.; Fisher, Christopher R.; Sherman, Jeffrey W.; Teachman, Bethany A.
2013-01-01
Objective This study explored the automatic and controlled processes that may influence performance on an implicit measure across cognitive-behavioral group therapy for panic disorder. Method The Quadruple Process model was applied to error scores from an Implicit Association Test evaluating associations between the concepts Me (vs. Not Me) + Calm (vs. Panicked) to evaluate four distinct processes: Association Activation, Detection, Guessing, and Overcoming Bias. Parameter estimates were calculated in the panic group (n=28) across each treatment session where the IAT was administered, and at matched times when the IAT was completed in the healthy control group (n=31). Results Association Activation for Me + Calm became stronger over treatment for participants in the panic group, demonstrating that it is possible to change automatically activated associations in memory (vs. simply overriding those associations) in a clinical sample via therapy. As well, the Guessing bias toward the calm category increased over treatment for participants in the panic group. Conclusions This research evaluates key tenets about the role of automatic processing in cognitive models of anxiety, and emphasizes the viability of changing the actual activation of automatic associations in the context of treatment, versus only changing a person’s ability to use reflective processing to overcome biased automatic processing. PMID:24275066
Construction of Gallium Point at NMIJ
NASA Astrophysics Data System (ADS)
Widiatmo, J. V.; Saito, I.; Yamazawa, K.
2017-03-01
Two open-type gallium point cells were fabricated using ingots with nominal purities of 7N. Measurement systems for realizing the melting point of gallium with these cells were built. The melting point of gallium was realized repeatedly with these systems to evaluate repeatability. Measurements were also performed to evaluate the effect of the hydrostatic pressure of the molten gallium present during the melting process and the effect of the gas pressure filling the cell. Direct comparisons between the two cells were conducted, aimed at evaluating the consistency of each cell, especially in relation to the nominal purity. A direct comparison between the open-type and a sealed-type gallium point cell was also conducted. Chemical analysis was carried out on samples extracted from the ingots used in both newly built open-type gallium point cells, from which the effect of impurities in the ingots was evaluated.
Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan
2007-02-01
The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.
John, Mary; Jeffries, Fiona W; Acuna-Rivera, Marcela; Warren, Fiona; Simonds, Laura M
2015-01-01
Recovery has become a central concept in mental health service delivery, and several recovery-focused measures exist for adults. The concept's applicability to young people's mental health experience has been neglected, and no measures yet exist. The aim of this work is to develop measures of recovery for use in specialist child and adolescent mental health services. On the basis of 21 semi-structured interviews, three recovery measures were devised, one for completion by the young person and two for completion by the parent/carer. Two parent/carer measures were devised in order to assess both their perspective on their child's recovery and their own recovery process. The questionnaires were administered to a UK sample of 47 young people (10-18 years old) with anxiety and depression and their parents, along with a measure used to routinely assess treatment progress and outcome and a measure of self-esteem. All three measures had high internal consistency (alpha ≥ 0.89). Young people's recovery scores were correlated negatively with scores on a measure used to routinely assess treatment progress and outcome (r = -0.75) and positively with self-esteem (r = 0.84). Parent and young persons' reports of the young person's recovery were positively correlated (r = 0.61). Parent report of the young person's recovery and of their own recovery process were positively correlated (r = 0.75). The three measures have the potential to be used in mental health services to assess recovery processes in young people with mental health difficulties and correspondence with symptomatic improvement. The measures provide a novel way of capturing the parental/caregiver perspective on recovery and caregivers' own wellbeing. No tools exist to evaluate recovery-relevant processes in young people treated in specialist mental health services. This study reports on the development and psychometric evaluation of three self-report recovery-relevant assessments for young people and their caregivers. Findings indicate a high degree of correspondence between young person and caregiver reports of recovery in the former. The recovery assessments correlate inversely with a standardized symptom-focused measure and positively with self-esteem. Copyright © 2014 John Wiley & Sons, Ltd.
Performance Evaluation of Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hatsutori, Y.; Kobayashi, Y.; Gouda, N.; Yano, T.; Murooka, J.; Niwa, Y.; Yamada, Y.
We report the results of the performance evaluation of the first Japanese astrometry satellite, Nano-JASMINE. It is a very small satellite, weighing only 35 kg, and aims to carry out astrometric measurements of nearby bright stars (z ≤ 7.5 mag) with an accuracy of 3 milli-arcseconds. Nano-JASMINE will be launched by a Cyclone-4 rocket in August 2011 from Brazil. Its performance is currently being evaluated through a series of performance tests and numerical analyses. As a result, the engineering model (EM) of the telescope was measured to achieve diffraction-limited performance, confirming that it is sufficient for scientific astrometry.
Lindgren, Kristen P.; Ramirez, Jason J.; Olin, Cecilia C.; Neighbors, Clayton
2016-01-01
Drinking identity - how much individuals view themselves as drinkers - is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as a predictor relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every three months over two academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. PMID:27428756
Lindgren, Kristen P; Ramirez, Jason J; Olin, Cecilia C; Neighbors, Clayton
2016-09-01
Drinking identity-how much individuals view themselves as drinkers-is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as predictors relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every 3 months over 2 academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
An Effective Model for Continuing Education Training in Evidence-Based Practice
ERIC Educational Resources Information Center
Parrish, Danielle E.; Rubin, Allen
2011-01-01
This study utilized a replicated one-group pretest-posttest design with 3 month follow-up to evaluate the impact of a one-day continuing education training on the evidence-based practice (EBP) process with community practitioners (N = 69). Outcome measures assessed the level of workshop participants' familiarity with the EBP process, their…
Tom Leuschen; Dale Wade; Paula Seamon
2001-01-01
The success of a fire use program is in large part dependent on a solid foundation set in clear and concise planning. The planning process results in specific goals and measurable objectives for fire application, provides a means of setting priorities, and establishes a mechanism for evaluating and refining the process to meet the desired future condition. It is an...
ERIC Educational Resources Information Center
Lamers, Audri; Delsing, Marc J. M. H.; van Widenfelt, Brigit M.; Vermeiren, Robert R. J. M.
2015-01-01
Background: The therapeutic alliance between multidisciplinary teams and parents within youth (semi) residential psychiatry is essential for the treatment process and forms a promising process variable for Routine Outcome Monitoring (ROM). No short evaluative instrument, however, is currently available to assess parent-team alliance. Objective: In…
The Effects of Directional Processing on Objective and Subjective Listening Effort
ERIC Educational Resources Information Center
Picou, Erin M.; Moore, Travis M.; Ricketts, Todd A.
2017-01-01
Purpose: The purposes of this investigation were (a) to evaluate the effects of hearing aid directional processing on subjective and objective listening effort and (b) to investigate the potential relationships between subjective and objective measures of effort. Method: Sixteen adults with mild to severe hearing loss were tested with study…
The Use of Outcome Mapping in the Educational Context
ERIC Educational Resources Information Center
Lewis, Anna
2014-01-01
Outcome Mapping is intended to measure the process by which change occurs, it shifts away from the products of the program to focus on changes in behaviors, relationships, actions, and/or activities of the people involved in the treatment program. This process-oriented methodology, most often used in designing and evaluating community development…
Depressed Mood Mediates Decline in Cognitive Processing Speed in Caregivers
ERIC Educational Resources Information Center
Vitaliano, Peter P.; Zhang, Jianping; Young, Heather M.; Caswell, Lisa W.; Scanlan, James M.; Echeverria, Diana
2009-01-01
Purpose: Very few studies have examined cognitive decline in caregivers versus noncaregivers, and only 1 study has examined mediators of such decline. We evaluated the relationship between caregiver status and decline on the digit symbol test (DST; a measure of processing speed, attention, cognitive-motor translation, and visual scanning) and…
ERIC Educational Resources Information Center
Lightburn, Millard E.; Fraser, Barry J.
The study involved implementing and evaluating activities that actively engage students in the process of gathering, processing and analyzing data derived from human body measurements, with students using their prior knowledge acquired in science, mathematics, and computer classes to interpret this information. In the classroom activities…
North by Northwest: Quality Assurance and Evaluation Processes in European Education
ERIC Educational Resources Information Center
Grek, Sotiria; Lawn, Martin; Lingard, Bob; Varjo, Janne
2009-01-01
Governing processes in Europe and within Europeanization are often opaque and appearances can deceive. The normative practices of improvement in education, and the connected growth in performance measurement, have been largely understood in their own terms. However, the management of flows of information through quality assurance can be examined…
Conklin, Annalijn; Nolte, Ellen; Vrijhoef, Hubertus
2013-01-01
An overview was produced of approaches currently used to evaluate chronic disease management in selected European countries. The study aims to describe the methods and metrics used in Europe as a first step towards advancing the methodological basis for their assessment. A common template for the collection of evaluation methods and performance measures was sent to key informants in twelve European countries; responses were summarized in tables based on the template's evaluation categories. Extracted data were descriptively analyzed. Approaches to the evaluation of chronic disease management vary widely in objectives, designs, metrics, observation period, and data collection methods. Half of the reported studies used noncontrolled designs. The majority measured clinical processes, patient behavior and satisfaction, and cost and utilization; several also used a range of structural indicators. Effects are usually observed over 1 or 3 years in patient populations with a single, commonly prevalent, chronic disease. There is wide variation within and between European countries in approaches to evaluating chronic disease management, in their objectives, designs, indicators, target audiences, and actors involved. This study is the first extensive international overview of the area reported in the literature.
Mirzazadeh, Azim; Gandomkar, Roghayeh; Hejri, Sara Mortaz; Hassanzadeh, Gholamreza; Koochak, Hamid Emadi; Golestani, Abolfazl; Jafarian, Ali; Jalili, Mohammad; Nayeri, Fatemeh; Saleh, Narges; Shahi, Farhad; Razavi, Seyed Hasan Emami
2016-02-01
The purpose of this study was to utilize the Context, Input, Process and Product (CIPP) evaluation model as a comprehensive framework to guide initiating, planning, implementing and evaluating a revised undergraduate medical education programme. The eight-year longitudinal evaluation study consisted of four phases compatible with the four components of the CIPP model. In the first phase, we explored the strengths and weaknesses of the traditional programme as well as contextual needs, assets, and resources. For the second phase, we proposed a model for the programme considering contextual features. During the process phase, we provided formative information for revisions and adjustments. Finally, in the fourth phase, we evaluated the outcomes of the new undergraduate medical education programme in the basic sciences phase. Information was collected from different sources such as medical students, faculty members, administrators, and graduates, using various qualitative and quantitative methods including focus groups, questionnaires, and performance measures. The CIPP model has the potential to guide policy makers to systematically collect evaluation data and to manage stakeholders' reactions at each stage of the reform in order to make informed decisions. However, the model may result in evaluation burden and fail to address some unplanned evaluation questions.
Digital signal processing for velocity measurements in dynamical material's behaviour studies.
Devlaminck, Julien; Luc, Jérôme; Chanal, Pierre-Yves
2014-03-01
In this work, we describe different configurations of optical fiber interferometers (Michelson and Mach-Zehnder types) used to measure velocities in studies of dynamic material behaviour. We detail the processing algorithms developed and optimized to improve the performance of these interferometers, especially in terms of time and frequency resolution. Three methods of analysing interferometric signals were studied. For Michelson interferometers, time-frequency analysis of the signals by Short-Time Fourier Transform (STFT) is compared to time-frequency analysis by Continuous Wavelet Transform (CWT). The results showed that the CWT was more suitable than the STFT for signals with a low signal-to-noise ratio and for regions of low velocity and high acceleration. For Mach-Zehnder interferometers, the measurement is carried out by analysing the phase shift between three interferometric signals (triature processing). These three digital signal processing methods were evaluated, their measurement uncertainties estimated, and their restrictions or operational limitations specified on the basis of experiments performed on a pulsed power machine.
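The STFT-based velocity extraction can be sketched briefly. The example below is a simplified illustration rather than the authors' processing chain: it builds a synthetic interferometric beat signal, tracks the peak beat frequency in each short-time window, and converts it to velocity via v = λ f / 2; the sampling rate, wavelength, and velocity profile are assumed values.

```python
# Hypothetical sketch of STFT-based velocity extraction from a Michelson-type
# interferometric signal: the beat frequency tracked over time is converted to
# velocity via v = lambda * f / 2. All numerical values are illustrative.
import numpy as np
from scipy.signal import stft

fs = 1e9                      # sample rate [Hz] (assumed)
lam = 1550e-9                 # laser wavelength [m] (assumed)
t = np.arange(0, 20e-6, 1 / fs)
velocity_true = 50.0 * (1 - np.exp(-t / 5e-6))           # synthetic velocity ramp [m/s]
phase = 2 * np.pi * np.cumsum(2 * velocity_true / lam) / fs
signal = np.cos(phase) + 0.1 * np.random.randn(t.size)   # noisy interferometric beat

f, tau, Z = stft(signal, fs=fs, nperseg=1024, noverlap=768)
beat = f[np.abs(Z).argmax(axis=0)]        # peak beat frequency in each time slice
velocity_est = lam * beat / 2.0           # convert frequency to velocity
print(velocity_est[:5])
```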
Measurement of company effectiveness using analytic network process method
NASA Astrophysics Data System (ADS)
Goran, Janjić; Zorana, Tanasić; Borut, Kosec
2017-07-01
The sustainable development of an organisation is monitored through the organisation's performance, whose strategy incorporates all stakeholders' requirements in advance. The strategic management concept enables organisations to monitor and evaluate their effectiveness together with their efficiency by monitoring the implementation of set strategic goals. In the process of monitoring and measuring effectiveness, an organisation can draw on multiple-criteria decision-making methods. This study uses the analytic network process (ANP) method to define the weight factors of the mutual influences of all the important elements of an organisation's strategy. The calculation of the organisation's effectiveness is based on these weight factors and on the degree of fulfilment of the target values of the strategic-map measures. New business conditions change the relative importance of certain elements of an organisation's business for competitive advantage on the market, and increasing emphasis is being placed on non-material resources when selecting the organisation's most important measures.
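The aggregation behind such an effectiveness score can be sketched in a few lines. The example below is illustrative only: it derives ANP-style priorities as the limit of a small, invented weighted supermatrix and combines them with invented degrees of goal fulfilment; none of the numbers come from the study.

```python
# Hypothetical sketch of the final aggregation step: ANP-style priorities are obtained
# as the limit of a column-stochastic supermatrix, then combined with the degree of
# fulfilment of each strategic-map measure. Matrix and fulfilment values are invented.
import numpy as np

# Weighted supermatrix of mutual influences among four strategy elements (columns sum to 1).
W = np.array([
    [0.00, 0.40, 0.30, 0.25],
    [0.35, 0.00, 0.40, 0.25],
    [0.35, 0.30, 0.00, 0.50],
    [0.30, 0.30, 0.30, 0.00],
])

limit = np.linalg.matrix_power(W, 100)      # raise to a high power to reach the limit matrix
weights = limit[:, 0] / limit[:, 0].sum()   # limit priorities (any column, normalized)

fulfilment = np.array([0.80, 0.65, 0.90, 0.70])   # degree of goal fulfilment per element
effectiveness = float(weights @ fulfilment)
print(f"weights = {np.round(weights, 3)}, effectiveness = {effectiveness:.2f}")
```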
A systematic review of the care coordination measurement landscape
2013-01-01
Background Care coordination has increasingly been recognized as an important aspect of high-quality health care delivery. Robust measures of coordination processes will be essential tools to evaluate, guide and support efforts to understand and improve coordination, yet little agreement exists among stakeholders about how to best measure care coordination. We aimed to review and characterize existing measures of care coordination processes and identify areas of high and low density to guide future measure development. Methods We conducted a systematic review of measures published in MEDLINE through April 2012 and identified from additional key sources and informants. We characterized included measures with respect to the aspects of coordination measured (domain), measurement perspective (patient/family, health care professional, system representative), applicable settings and patient populations (by age and condition), and data used (survey, chart review, administrative claims). Results Among the 96 included measure instruments, most relied on survey methods (88%) and measured aspects of communication (93%), in particular the transfer of information (81%). Few measured changing coordination needs (11%). Nearly half (49%) of instruments mapped to the patient/family perspective; 29% to the system representative perspective and 27% to the health care professional perspective. Few instruments were applicable to settings other than primary care (58%), inpatient facilities (25%), and outpatient specialty care (22%). Conclusions New measures are needed that evaluate changing coordination needs, coordination as perceived by health care professionals, coordination in the home health setting, and coordination for patients at the end of life. PMID:23537350
The specification of personalised insoles using additive manufacturing.
Salles, André S; Gyi, Diane E
2012-01-01
Research has been conducted to explore a process that delivers insoles for personalised footwear for the high street using additive manufacturing (AM) and to evaluate the use of such insoles in terms of discomfort. The footwear personalisation process was first identified: (1) foot capture; (2) anthropometric measurements; (3) insole design; and (4) additive manufacturing. In order to explore and evaluate this process, recreational runners were recruited. They had both feet scanned and 15 anthropometric measurements taken. Personalised insoles were designed from the scans and manufactured using AM. Participants were fitted with footwear under two experimental conditions, personalised and control, which were compared in terms of discomfort. The mean ratings for discomfort variables were generally low for both conditions and no significant differences were detected between conditions. In general, the personalisation process showed promise in terms of the scan data, although the foot capture position may not be considered 'gold standard'. Polyamide, the material used for the insoles, demonstrated positive attributes: visual inspection revealed no signs of breaking. The footwear personalisation process described and explored in this study shows potential and can be considered a good starting point for designers and researchers.
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
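The identifiability notion used here can be made concrete with a small sketch. Under linear mass balances A·x = 0 with some streams measured, an unmeasured variable is identifiable exactly when its value is pinned down by the balances; the example below tests this with a rank criterion on a hypothetical three-node, five-stream layout that is not the plant from the study.

```python
# Hypothetical sketch of an identifiability test under linear mass balances A @ x = 0.
# An unmeasured variable x_j is identifiable iff its unit vector lies in the row space
# of the unmeasured block, i.e. appending it as a row does not increase the rank.
import numpy as np

def identifiable_unmeasured(A, measured):
    """Return, per unmeasured variable index, True if the balances determine it uniquely."""
    unmeasured = [j for j in range(A.shape[1]) if j not in measured]
    Au = A[:, unmeasured]                       # balance coefficients of unmeasured variables
    base_rank = np.linalg.matrix_rank(Au)
    flags = {}
    for k, j in enumerate(unmeasured):
        e = np.zeros((1, Au.shape[1]))
        e[0, k] = 1.0
        flags[j] = np.linalg.matrix_rank(np.vstack([Au, e])) == base_rank
    return flags

# Example: 3 node balances over 5 streams; streams 0 and 3 are measured (assumed layout).
A = np.array([
    [1, -1,  0,  0,  0],    # node 1: stream 0 in, stream 1 out
    [0,  1, -1, -1,  0],    # node 2: stream 1 in, streams 2 and 3 out
    [0,  0,  1,  0, -1],    # node 3: stream 2 in, stream 4 out
])
print(identifiable_unmeasured(A, measured={0, 3}))
```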
In-Situ-measurement of restraining forces during forming of rectangular cups
NASA Astrophysics Data System (ADS)
Singer, M.; Liewald, M.
2016-11-01
This contribution introduces a new method for evaluating the restraining forces during forming of rectangular cups, with the goal of eliminating the disadvantages of currently established measurement procedures. With this method, forming forces are measured indirectly through the elastic deformation of the die structure caused by the locally varying tribological system. To this end, two sensors were integrated into the punch to measure the restraining forces during the forming process. It was thereby possible to evaluate the effects of different lubricants, showing the time-dependent trend as a function of stroke during forming of the materials DP600 and DC04. A main advantage of this testing method is that friction-related data are obtained from the physical deep drawing process itself, and that the restraining forces actually acting at different areas of the deep-drawn part can be measured in a single test. Measurement results obtained from both sensors were integrated into an LS-DYNA simulation in which the coefficient of friction was treated as a function of time. The simulated and deep-drawn parts were then analysed and compared in specific areas with regard to locally measured part thickness. Results show an improvement in simulation quality when using locally varying, time-dependent coefficients of friction compared with the commonly used constant values.
Quality of narrative operative reports in pancreatic surgery
Wiebe, Meagan E.; Sandhu, Lakhbir; Takata, Julie L.; Kennedy, Erin D.; Baxter, Nancy N.; Gagliardi, Anna R.; Urbach, David R.; Wei, Alice C.
2013-01-01
Background Quality in health care can be evaluated using quality indicators (QIs). Elements contained in the surgical operative report are potential sources for QI data, but little is known about the completeness of the narrative operative report (NR). We evaluated the completeness of the NR for patients undergoing a pancreaticoduodenectomy. Methods We reviewed NRs for patients undergoing a pancreaticoduodenectomy over a 1-year period. We extracted 79 variables related to patient and narrator characteristics, process of care measures, surgical technique and oncology-related outcomes by document analysis. Data were coded and evaluated for completeness. Results We analyzed 74 NRs. The median number of variables reported was 43.5 (range 13–54). Variables related to surgical technique were most complete. Process of care and oncology-related variables were often omitted. Completeness of the NR was associated with longer operative duration. Conclusion The NRs were often incomplete and of poor quality. Important elements, including process of care and oncology-related data, were frequently missing. Thus, the NR is an inadequate data source for QI. Development and use of alternative reporting methods, including standardized synoptic operative reports, should be encouraged to improve documentation of care and serve as a measure of quality of surgical care. PMID:24067527
The Role of Intuition in the Generation and Evaluation Stages of Creativity
Pétervári, Judit; Osman, Magda; Bhattacharya, Joydeep
2016-01-01
Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research. PMID:27703439
Quality of narrative operative reports in pancreatic surgery.
Wiebe, Meagan E; Sandhu, Lakhbir; Takata, Julie L; Kennedy, Erin D; Baxter, Nancy N; Gagliardi, Anna R; Urbach, David R; Wei, Alice C
2013-10-01
Quality in health care can be evaluated using quality indicators (QIs). Elements contained in the surgical operative report are potential sources for QI data, but little is known about the completeness of the narrative operative report (NR). We evaluated the completeness of the NR for patients undergoing a pancreaticoduodenectomy. We reviewed NRs for patients undergoing a pancreaticoduodenectomy over a 1-year period. We extracted 79 variables related to patient and narrator characteristics, process of care measures, surgical technique and oncology-related outcomes by document analysis. Data were coded and evaluated for completeness. We analyzed 74 NRs. The median number of variables reported was 43.5 (range 13-54). Variables related to surgical technique were most complete. Process of care and oncology-related variables were often omitted. Completeness of the NR was associated with longer operative duration. The NRs were often incomplete and of poor quality. Important elements, including process of care and oncology-related data, were frequently missing. Thus, the NR is an inadequate data source for QI. Development and use of alternative reporting methods, including standardized synoptic operative reports, should be encouraged to improve documentation of care and serve as a measure of quality of surgical care.
Application of a responsive evaluation approach in medical education.
Curran, Vernon; Christopher, Jeanette; Lemire, Francine; Collins, Alice; Barrett, Brendan
2003-03-01
This paper reports on the usefulness of a responsive evaluation model in evaluating the clinical skills assessment and training (CSAT) programme at the Faculty of Medicine, Memorial University of Newfoundland, Canada. The purpose of this paper is to introduce the responsive evaluation approach, ascertain its utility, feasibility, propriety and accuracy in a medical education context, and discuss its applicability as a model for medical education programme evaluation. Robert Stake's original 12-step responsive evaluation model was modified and reduced to five steps, including: (1) stakeholder audience identification, consultation and issues exploration; (2) stakeholder concerns and issues analysis; (3) identification of evaluative standards and criteria; (4) design and implementation of evaluation methodology; and (5) data analysis and reporting. This modified responsive evaluation process was applied to the CSAT programme and a meta-evaluation was conducted to evaluate the effectiveness of the approach. The responsive evaluation approach was useful in identifying the concerns and issues of programme stakeholders, solidifying the standards and criteria for measuring the success of the CSAT programme, and gathering rich and descriptive evaluative information about educational processes. The evaluation was perceived to be human resource dependent in nature, yet was deemed to have been practical, efficient and effective in uncovering meaningful and useful information for stakeholder decision-making. Responsive evaluation is derived from the naturalistic paradigm and concentrates on examining the educational process rather than predefined outcomes of the process. Responsive evaluation results are perceived as having more relevance to stakeholder concerns and issues, and therefore more likely to be acted upon. Conducting an evaluation that is responsive to the needs of these groups will ensure that evaluative information is meaningful and more likely to be used for programme enhancement and improvement.
Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature
Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine
2018-01-01
Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that very few empirical studies have evaluated priority setting at the meso level, and it is likely that we did not capture all relevant studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved by incorporating and adapting systematic priority setting processes and frameworks to the context where they are used, and by considering both process and outcome measures during priority setting and resource allocation. PMID:29511741
Serviceable pavement marking retroreflectivity levels : technical report.
DOT National Transportation Integrated Search
2009-03-01
This research addressed an array of issues related to measuring pavement marking retroreflectivity, factors related to pavement marking performance, the subjective evaluation process, best practices for using mobile retroreflectometers, sampling pav...
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models
2015-07-06
David Frederic Crouse, Naval Research Laboratory, 4555 Overlook Ave. ... measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms
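To illustrate why angular measurements interact badly with sigma-point averaging, the sketch below (an illustration of the general issue, not the report's algorithm) propagates unscented sigma points through a bearing model and compares a naive weighted mean of the angles with a circular mean; all numbers are invented.

```python
# Hypothetical sketch: sigma points pushed through an atan2 bearing model.
# A naive weighted mean of angles that straddle +/- pi is badly wrong; a circular
# (vector) mean handles the wrap-around. Values are illustrative only.
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard unscented sigma points and weights for an n-dimensional Gaussian."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

mean = np.array([-1.0, 0.0])             # target roughly behind the sensor
cov = np.diag([0.2, 0.5])                # position uncertainty
pts, w = sigma_points(mean, cov)

bearings = np.arctan2(pts[:, 1], pts[:, 0])              # angles straddle +/- pi
naive_mean = w @ bearings                                 # arithmetic mean: misleading
circ_mean = np.arctan2(w @ np.sin(bearings), w @ np.cos(bearings))
print(f"naive mean = {naive_mean:.3f} rad, circular mean = {circ_mean:.3f} rad")
```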
NASA Astrophysics Data System (ADS)
Kongar, N. Elif
2004-12-01
Today, since customers are able to obtain similar-quality products at similar prices, lead time has become the only preference criterion for most consumers. Therefore, it is crucial that the lead time, i.e., the time spent from the raw material phase until the manufactured good reaches the customer, is minimized. This issue can be investigated under the title of Supply Chain Management (SCM). An efficiently managed supply chain can lead to reduced response times for customers. To achieve this, continuous observation of supply chain efficiency, i.e., constant performance evaluation of the current SCM, is required. Widely used conventional performance measurement methods lack the ability to evaluate SCM, since the supply chain is a dynamic system that requires a more thorough and flexible performance measurement technique. The Balanced Scorecard (BS) is an efficient tool for measuring the performance of dynamic systems and has a proven capability of providing decision makers with appropriate feedback data. In addition to SCM, a relatively new management field, namely reverse supply chain management (RSCM), also necessitates an appropriate evaluation approach. RSCM differs from SCM in many respects, e.g., the criteria used for evaluation and the high level of uncertainty involved, which precludes the use of evaluation techniques identical to those used for SCM. This study proposes a generic Balanced Scorecard to measure the performance of supply chain management while defining the appropriate performance measures for SCM. A scorecard prototype, ESCAPE, is presented to demonstrate the evaluation process.
Roup, Christina M; Leigh, Elizabeth D
2015-06-01
The purpose of the present study was to examine individual differences in binaural processing across the adult life span. Sixty listeners (aged 23-80 years) with symmetrical hearing were tested. Binaural behavioral processing was measured by the Words-in-Noise Test, the 500-Hz masking level difference, and the Dichotic Digit Test. Electrophysiologic responses were assessed by the auditory middle latency response binaural interaction component. No correlations among binaural measures were found. Age accounted for the greatest amount of variability in speech-in-noise performance. Age was significantly correlated with the Words-in-Noise Test binaural advantage and dichotic ear advantage. Partial correlations, however, revealed that this was an effect of hearing status rather than age per se. Inspection of individual results revealed that 20% of listeners demonstrated reduced binaural performance for at least 2 of the binaural measures. The lack of significant correlations among variables suggests that each is an important measurement of binaural abilities. For some listeners, binaural processing was abnormal, reflecting a binaural processing deficit not identified by monaural audiologic tests. The inclusion of a binaural test battery in the audiologic evaluation is supported given that these listeners may benefit from alternative forms of audiologic rehabilitation.
Effect of Electron Beam Freeform Fabrication (EBF3) Processing Parameters on Composition of Ti-6-4
NASA Technical Reports Server (NTRS)
Lach, Cynthia L.; Taminger, Karen; Schuszler, A. Bud, II; Sankaran, Sankara; Ehlers, Helen; Nasserrafi, Rahbar; Woods, Bryan
2007-01-01
The Electron Beam Freeform Fabrication (EBF3) process developed at NASA Langley Research Center was evaluated using a design-of-experiments approach to determine the effect of processing parameters on the composition and geometry of Ti-6-4 deposits. The effects of three processing parameters (beam power, translation speed, and wire feed rate) were investigated by varying one while keeping the remaining parameters constant. A three-factor, three-level, fully balanced mutually orthogonal array (L27) design-of-experiments approach was used to examine the effects of low, medium, and high settings of the processing parameters on the chemistry, geometry, and quality of the resulting deposits. Single-bead-high deposits were fabricated and evaluated for 27 experimental conditions. Loss of aluminum in Ti-6-4 was observed in EBF3 processing due to selective vaporization of aluminum from the sustained molten pool in the vacuum environment; therefore, the chemistries of the deposits were measured and compared with the composition of the initial wire and base plate to determine whether the loss of aluminum could be minimized through careful selection of processing parameters. The influence of the processing parameters, and of the coupling between them, on bulk composition measured by Direct Current Plasma (DCP) spectroscopy, on local microchemistries determined by Wavelength Dispersive Spectrometry (WDS), and on deposit geometry is also discussed.
New atmospheric sensor analysis study
NASA Technical Reports Server (NTRS)
Parker, K. G.
1989-01-01
The functional capabilities of the ESAD Research Computing Facility are discussed. The system is used in processing atmospheric measurements which are used in the evaluation of sensor performance, conducting design-concept simulation studies, and also in modeling the physical and dynamical nature of atmospheric processes. The results may then be evaluated to furnish inputs into the final design specifications for new space sensors intended for future Spacelab, Space Station, and free-flying missions. In addition, data gathered from these missions may subsequently be analyzed to provide better understanding of requirements for numerical modeling of atmospheric phenomena.
Early-Stage Visual Processing and Cortical Amplification Deficits in Schizophrenia
Butler, Pamela D.; Zemon, Vance; Schechter, Isaac; Saperstein, Alice M.; Hoptman, Matthew J.; Lim, Kelvin O.; Revheim, Nadine; Silipo, Gail; Javitt, Daniel C.
2005-01-01
Background Patients with schizophrenia show deficits in early-stage visual processing, potentially reflecting dysfunction of the magnocellular visual pathway. The magnocellular system operates normally in a nonlinear amplification mode mediated by glutamatergic (N-methyl-d-aspartate) receptors. Investigating magnocellular dysfunction in schizophrenia therefore permits evaluation of underlying etiologic hypotheses. Objectives To evaluate magnocellular dysfunction in schizophrenia, relative to known neurochemical and neuroanatomical substrates, and to examine relationships between electrophysiological and behavioral measures of visual pathway dysfunction and relationships with higher cognitive deficits. Design, Setting, and Participants Between-group study at an inpatient state psychiatric hospital and out-patient county psychiatric facilities. Thirty-three patients met DSM-IV criteria for schizophrenia or schizoaffective disorder, and 21 nonpsychiatric volunteers of similar ages composed the control group. Main Outcome Measures (1) Magnocellular and parvocellular evoked potentials, analyzed using nonlinear (Michaelis-Menten) and linear contrast gain approaches; (2) behavioral contrast sensitivity measures; (3) white matter integrity; (4) visual and nonvisual neuropsychological measures, and (5) clinical symptom and community functioning measures. Results Patients generated evoked potentials that were significantly reduced in response to magnocellular-biased, but not parvocellular-biased, stimuli (P=.001). Michaelis-Menten analyses demonstrated reduced contrast gain of the magnocellular system (P=.001). Patients showed decreased contrast sensitivity to magnocellular-biased stimuli (P<.001). Evoked potential deficits were significantly related to decreased white matter integrity in the optic radiations (P<.03). Evoked potential deficits predicted impaired contrast sensitivity (P=.002), which was in turn related to deficits in complex visual processing (P≤.04). Both evoked potential (P≤.04) and contrast sensitivity (P=.01) measures significantly predicted community functioning. Conclusions These findings confirm the existence of early-stage visual processing dysfunction in schizophrenia and provide the first evidence that such deficits are due to decreased nonlinear signal amplification, consistent with glutamatergic theories. Neuroimaging studies support the hypothesis of dysfunction within low-level visual pathways involving thalamocortical radiations. Deficits in early-stage visual processing significantly predict higher cognitive deficits. PMID:15867102
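For readers unfamiliar with the nonlinear contrast-gain analysis mentioned above, the sketch below fits a Michaelis-Menten-type saturation function, R(C) = Rmax*C/(C + C50), to contrast-response data; the data points are invented for illustration and are not the study's measurements.

```python
# Hypothetical sketch of a Michaelis-Menten contrast-gain fit: response amplitude
# saturates with stimulus contrast as R(C) = Rmax * C / (C + C50).
# The data points below are invented, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(contrast, r_max, c50):
    return r_max * contrast / (contrast + c50)

contrast = np.array([1, 2, 4, 8, 16, 32, 64])                  # % contrast (assumed)
amplitude = np.array([0.8, 1.5, 2.6, 3.9, 5.0, 5.8, 6.2])      # evoked amplitude, a.u. (assumed)

(r_max, c50), _ = curve_fit(michaelis_menten, contrast, amplitude, p0=[6.0, 10.0])
print(f"Rmax = {r_max:.2f}, C50 = {c50:.1f}%")   # lower Rmax would indicate reduced amplification
```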
A comparison of representations for discrete multi-criteria decision problems
Gettinger, Johannes; Kiesling, Elmar; Stummer, Christian; Vetschera, Rudolf
2013-01-01
Discrete multi-criteria decision problems with numerous Pareto-efficient solution candidates place a significant cognitive burden on the decision maker. An interactive, aspiration-based search process that iteratively progresses toward the most preferred solution can alleviate this task. In this paper, we study three ways of representing such problems in a DSS, and compare them in a laboratory experiment using subjective and objective measures of the decision process as well as solution quality and problem understanding. In addition to an immediate user evaluation, we performed a re-evaluation several weeks later. Furthermore, we consider several levels of problem complexity and user characteristics. Results indicate that different problem representations have a considerable influence on search behavior, although long-term consistency appears to remain unaffected. We also found interesting discrepancies between subjective evaluations and objective measures. Conclusions from our experiments can help designers of DSS for large multi-criteria decision problems to fit problem representations to the goals of their system and the specific task at hand. PMID:24882912
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system, used to extract the unique feature values of a text document. Feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling, and evaluation. The evaluation is done by comparing the Precision, Recall, and F-Measure values of Latent Dirichlet Allocation with those of Term Frequency-Inverse Document Frequency with K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall, and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency-Inverse Document Frequency with K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features from and cluster Indonesian text better than the Term Frequency-Inverse Document Frequency with K-Means method.
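A compact sketch of the comparison described here, using generic scikit-learn components rather than the authors' implementation: LDA document-topic features versus TF-IDF with K-Means, both scored with precision, recall, and F-measure on a toy labelled corpus (the documents and labels are invented).

```python
# Hypothetical sketch of LDA-based document features versus a TF-IDF + K-Means baseline,
# scored with precision/recall/F1 against known labels. Corpus and labels are toy examples.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans
from sklearn.metrics import precision_recall_fscore_support

docs = ["harga pasar naik", "tim sepak bola menang", "pasar saham turun", "pemain bola cedera"]
labels = [0, 1, 0, 1]    # toy topic labels: 0 = economy, 1 = sports

# LDA features: document-topic distributions; the dominant topic serves as a cluster label.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
lda_pred = lda.transform(counts).argmax(axis=1)

# Baseline: TF-IDF vectors clustered with K-Means.
tfidf = TfidfVectorizer().fit_transform(docs)
km_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

for name, pred in [("LDA", lda_pred), ("TF-IDF + KMeans", km_pred)]:
    # Cluster ids are arbitrary, so align them to the labels by trying both assignments.
    best = max((precision_recall_fscore_support(labels, p, average="macro", zero_division=0)[:3]
                for p in (pred, 1 - pred)), key=lambda s: s[2])
    print(name, [round(x, 2) for x in best])
```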
The role of sorption processes in the removal of pharmaceuticals by fungal treatment of wastewater.
Lucas, D; Castellet-Rovira, F; Villagrasa, M; Badia-Fabregat, M; Barceló, D; Vicent, T; Caminal, G; Sarrà, M; Rodríguez-Mozaz, S
2018-01-01
The contribution of sorption processes to the elimination of pharmaceuticals (PhACs) during fungal treatment of wastewater was evaluated in this work. The sorption of four PhACs (carbamazepine, diclofenac, iopromide and venlafaxine) by 6 different fungi was first evaluated in batch experiments. Concentrations of PhACs in both the liquid and solid (biomass) matrices of the fungal treatment were measured. The contribution of sorption to the total removal of pollutants ranged between 3% and 13% of the initial amount. The sorption of 47 PhACs by fungi was also evaluated in a 26-day fungal treatment in a continuous bioreactor treating wastewater from a veterinary hospital. PhAC levels measured in the fungal biomass were similar to those detected in conventional wastewater treatment plant (WWTP) sludge. This may suggest the necessity of managing fungal biomass as waste in the same manner as WWTP sludge. Copyright © 2017 Elsevier B.V. All rights reserved.
Is School-Based Height and Weight Screening of Elementary Students Private and Reliable?
ERIC Educational Resources Information Center
Stoddard, Sarah A.; Kubik, Martha Y.; Skay, Carol
2008-01-01
The Institute of Medicine recommends school-based body mass index (BMI) screening as an obesity prevention strategy. While school nurses have provided height/weight screening for years, little has been published describing measurement reliability or process. This study evaluated the reliability of height/weight measures collected by school nurses…
Development of an Instrument to Measure Medical Students' Attitudes toward People with Disabilities
ERIC Educational Resources Information Center
Symons, Andrew B.; Fish, Reva; McGuigan, Denise; Fox, Jeffery; Akl, Elie A.
2012-01-01
As curricula to improve medical students' attitudes toward people with disabilities are developed, instruments are needed to guide the process and evaluate effectiveness. The authors developed an instrument to measure medical students' attitudes toward people with disabilities. A pilot instrument with 30 items in four sections was administered to…
An evaluation of underwater epoxies to permanently install temperature sensors in mountain streams
Daniel J. Isaak; Dona L. Horan
2011-01-01
Stream temperature regimes are of fundamental importance in understanding the patterns and processes in aquatic ecosystems, and inexpensive digital sensors provide accurate and repeated measurements of temperature. Most temperature measurements in mountain streams are made only during summer months because of logistical constraints associated with stream access and...
Perception or Fact: Measuring the Effectiveness of the Terrorism Early Warning (TEW) Group
2005-09-01
...alternatives" (Campbell 2005). The logic model process is a tool that has been used by evaluators for many years to identify performance measures and... pertinent information is obtained, this cell is responsible for the development (pre-event) and use (trans- and post-event) of playbooks and...
The Relationship between Emotional Intelligence and Student Teacher Performance
ERIC Educational Resources Information Center
Drew, Todd L.
2006-01-01
The purpose of this mixed methods study (N = 40) was to determine whether Student Teacher Performance (STP), as measured by a behavior-based performance evaluation process, is associated with Emotional Intelligence (EI), as measured by a personality assessment instrument. The study is an important contribution to the literature in that it appears…
The PATH project in eight European countries: an evaluation.
Veillard, Jeremy Henri Maurice; Schiøtz, Michaela Louise; Guisset, Ann-Lise; Brown, Adalsteinn Davidson; Klazinga, Niek S
2013-01-01
This paper's aim is to evaluate the perceived impact, and the enabling factors and barriers, experienced by hospital staff participating in an international hospital performance measurement project focused on internal quality improvement. Semi-structured interviews were conducted with coordinators of the international hospital performance measurement project, which included 140 hospitals from eight European countries (Belgium, Estonia, France, Germany, Hungary, Poland, Slovakia and Slovenia). The interview transcripts were analyzed inductively using the grounded theory approach. Even in the absence of public reporting, the project was perceived as having stimulated performance measurement and quality improvement initiatives in participating hospitals. Attention should be paid to leadership/ownership, context, content (project-intrinsic features) and process-supporting elements. Generalizing the findings is limited by the study's small sample size. Possible implications for the WHO European Regional Office and for participating hospitals would be to assess hospital preparedness to participate in the PATH project, depending on context, process and structural elements, and to enhance performance and practice benchmarking through the suggested approaches. This research gathered rich and unique material related to an international performance measurement project and derived actionable findings.
Caldas, Stephanie V; Broaddus, Elena T; Winch, Peter J
2016-08-01
Substantial evidence supports the value of outdoor education programs for promoting healthy adolescent development, yet measurement of program outcomes often lacks rigor. Accurately assessing the impacts of programs that seek to promote positive youth development is critical for determining whether youth are benefitting as intended, identifying best practices and areas for improvement, and informing decisions about which programs to invest in. We generated brief, customized instruments for measuring three outcomes among youth participants in Baltimore City Outward Bound programs: conflict management, emotional self-efficacy, and problem solving confidence. Measures were validated through exploratory and confirmatory factor analyses of pilot-testing data from two groups of program participants. We describe our process of identifying outcomes for measurement, developing and adapting measurement instruments, and validating these instruments. The finalized measures support evaluations of outdoor education programs serving urban adolescent youth. Such evaluations enhance accountability by determining if youth are benefiting from programs as intended, and strengthen the case for investment in programs with demonstrated success. Copyright © 2016 Elsevier Ltd. All rights reserved.
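The factor-analytic validation step described above can be sketched briefly. The following Python example is an assumed workflow rather than the authors' analysis: it runs an exploratory factor analysis with three factors on simulated Likert-type pilot responses and prints the item loadings that would guide subscale retention for the three target constructs.

```python
# A minimal sketch (assumed workflow, not the authors' analysis): exploratory
# factor analysis on simulated pilot responses; items loading strongly on a
# single factor would be retained for that subscale.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 120, 12
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # 1-5 Likert (simulated)

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0).fit(responses)

loadings = fa.components_.T   # items x factors
for i, row in enumerate(loadings, start=1):
    print(f"item {i:2d}: " + "  ".join(f"{v:+.2f}" for v in row))
```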
Mander, Johannes
2015-01-01
There is a dearth of measures specifically designed to assess empirically validated mechanisms of therapeutic change. To fill this research gap, the aim of the current study was to develop a measure that covers a large variety of empirically validated mechanisms of change with corresponding versions for the patient and therapist. To develop an instrument that is based on several important change process frameworks, we combined two established change mechanisms instruments: the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP) and the Scale of the Therapeutic Alliance-Revised (STA-R). In our study, 457 psychosomatic inpatients completed the SACiP and the STA-R and diverse outcome measures in early, middle and late stages of psychotherapy. Data analyses were conducted using factor analyses and multilevel modelling. The psychometric properties of the resulting Individual Therapy Process Questionnaire (ITPQ) were generally good to excellent, as demonstrated by (a) exploratory factor analyses on both patient and therapist ratings, (b) confirmatory factor analyses at later measurement points, (c) high internal consistencies and (d) significant outcome predictive effects. The parallel forms of the ITPQ provide opportunities to compare the patient and therapist perspectives for a broader range of facets of change mechanisms than was hitherto possible. Consequently, the measure can be applied in future research to more specifically analyse different change mechanism profiles in session-to-session development and outcome prediction. Key Practitioner Message: This article describes the development of an instrument that measures general mechanisms of change in psychotherapy from both the patient and therapist perspectives. Post-session item ratings from both the patient and therapist can be used as feedback to optimize therapeutic processes. We provide a detailed discussion of measures developed to evaluate therapeutic change mechanisms. Copyright © 2014 John Wiley & Sons, Ltd.
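The multilevel modelling referred to above treats repeated session ratings as nested within patients. The sketch below is an assumption about the general approach rather than the study's actual model: it fits a random-intercept model in which a change-mechanism score predicts a session outcome score, using simulated data.

```python
# A minimal sketch (assumed approach, not the study's model): a random-intercept
# multilevel model with session ratings nested within patients, where a
# change-mechanism score predicts a session outcome. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_sessions = 50, 6
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_sessions),
    "mechanism": rng.normal(size=n_patients * n_sessions),
})
patient_intercepts = rng.normal(scale=0.5, size=n_patients)
df["outcome"] = (0.4 * df["mechanism"]
                 + patient_intercepts[df["patient"]]
                 + rng.normal(scale=1.0, size=len(df)))

model = smf.mixedlm("outcome ~ mechanism", df, groups=df["patient"]).fit()
print(model.summary())
```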
Ostrich specific semen diluent and sperm motility characteristics during in vitro storage.
Smith, A M J; Bonato, M; Dzama, K; Malecki, I A; Cloete, S W P
2018-06-01
The dilution of semen is an important initial step in semen processing and evaluation, in vitro storage and preservation, and efficient artificial insemination. The aim of this study was to evaluate the effect of two synthetic diluents (OS1 and OS2) on ostrich sperm motility parameters during in vitro storage. The formulation of OS1 was based on macro minerals (Na, K, P, Ca, Mg) and that of OS2 on the further addition of micro minerals (Se and Zn), based on the mineral concentrations determined in ostrich seminal plasma (SP). Sperm motility was evaluated at different processing stages (neat, after dilution, during storage and after storage) by measuring several sperm motility variables using the Sperm Class Analyzer® (SCA). Processing (dilution, cooling and storage) of semen for in vitro storage decreased the values of all sperm motility variables measured. The percentages of motile (MOT) and progressively motile (PMOT) sperm decreased by 20% to 30% during 24 h of storage, independent of diluent type. The quality of sperm movement (LIN, STR and WOB), however, was sustained during the longer storage period (48 h) with the OS2 diluent modified with Se and Zn additions; with OS1, values were 6% to 8% lower for the LIN, STR and WOB variables. Male, fitted as a fixed effect, accounted for >60% of the variation in certain sperm motility variables (PMOT, MOT, VCL, VSL, VAP and ALH) evaluated at different processing stages. Semen from specific males sustained its sperm motility characteristics to a greater extent than that of other males during the 24-h storage period. Copyright © 2018 Elsevier B.V. All rights reserved.
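The movement-quality ratios cited above (LIN, STR, WOB) are conventionally derived from the velocity measures VCL, VSL and VAP. The sketch below, which is not the SCA software, computes these kinematic variables from a synthetic placeholder head-position track.

```python
# A minimal sketch (not the SCA software): conventional CASA kinematics from a
# tracked sperm-head trajectory. VCL is the curvilinear path speed, VSL the
# straight-line speed, VAP the speed along a smoothed average path, and the
# quoted ratios are LIN = VSL/VCL, STR = VSL/VAP, WOB = VAP/VCL.
import numpy as np

def kinematics(xy, fps, smooth=5):
    """xy: (n, 2) head positions in micrometres; fps: frames per second."""
    t_total = (len(xy) - 1) / fps
    vcl = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)) / t_total    # curvilinear velocity
    vsl = np.linalg.norm(xy[-1] - xy[0]) / t_total                         # straight-line velocity
    kernel = np.ones(smooth) / smooth                                      # running-mean average path
    avg = np.column_stack([np.convolve(xy[:, 0], kernel, mode="valid"),
                           np.convolve(xy[:, 1], kernel, mode="valid")])
    vap = np.sum(np.linalg.norm(np.diff(avg, axis=0), axis=1)) / t_total   # same time base (approximation)
    return {"VCL": vcl, "VSL": vsl, "VAP": vap,
            "LIN": vsl / vcl, "STR": vsl / vap, "WOB": vap / vcl}

track = np.cumsum(np.random.default_rng(0).normal(0.5, 0.3, size=(25, 2)), axis=0)  # synthetic track
print(kinematics(track, fps=25))
```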
In Situ Fringe Projection Profilometry for the Laser Powder Bed Fusion Process
NASA Astrophysics Data System (ADS)
Zhang, Bin
Additive manufacturing (AM) offers an industrial solution for producing parts with complex geometries and internal structures that conventional manufacturing techniques cannot produce. However, current metal additive processes, particularly the laser powder bed fusion (LPBF) process, suffer from poor surface finish and various material defects, which hinder their wider application. One way to address this problem is to add an in situ metrology sensor to the machine chamber. Mature manufacturing processes are tightly monitored and controlled, and instrumentation advances are needed to realize the same advantage for metal additive processes. This motivates the development of an in situ fringe projection system for the LPBF process. The development of such a system and its measurement capability are demonstrated in this dissertation. We show that the system can measure various powder bed signatures, including powder layer variations, the average height drop between fused metal and unfused powder, and the height variations on fused surfaces. The ability to measure textured surfaces is also evaluated through the instrument transfer function (ITF). We analyze the mathematical model of the proposed fringe projection system and prove the linearity of the system through simulations. A practical ITF measurement technique using a stepped surface is also demonstrated, and the measurement results are compared with theoretical predictions generated through the ITF simulations.
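A common way to recover a height map with fringe projection is phase-shifting analysis. The sketch below is a set of assumptions rather than the dissertation's implementation: it recovers the wrapped phase from four synthetic 90-degree-shifted fringe images, removes the carrier, unwraps, and converts phase to height through an assumed calibration constant.

```python
# A minimal sketch under stated assumptions (not the dissertation's code):
# four-step phase-shifting fringe analysis on a synthetic 50 um step target,
# with an assumed phase-to-height calibration constant.
import numpy as np

def phase_from_four_steps(i0, i1, i2, i3):
    """Wrapped phase from intensities at 0, 90, 180 and 270 degree shifts."""
    return np.arctan2(i3 - i1, i0 - i2)

x = np.linspace(0, 4 * np.pi, 256)                      # projected carrier phase
height = np.where(x > 2 * np.pi, 50e-6, 0.0)            # 50 um step on the target
k_height_to_phase = 2.0e4                               # rad per metre (assumed calibration)
true_phase = k_height_to_phase * height
frames = [1 + 0.5 * np.cos(x + true_phase + s)
          for s in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

wrapped = phase_from_four_steps(*frames) - x            # remove the carrier
recovered_height = np.unwrap(wrapped) / k_height_to_phase
print(f"recovered step ~ {recovered_height[-1] - recovered_height[0]:.2e} m")
```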
Study on photochemical analysis system (VLES) for EUV lithography
NASA Astrophysics Data System (ADS)
Sekiguchi, A.; Kono, Y.; Kadoi, M.; Minami, Y.; Kozawa, T.; Tagawa, S.; Gustafson, D.; Blackborow, P.
2007-03-01
A system for the photochemical analysis of EUV lithography processes has been developed. The system consists of three units: (1) an exposure unit that uses the Z-Pinch (Energetiq Tech.) EUV light source (DPP) to carry out flood exposures, (2) a measurement system, RDA (Litho Tech Japan), for the development rate of photoresists, and (3) a simulation unit that uses PROLITH (KLA-Tencor) to calculate resist profiles and process latitude from the measured development rate data. With this system, a preliminary evaluation of EUV lithography performance can be performed without any lithography tool (stepper or scanner system) capable of imaging and alignment. Profiles for a 32 nm line-and-space pattern are simulated with VLES for an EUV resist (Posi-2 resist by TOK) that has sensitivity at the 13.5 nm wavelength. The simulation successfully predicts the resist behavior, confirming that the system enables efficient evaluation of the performance of EUV lithography processes.
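The development-rate measurement underlying unit (2) can be illustrated schematically: at a fixed exposure dose, the dissolution rate is the negative slope of the remaining-resist-thickness curve versus development time. The sketch below uses placeholder data and is not the RDA tool's algorithm.

```python
# A minimal sketch (assumed, not the RDA tool's algorithm): the local
# development rate is the negative slope of remaining resist thickness versus
# development time at a fixed dose. Data are illustrative placeholders.
import numpy as np

time_s = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # development time (s)
thickness_nm = np.array([100.0, 78.0, 55.0, 34.0, 12.0])   # remaining resist (nm)

rate_nm_s = -np.gradient(thickness_nm, time_s)             # development rate (nm/s)
for t, r in zip(time_s, rate_nm_s):
    print(f"t = {t:4.1f} s  rate = {r:.2f} nm/s")
```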
In Situ XRD Studies of the Process Dynamics During Annealing in Cold-Rolled Copper
NASA Astrophysics Data System (ADS)
Dey, Santu; Gayathri, N.; Bhattacharya, M.; Mukherjee, P.
2016-12-01
The dynamics of the release of stored energy during annealing along two different crystallographic planes, i.e., {111} and {220}, in deformed copper have been investigated using in situ X-ray diffraction measurements at 458 K and 473 K (185 °C and 200 °C). The study has been carried out on 50 and 80 pct cold-rolled Cu sheets. The microstructures of the rolled samples have been characterized using optical microscopy and electron backscattered diffraction measurements. The microstructural parameters were evaluated from the X-ray diffractogram using the Scherrer equation and the modified Rietveld method. The stored energy along different planes was determined using the modified Stibitz formula from the X-ray peak broadening, and the bulk stored energy was evaluated using differential scanning calorimetry. The process dynamics of recovery and recrystallization as observed through the release of stored energy have been modeled as the second-order and first-order processes, respectively.
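The Scherrer estimate mentioned above relates crystallite size to peak broadening via D = Kλ/(β cos θ). The sketch below evaluates it for illustrative Cu {111} peak values; the paper's full analysis also uses the modified Rietveld method.

```python
# A minimal sketch of the Scherrer estimate, D = K*lambda / (beta*cos(theta)),
# with beta the peak FWHM in radians. Peak values are illustrative placeholders.
import numpy as np

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    beta = np.deg2rad(fwhm_deg)              # peak FWHM converted to radians
    theta = np.deg2rad(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative Cu {111} reflection with Cu K-alpha radiation
print(f"{scherrer_size_nm(0.15406, fwhm_deg=0.4, two_theta_deg=43.3):.1f} nm")
```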
Quantitative evaluation of photoplethysmographic artifact reduction for pulse oximetry
NASA Astrophysics Data System (ADS)
Hayes, Matthew J.; Smith, Peter R.
1999-01-01
Motion artefact corruption of pulse oximeter output, causing both measurement inaccuracies and false alarm conditions, is a primary restriction in the current clinical practice and future applications of this useful technique. Artefact reduction in photoplethysmography (PPG), and therefore by application in pulse oximetry, is demonstrated using a novel non-linear methodology recently proposed by the authors. The significance of these processed PPG signals for pulse oximetry measurement is discussed, with particular attention to the normalization inherent in the artefact reduction process. Quantitative experimental investigation of the performance of PPG artefact reduction is then utilized to evaluate this technology for application to pulse oximetry. While the successfully demonstrated reduction of severe artefacts may widen the applicability of all PPG technologies and decrease the occurrence of pulse oximeter false alarms, the observed reduction of slight artefacts suggests that many such effects may go unnoticed in clinical practice. The signal processing and output averaging used in most commercial oximeters can incorporate these artefact errors into the output, while masking the true PPG signal corruption. It is therefore suggested that PPG artefact reduction should be incorporated into conventional pulse oximetry measurement, even in the absence of end-user artefact problems.
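Pulse oximetry derives SpO2 from a normalized "ratio of ratios" of the red and infrared PPG channels, which is why the normalization inherent in artefact reduction matters. The sketch below uses synthetic signals and the common textbook calibration approximation; it is not the authors' processing chain.

```python
# A minimal sketch (textbook approximation, not the authors' method): SpO2 is
# estimated from the ratio of the pulsatile (AC) to baseline (DC) components of
# the red and infrared PPG channels, so rescaling one channel's AC component
# without consistent renormalization changes the estimate. Signals are synthetic.
import numpy as np

fs = 125
t = np.arange(0, 10, 1 / fs)
red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)   # DC level + pulsatile component
ir  = 1.20 + 0.020 * np.sin(2 * np.pi * 1.2 * t)

def ac_dc(sig):
    return np.ptp(sig), np.mean(sig)               # peak-to-peak AC, mean DC

ac_r, dc_r = ac_dc(red)
ac_i, dc_i = ac_dc(ir)
R = (ac_r / dc_r) / (ac_i / dc_i)                  # normalized ratio of ratios
spo2 = 110 - 25 * R                                # empirical calibration approximation
print(f"R = {R:.3f}, SpO2 ~ {spo2:.1f} %")
```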
Zeng, Pei-Yuan; Li, Jian-Jun; Liao, Dong-Qi; Tu, Xiang; Xu, Mei-Ying; Sun, Guo-Ping
2013-12-01
Emission characteristics of volatile organic compounds (VOCs) were investigated in an automotive coating manufacturing enterprise. Air samples were taken from eight different manufacturing areas in three workshops, and the species of VOCs and their concentrations were measured by gas chromatography-mass spectrometry (GC-MS). A safety evaluation was also conducted by comparing the VOC concentrations with the permissible concentration-short term exposure limits (PC-STEL) regulated by the Ministry of Health. The results showed that fifteen VOCs were detected in the indoor air of the automotive coatings workshop, including benzene, toluene, ethylbenzene, xylene, ethyl acetate, butyl acetate, methyl isobutyl ketone, propylene glycol monomethyl ether acetate, trimethylbenzene and ethylene glycol monobutyl ether. Their concentrations ranged widely, from 0.51 to 593.14 mg·m(-3). The concentrations of total VOCs (TVOCs) differed significantly among manufacturing processes, and even within the same process the concentrations of each component measured at different times varied greatly. The predominant VOCs in the indoor air of the workshop were identified as ethylbenzene and butyl acetate. The concentrations of most VOCs exceeded the occupational exposure limits, so corresponding control measures should be taken to protect the health of the workers.
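The safety-evaluation step described above amounts to comparing each measured short-term concentration against its PC-STEL. The sketch below illustrates the comparison with placeholder limit values, not the regulated occupational limits.

```python
# A minimal sketch of the comparison step: each measured short-term
# concentration is checked against its PC-STEL. Limit values below are
# illustrative placeholders, not the regulated occupational limits.
measured_mg_m3 = {"toluene": 120.0, "ethylbenzene": 310.0, "butyl acetate": 450.0}
pc_stel_mg_m3 = {"toluene": 100.0, "ethylbenzene": 150.0, "butyl acetate": 300.0}

for compound, conc in measured_mg_m3.items():
    limit = pc_stel_mg_m3[compound]
    status = "exceeds PC-STEL" if conc > limit else "within PC-STEL"
    print(f"{compound}: {conc:.2f} mg/m3 vs {limit:.2f} mg/m3 -> {status}")
```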