Sample records for performance evaluation methodologies

  1. METHODOLOGICAL QUALITY OF ECONOMIC EVALUATIONS ALONGSIDE TRIALS OF KNEE PHYSIOTHERAPY.

    PubMed

    García-Pérez, Lidia; Linertová, Renata; Arvelo-Martín, Alejandro; Guerra-Marrero, Carolina; Martínez-Alberto, Carlos Enrique; Cuéllar-Pompa, Leticia; Escobar, Antonio; Serrano-Aguilar, Pedro

    2017-01-01

    The methodological quality of an economic evaluation performed alongside a clinical trial can be underestimated if the paper does not report key methodological features. This study discusses methodological assessment issues using the example of a systematic review on the cost-effectiveness of physiotherapy for knee osteoarthritis. Six economic evaluation studies included in the systematic review, and the related clinical trials, were assessed using the 10-question checklist by Drummond and the Physiotherapy Evidence Database (PEDro) scale. All economic evaluations were performed alongside a clinical trial, but the studied interventions were too heterogeneous to be synthesized. The methodological quality of the economic evaluations reported in the papers was not free of drawbacks, and in some cases it improved when information from the related clinical trial was taken into account. Economic evaluation papers dedicate little space to the methodological features of the related clinical trials; therefore, their methodological quality can be underestimated if evaluated separately from the trials. Future economic evaluations should follow the methodological recommendations more strictly, and authors should pay special attention to the quality of reporting.

  2. Applying operational research and data mining to performance based medical personnel motivation system.

    PubMed

    Niaksu, Olegas; Zaptorius, Jonas

    2014-01-01

    This paper presents a methodology for creating a performance-related remuneration system in the healthcare sector that meets requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking and a posteriori evaluation is proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of performance criteria, and data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed an eight-step method for selecting healthcare-specific criteria, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment still requires practical validation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators; the final step would be validation of the methodology in a healthcare facility.

  3. Development and testing of methodology for evaluating the performance of multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1990-01-01

    The development of a controller performance evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems is described. The equations used to obtain the open-loop plant, controller transfer matrices, and return-difference matrices are given. Results of applying the CPE methodology to evaluate MIMO digital flutter suppression systems being tested on an active flexible wing wind-tunnel model are presented to demonstrate the CPE capability.
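The return-difference matrix mentioned in this abstract can be sketched numerically. Assuming frequency-response matrices for the plant G and the controller K at a single frequency (the 2x2 values below are made up, not from the wind-tunnel tests), a multivariable stability margin is often read from the smallest singular value of I + GK:

```python
import numpy as np

def return_difference(G, K):
    """Return-difference matrix I + G*K for plant and controller
    frequency-response matrices evaluated at one frequency."""
    G = np.asarray(G, dtype=complex)
    K = np.asarray(K, dtype=complex)
    return np.eye(G.shape[0]) + G @ K

def min_singular_value(M):
    """Smallest singular value; larger values indicate a larger
    multivariable stability margin at that frequency."""
    return np.linalg.svd(M, compute_uv=False)[-1]

# Illustrative 2x2 frequency-response matrices (made-up numbers)
G = [[0.5 - 0.2j, 0.1], [0.0, 0.4 + 0.1j]]
K = [[1.0, 0.0], [0.0, 1.0]]
RD = return_difference(G, K)
margin = min_singular_value(RD)
```

In a CPE run this computation would be repeated over a grid of frequencies, with the minimum over frequency giving a conservative margin estimate.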

  4. OrbView-3 Technical Performance Evaluation 2005: Modulation Transfer Function

    NASA Technical Reports Server (NTRS)

    Cole, Aaron

    2007-01-01

    The technical performance evaluation of OrbView-3 using the modulation transfer function (MTF) is presented. The contents include: 1) MTF results and methodology; 2) radiometric calibration methodology; and 3) relative radiometric assessment results.
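As a hedged illustration of MTF methodology in general (the Gaussian line spread function below is hypothetical, not OrbView-3 data), the MTF can be computed as the normalized magnitude of the Fourier transform of the sensor's line spread function:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Modulation transfer function as the normalized magnitude of
    the Fourier transform of the line spread function (LSF)."""
    lsf = np.asarray(lsf, dtype=float)
    otf = np.fft.rfft(lsf / lsf.sum())   # optical transfer function
    return np.abs(otf)                   # MTF at zero frequency == 1

# Hypothetical Gaussian LSF sampled on a pixel grid
x = np.arange(-16, 17)
lsf = np.exp(-0.5 * (x / 2.0) ** 2)
mtf = mtf_from_lsf(lsf)
```

In practice the LSF is usually derived from an oversampled edge response (slanted-edge method) before this transform step.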

  5. Performance in physiology evaluation: possible improvement by active learning strategies.

    PubMed

    Montrezor, Luís H

    2016-12-01

    The evaluation process is complex and extremely important in the teaching/learning process. Evaluations are constantly employed in the classroom to assist students in the learning process and to help teachers improve the teaching process. The use of active methodologies encourages students to participate in the learning process, encourages interaction with their peers, and stimulates thinking about physiological mechanisms. This study examined the performance of medical students in physiology over four semesters, with and without active engagement methodologies. Four activities were used: a puzzle, a board game, a debate, and a video. The results show that engaging in activities with active methodologies before a physiology cognitive monitoring test significantly improved student performance compared with not performing the activities. We integrate the use of these methodologies with classic lectures, and this integration appears to improve the teaching/learning process in the discipline of physiology and the integration of physiology with cardiology and neurology. In addition, students enjoy the activities and perform better on their evaluations when they use them. Copyright © 2016 The American Physiological Society.

  6. 76 FR 6795 - Statement of Organization, Functions, and Delegations of Authority; Office of the National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-08

    ... Coordinator; (2) applies research methodologies to perform evaluation studies of health information technology grant programs; and, (3) applies advanced mathematical or quantitative modeling to the U.S. health care... remaining items in the paragraph accordingly: ``(1) Applying research methodologies to perform evaluation...

  7. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
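The performability idea described here, combining a Markov reliability model with a performance level attached to each state, can be sketched as follows. The three-state model, its rates, and the reward values are invented for illustration and are not from the AIPS program:

```python
import numpy as np

# Hypothetical 3-state Markov reliability model:
# state 0 = full-up, 1 = degraded, 2 = failed (absorbing).
# Q is the generator matrix (rows sum to zero); rates are made up.
lam1, lam2, mu = 1e-3, 2e-3, 1e-2   # failure and repair rates per hour
Q = np.array([[-lam1,         lam1,  0.0],
              [   mu, -(mu + lam2), lam2],
              [  0.0,          0.0,  0.0]])

# Reward (performance level) attached to each state
reward = np.array([1.0, 0.6, 0.0])

def state_probs(Q, p0, t, steps=2000):
    """Transient state probabilities by forward-Euler integration
    of dp/dt = p Q (adequate for a sketch; a matrix exponential
    would be used in practice)."""
    p = np.asarray(p0, dtype=float)
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p

p = state_probs(Q, [1.0, 0.0, 0.0], t=100.0)
performability = float(reward @ p)   # expected performance at time t
```

The mean computed here corresponds to the scalar measures of merit mentioned in the abstract; the variance follows the same pattern with squared rewards.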

  8. A Novel Performance Evaluation Methodology for Single-Target Trackers.

    PubMed

    Kristan, Matej; Matas, Jiri; Leonardis, Ales; Vojir, Tomas; Pflugfelder, Roman; Fernandez, Gustavo; Nebehay, Georg; Porikli, Fatih; Cehovin, Luka

    2016-11-01

    This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully-annotated dataset with per-frame annotations with several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes. This makes it the most sophisticatedly constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly-challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison, a new performance visualization technique is proposed.
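A minimal sketch of the per-frame overlap (accuracy) measure commonly used in VOT-style evaluations, with toy bounding boxes; the paper's full ranking methodology with significance testing is not reproduced here:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def accuracy(pred_boxes, gt_boxes):
    """Mean per-frame overlap between tracker output and ground truth."""
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(overlaps) / len(overlaps)

# Toy two-frame sequence with hypothetical boxes
gt   = [(0, 0, 10, 10), (5, 5, 10, 10)]
pred = [(0, 0, 10, 10), (10, 10, 10, 10)]
acc = accuracy(pred, gt)
```

Robustness (failure counts after re-initialization) would be tracked separately in a full VOT-style protocol.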

  9. Addressing social aspects associated with wastewater treatment facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padilla-Rivera, Alejandro; Morgan-Sagastume, Juan Manuel; Noyola, Adalberto

    In wastewater treatment facilities (WWTF), technical and financial aspects have been considered a priority, while other issues, such as social aspects, have not been evaluated seriously, and there is no accepted methodology for assessing them. In this work, a methodology focused on social concerns related to WWTF is presented. The methodology proposes the use of 25 indicators as a framework for measuring social performance to evaluate the progress in moving towards sustainability. The methodology was applied to test its applicability and effectiveness in two WWTF in Mexico (urban and rural). This evaluation helped define the key elements, stakeholders and barriers in the facilities. In this context, the urban facility showed a better overall performance, a result that may be explained mainly by the better socioeconomic context of the urban municipality. Finally, the evaluation of social aspects using the semi-qualitative approach proposed in this work allows for a comparison between different facilities and for the identification of strengths and weaknesses, and it provides an alternative tool for achieving and improving wastewater management. - Highlights: • The methodology proposes 25 indicators as a framework for measuring social performance in wastewater treatment facilities. • The evaluation helped to define the key elements, stakeholders and barriers in the wastewater treatment facilities. • The evaluation of social aspects allows the identification of strengths and weaknesses for improving wastewater management. • It provides a social profile of the facility that highlights the best and worst performances.
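The semi-qualitative aggregation behind such a social profile could look roughly like the following sketch; the indicator names, scores, and 0-4 scale are hypothetical, as the paper's actual 25 indicators are not reproduced here:

```python
def social_profile(scores, scale_max=4):
    """Normalize semi-qualitative indicator scores to [0, 1] and
    report the overall mean plus the best and worst indicators."""
    norm = {k: v / scale_max for k, v in scores.items()}
    overall = sum(norm.values()) / len(norm)
    best = max(norm, key=norm.get)
    worst = min(norm, key=norm.get)
    return overall, best, worst

# Hypothetical scores for an urban facility on a 0-4 scale
urban = {"odor_complaints": 3, "local_employment": 4, "public_acceptance": 2}
overall, best, worst = social_profile(urban)
```

Comparing `overall` values across facilities mirrors the urban-vs-rural comparison described in the abstract, while `best`/`worst` correspond to the highlighted strengths and weaknesses.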

  10. Virginia Transit Performance Evaluation Package (VATPEP).

    DOT National Transportation Integrated Search

    1987-01-01

    The Virginia Transit Performance Evaluation Package (VATPEP), a computer software package, is documented. This is the computerized version of the methodology used by the Virginia Department of Transportation to evaluate the performance of public tran...

  11. Research performance evaluation: the experience of an independent medical research institute.

    PubMed

    Schapper, Catherine C; Dwyer, Terence; Tregear, Geoffrey W; Aitken, MaryAnne; Clay, Moira A

    2012-05-01

    Evaluation of the social and economic outcomes of health research funding is an area of intense interest and debate. Typically, approaches have sought to assess the impact of research funding by medical charities or regional government bodies. Independent research institutes have a similar need for accountability in investment decisions but have different objectives and funding, thus the existing approaches are not appropriate. An evaluation methodology using eight indicators was developed to assess research performance across three broad categories: knowledge creation; inputs to research; and commercial, clinical and public health outcomes. The evaluation approach was designed to provide a balanced assessment across laboratory, clinical and public health research. With a diverse research agenda supported by a large number of researchers, the Research Performance Evaluation process at the Murdoch Childrens Research Institute has, by necessity, been iterative and responsive to the needs of the Institute and its staff. Since its inception 5 years ago, data collection systems have been refined, the methodology has been adjusted to capture appropriate data, staff awareness and participation have increased, and issues regarding the methodology and scoring have been resolved. The Research Performance Evaluation methodology described here provides a fair and transparent means of disbursing internal funding. It is also a powerful tool for evaluating the Institute's progress towards achieving its strategic goals, and is therefore a key driver for research excellence.

  12. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    A controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems has been developed. The procedures identify potentially destabilizing controllers and confirm the satisfactory performance of stabilizing ones. The methodology is generic and can be used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. It is also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  13. Evaluating building performance in healthcare facilities: an organizational perspective.

    PubMed

    Steinke, Claudia; Webster, Lynn; Fontaine, Marie

    2010-01-01

    Using the environment as a strategic tool is one of the most cost-effective and enduring approaches for improving public health; however, it is one that requires multiple perspectives. The purpose of this article is to highlight an innovative methodology that has been developed for conducting comprehensive performance evaluations in public sector health facilities in Canada. The building performance evaluation methodology described in this paper is a government initiative. The project team developed a comprehensive building evaluation process for all new capital health projects that would respond to the aforementioned need for stakeholders to be more accountable and to better integrate the larger organizational strategy of facilities. The Balanced Scorecard, which is a multiparadigmatic, performance-based business framework, serves as the underlying theoretical framework for this initiative. It was applied in the development of the conceptual model entitled the Building Performance Evaluation Scorecard, which provides the following benefits: (1) It illustrates a process to link facilities more effectively to the overall mission and goals of an organization; (2) It is both a measurement and a management system that has the ability to link regional facilities to measures of success and larger business goals; (3) It provides a standardized methodology that ensures consistency in assessing building performance; and (4) It is more comprehensive than traditional building evaluations. The methodology presented in this paper is both a measurement and management system that integrates the principles of evidence-based design with the practices of pre- and post-occupancy evaluation. It promotes accountability and continues throughout the life cycle of a project. 
The advantage of applying this framework is that it engages health organizations in clarifying a vision and strategy for their facilities and helps translate those strategies into action and measurable performance outcomes.

  14. Bridging the gap between neurocognitive processing theory and performance validity assessment among the cognitively impaired: a review and methodological approach.

    PubMed

    Leighton, Angela; Weinborn, Michael; Maybery, Murray

    2014-10-01

    Bigler (2012) and Larrabee (2012) recently addressed the state of the science surrounding performance validity tests (PVTs) in a dialogue highlighting evidence for the valid and increased use of PVTs, but also for unresolved problems. Specifically, Bigler criticized the lack of guidance from neurocognitive processing theory in the PVT literature. For example, individual PVTs have applied the simultaneous forced-choice methodology using a variety of test characteristics (e.g., word vs. picture stimuli) with known neurocognitive processing implications (e.g., the "picture superiority effect"). However, the influence of such variations on classification accuracy has been inadequately evaluated, particularly among cognitively impaired individuals. The current review places the PVT literature in the context of neurocognitive processing theory, and identifies potential methodological factors to account for the significant variability we identified in classification accuracy across current PVTs. We subsequently evaluated the utility of a well-known cognitive manipulation to provide a Clinical Analogue Methodology (CAM), that is, to alter the PVT performance of healthy individuals to be similar to that of a cognitively impaired group. Initial support was found, suggesting the CAM may be useful alongside other approaches (analogue malingering methodology) for the systematic evaluation of PVTs, particularly the influence of specific neurocognitive processing components on performance.

  15. Progress Report for the Robotic Intelligence Evaluation. Program Year 1: Developing Test Methodology for Anti-Rollover Systems

    DTIC Science & Technology

    2006-06-01

    ... Army Research Laboratory (ARL) to develop methodologies to evaluate robotic behavior algorithms that control the actions of individual robots or groups of robots acting as a team to perform a ...

  16. Development of Testing Methodologies to Evaluate Postflight Locomotor Performance

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Richards, J. T.; Miller, C. A.; Brady, R.; Warren, L. E.; Bloomberg, J. J.

    2006-01-01

    Crewmembers experience locomotor and postural instabilities during ambulation on Earth following their return from space flight. Gait training programs designed to facilitate recovery of locomotor function following a transition to a gravitational environment need to be accompanied by relevant assessment methodologies to evaluate their efficacy. The goal of this paper is to demonstrate the operational validity of two tests of locomotor function that were used to evaluate performance after long duration space flight missions on the International Space Station (ISS).

  17. Traumatic brain injury: methodological approaches to estimate health and economic outcomes.

    PubMed

    Lu, Juan; Roe, Cecilie; Aas, Eline; Lapane, Kate L; Niemeier, Janet; Arango-Lasprilla, Juan Carlos; Andelic, Nada

    2013-12-01

    The effort to standardize the methodology and adherence to recommended principles for all economic evaluations has been emphasized in the medical literature. The objective of this review is to examine whether economic evaluations in traumatic brain injury (TBI) research have been compliant with existing guidelines. A Medline search was performed covering January 1, 1995 to August 11, 2012. All original TBI-related full economic evaluations were included in the study. Two authors independently rated each study's methodology and data presentation to determine compliance with the 10 methodological principles recommended by Blackmore et al. Descriptive analysis was used to summarize the data. Inter-rater reliability was assessed with Kappa statistics. A total of 28 studies met the inclusion criteria. Eighteen of these studies described cost-effectiveness, seven cost-benefit, and three cost-utility analyses. The results showed a rapid growth in the number of published articles on the economic impact of TBI since 2000 and an improvement in their methodological quality. However, overall compliance with recommended methodological principles of TBI-related economic evaluation has been deficient. On average, about six of the 10 criteria were followed in these publications, and only two articles met all 10 criteria. These findings call for an increased awareness of the methodological standards that should be followed by investigators both in the performance of economic evaluations and in reviews of evaluation reports prior to publication. The results also suggest that all economic evaluations should be performed following the guidelines within a conceptual framework, in order to facilitate evidence-based practices in the field of TBI.
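The inter-rater reliability reported via Kappa statistics can be computed as in this sketch, with hypothetical compliant (1) / non-compliant (0) ratings for ten criteria:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings by the two authors for 10 criteria
a = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
kappa = cohens_kappa(a, b)
```

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance.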

  18. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
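The plain pixel-based recall and precision that the proposed weighting scheme refines can be sketched as follows (toy binary masks; the paper's weighted variants are more elaborate):

```python
import numpy as np

def recall_precision(result, ground_truth):
    """Pixel-based recall and precision for a binarization result,
    where 1 marks text (foreground) pixels."""
    r = np.asarray(result, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    tp = np.logical_and(r, g).sum()   # correctly detected text pixels
    recall = tp / g.sum()
    precision = tp / r.sum()
    return recall, precision

# Toy 2x3 masks (1 = text pixel)
gt  = np.array([[1, 1, 0], [0, 1, 0]])
res = np.array([[1, 0, 0], [0, 1, 1]])
rec, prec = recall_precision(res, gt)
```

Low recall here corresponds to the "broken/missed text" rates in the abstract, and low precision to false alarms and background noise.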

  19. Facility Energy Performance Benchmarking in a Data-Scarce Environment

    DTIC Science & Technology

    2017-08-01

    ... environment, and analyze occupant-, system-, and component-level faults contributing to energy inefficiency. A methodology for developing DoD-specific ... Research, Development, Test, and Evaluation (RDTE) Program to develop an intelligent framework, encompassing methodology and modeling, that ... energy performers by installation, climate zone, and other criteria. A methodology for creating the DoD-specific EUIs would be an important part of a ...

  20. Evaluation of analytical performance based on partial order methodology.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable for comparison of analytical methods, or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate the analytical performance taking into account all indicators simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the reference laboratory, and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
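A minimal sketch of the partial-order comparison, assuming each laboratory's profile is (|bias from the true value|, standard deviation, |skewness|) with lower values better on every indicator; the profiles below are invented:

```python
def dominates(a, b):
    """a is at least as good as b on every indicator and strictly
    better on at least one (lower is better here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical lab profiles: (|bias|, std dev, |skewness|)
labs = {
    "Lab1": (0.1, 0.05, 0.2),
    "Lab2": (0.3, 0.10, 0.5),
    "Lab3": (0.2, 0.20, 0.1),
}

# All dominance relations; a Hasse diagram of the partial order
# can be drawn from these after removing transitive edges.
relations = [(a, b) for a in labs for b in labs
             if a != b and dominates(labs[a], labs[b])]
```

Pairs absent from `relations` in both directions (e.g. labs trading off bias against skewness) are incomparable, which is exactly the information a single linear score would hide.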

  1. Belgian guidelines for economic evaluations: second edition.

    PubMed

    Thiry, Nancy; Neyt, Mattias; Van De Sande, Stefaan; Cleemput, Irina

    2014-12-01

    The aim of this study was to present the updated methodological guidelines for economic evaluations of healthcare interventions (drugs, medical devices, and other interventions) in Belgium. The update of the guidelines was performed by three Belgian health economists following feedback from users of the former guidelines and personal experience. The updated guidelines were discussed with a multidisciplinary team consisting of other health economists, assessors of reimbursement request files, representatives of Belgian databases and representatives of the drugs and medical devices industry. The final document was validated by three external validators who were not involved in the previous discussions. The guidelines give methodological guidance for the following components of an economic evaluation: literature review, perspective of the evaluation, definition of the target population, choice of the comparator, analytic technique and study design, calculation of costs, valuation of outcomes, definition of the time horizon, modeling, handling uncertainty and discounting. We present a reference case that can be considered as the minimal requirement for Belgian economic evaluations of health interventions. These guidelines will improve the methodological quality, transparency and uniformity of the economic evaluations performed in Belgium. The guidelines will also provide support to the researchers and assessors performing or evaluating economic evaluations.

  2. KSC management training system project

    NASA Technical Reports Server (NTRS)

    Sepulveda, Jose A.

    1993-01-01

    The stated objectives for the summer of 1993 were: to review the Individual Development Plan Surveys for 1994 in order to automate the analysis of the Needs Assessment effort; and to develop and implement evaluation methodologies to perform ongoing program-wide course-to-course assessment. This includes the following: to propose a methodology to develop and implement objective, performance-based assessment instruments for each training effort; to mechanize course evaluation forms and develop software to facilitate the data gathering, analysis, and reporting processes; and to implement the methodology, forms, and software in at least one training course or seminar selected among those normally offered in the summer at KSC. Section two of this report addresses the work done in regard to the Individual Development Plan Surveys for 1994. Section three presents the methodology proposed to develop and implement objective, performance-based assessment instruments for each training course offered at KSC.

  3. District Heating Systems Performance Analyses. Heat Energy Tariff

    NASA Astrophysics Data System (ADS)

    Ziemele, Jelena; Vigants, Girts; Vitolins, Valdis; Blumberga, Dagnija; Veidenbergs, Ivars

    2014-12-01

    The paper addresses an important element of the European energy sector: the evaluation of district heating (DH) system operations from the standpoint of increasing energy efficiency and increasing the use of renewable energy resources. This has been done by developing a new methodology for the evaluation of the heat tariff. The paper presents an algorithm of this methodology, which includes not only a database and calculation equation systems, but also an integrated multi-criteria analysis module using MADM/MCDM (Multi-Attribute Decision Making / Multi-Criteria Decision Making) based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). The results of the multi-criteria analysis are used to set the tariff benchmarks. The evaluation methodology has been tested for Latvian heat tariffs, and the obtained results show that only half of the heating companies reach the benchmark value of 0.5 for the closeness-to-the-ideal-solution indicator. This means that the proposed evaluation methodology would not only allow companies to determine how they perform with regard to the proposed benchmark, but also to identify their need to restructure so that they may reach the level of a low-carbon business.
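The TOPSIS closeness-to-ideal coefficient underlying the 0.5 benchmark can be sketched as follows; the decision matrix, weights, and criteria below are hypothetical, not the Latvian tariff data:

```python
import numpy as np

def topsis_closeness(X, weights, benefit):
    """TOPSIS closeness-to-ideal coefficients in [0, 1] for a
    decision matrix X (rows = alternatives, cols = criteria).
    benefit[j] is True for higher-is-better criteria."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * np.asarray(weights, dtype=float)   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - worst, axis=1)  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical DH companies scored on efficiency (benefit) and tariff (cost)
X = [[0.85, 45.0],
     [0.70, 55.0],
     [0.90, 60.0]]
scores = topsis_closeness(X, weights=[0.5, 0.5], benefit=[True, False])
```

A company with `scores` at or above 0.5 would meet the benchmark described in the abstract.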

  4. Development and testing of controller performance evaluation methodology for multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1991-01-01

    Described here is the development and implementation of on-line, near real time controller performance evaluation (CPE) methods capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.

  5. Towards more sustainable management of European food waste: Methodological approach and numerical application.

    PubMed

    Manfredi, Simone; Cristobal, Jorge

    2016-09-01

    Responding to the latest policy needs, the work presented in this article aims to develop a life-cycle-based framework methodology to quantitatively evaluate the environmental and economic sustainability of European food waste management options. The methodology is structured into six steps aimed at defining the boundaries and scope of the evaluation, evaluating environmental and economic impacts, and identifying the best-performing options. The methodology is able to accommodate additional assessment criteria, for example the social dimension of sustainability, thus moving towards a comprehensive sustainability assessment framework. A numerical case study is also developed to provide an example of application of the proposed methodology to an average European context. Different options for food waste treatment are compared, including landfilling, composting, anaerobic digestion and incineration. The environmental dimension is evaluated with the software EASETECH, while the economic assessment is conducted based on different indicators expressing the costs associated with food waste management. Results show that the proposed methodology allows for a straightforward identification of the most sustainable options for food waste, and thus can provide factual support to decision- and policy-making. However, it was also observed that results markedly depend on a number of user-defined assumptions, for example the choice of the indicators used to express environmental and economic performance. © The Author(s) 2016.

  6. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  7. The Researching on Evaluation of Automatic Voltage Control Based on Improved Zoning Methodology

    NASA Astrophysics Data System (ADS)

    Xiao-jun, ZHU; Ang, FU; Guang-de, DONG; Rui-miao, WANG; De-fen, ZHU

    2018-03-01

    Given the increasing size and structural complexity of modern power systems, hierarchically structured automatic voltage control (AVC) has become a research focus. In this paper, a reduced control model is built and an adaptive reduced control model is investigated to improve the voltage control effect. The theories of HCSD, HCVS, SKC and FCM are introduced, and the effect of different zoning methodologies on coordinated voltage regulation is also examined. A generic framework for evaluating the performance of coordinated voltage regulation is built. Finally, the IEEE-96 system is used to divide the network, and the 2383-bus Polish system is used to verify that the selection of a zoning methodology affects not only coordinated voltage regulation operation but also its robustness to erroneous data. The New England 39-bus network is used to verify the performance of the adaptive reduced control models.

  8. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
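    The image-reprojection testing methodology described above can be sketched numerically: reconstructed 3D points are projected back into a reference view and compared against the originally detected image points. The camera intrinsics, point set, and perturbation below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical sketch of reprojection-based evaluation: project the
# reconstructed 3D points through a known pinhole camera and score the
# reconstruction by its mean 2D reprojection error (in pixels).

def project(points_3d, K):
    """Project Nx3 points through a pinhole camera with intrinsics K."""
    p = (K @ points_3d.T).T          # homogeneous image coordinates
    return p[:, :2] / p[:, 2:3]      # perspective division

def mean_reprojection_error(points_3d, observed_2d, K):
    """Average Euclidean distance between projected and observed points."""
    reprojected = project(points_3d, K)
    return float(np.mean(np.linalg.norm(reprojected - observed_2d, axis=1)))

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed intrinsics
pts = np.array([[0.1, -0.2, 2.0], [0.3, 0.1, 3.0]])
obs = project(pts, K)                          # pretend these were detected
err = mean_reprojection_error(pts + 0.01, obs, K)  # perturbed reconstruction
```

    The appeal of this metric, as the abstract notes, is that it reduces the dimensionality of the evaluation domain: reconstructions are compared in 2D image space rather than in 3D.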

  9. Methodological reviews of economic evaluations in health care: what do they target?

    PubMed

    Hutter, Maria-Florencia; Rodríguez-Ibeas, Roberto; Antonanzas, Fernando

    2014-11-01

    An increasing number of published studies of economic evaluations of health technologies have been reviewed and summarized with different purposes, among them to facilitate decision-making processes. These reviews have covered different aspects of economic evaluations, using a variety of methodological approaches. The aim of this study is to analyze the methodological characteristics of the reviews of economic evaluations in health care, published during the period 1990-2010, to identify their main features and the potential missing elements. This may help to develop a common procedure for elaborating these kinds of reviews. We performed systematic searches in electronic databases (Scopus, Medline and PubMed) of methodological reviews published in English, period 1990-2010. We selected the articles whose main purpose was to review and assess the methodology applied in the economic evaluation studies. We classified the data according to the study objectives, period of the review, number of reviewed studies, methodological and non-methodological items assessed, medical specialty, type of disease and technology, databases used for the review and their main conclusions. We performed a descriptive statistical analysis and checked how generalizability issues were considered in the reviews. We identified 76 methodological reviews, 42 published in the period 1990-2001 and 34 during 2002-2010. The items assessed most frequently (by 70% of the reviews) were perspective, type of economic study, uncertainty and discounting. The reviews also described the type of intervention and disease, funding sources, country in which the evaluation took place, type of journal and author's characteristics. Regarding the intertemporal comparison, higher frequencies were found in the second period for two key methodological items: the source of effectiveness data and the models used in the studies. 
However, generalizability issues, which are apparently attracting growing interest in the economic evaluation literature, did not receive as much attention in the reviews of the second period. The remaining items showed similar frequencies in both periods. Increasingly, reviews of economic evaluation studies aim to analyze the application of methodological principles and offer summaries of papers classified by either diseases or health technologies. These reviews are useful for identifying literature trends, study aims and possible deficiencies in the implementation of methods for specific health interventions. As no significant methodological improvement was clearly detected between the two periods analyzed, more attention should be paid to the methodological aspects of the reviews.

  10. Combining Theory-Driven Evaluation and Causal Loop Diagramming for Opening the 'Black Box' of an Intervention in the Health Sector: A Case of Performance-Based Financing in Western Uganda.

    PubMed

    Renmans, Dimitri; Holvoet, Nathalie; Criel, Bart

    2017-09-03

    Increased attention on "complexity" in health systems evaluation has resulted in many different methodological responses. Theory-driven evaluations and systems thinking are two such responses that aim for better understanding of the mechanisms underlying given outcomes. Here, we studied the implementation of a performance-based financing intervention by the Belgian Technical Cooperation in Western Uganda to illustrate a methodological strategy of combining these two approaches. We utilized a systems dynamics tool called causal loop diagramming (CLD) to generate hypotheses feeding into a theory-driven evaluation. Semi-structured interviews were conducted with 30 health workers from two districts (Kasese and Kyenjojo) and with 16 key informants. After CLD, we identified three relevant hypotheses: "success to the successful", "growth and underinvestment", and "supervision conundrum". The first hypothesis leads to increasing improvements in performance, as better performance leads to more incentives, which in turn leads to better performance. The latter two hypotheses point to potential bottlenecks. Thus, the proposed methodological strategy was a useful tool for identifying hypotheses that can inform a theory-driven evaluation. The hypotheses are represented in a comprehensible way while highlighting the underlying assumptions, and are more easily falsifiable than hypotheses identified without using CLD.

  11. Performance evaluation in full-mission simulation - Methodological advances and research challenges. [in air transport operations

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Kanki, Barbara G.; Helmreich, Robert L.

    1989-01-01

    The crew-factors research program at NASA Ames has developed a methodology for studying the impact of a variety of variables on the effectiveness of crews flying realistic but high workload simulated trips. The validity of investigations using the methodology is enhanced by careful design of full-mission scenarios, performance assessment using converging sources of data, and recruitment of representative subjects. Recently, portions of this methodology have been adapted for use in assessing the effectiveness of crew coordination among participants in line-oriented flight training.

  12. Biomimetic Dissolution: A Tool to Predict Amorphous Solid Dispersion Performance.

    PubMed

    Puppolo, Michael M; Hughey, Justin R; Dillon, Traciann; Storey, David; Jansen-Varnum, Susan

    2017-11-01

    The presented study describes the development of a membrane permeation non-sink dissolution method that can provide analysis of complete drug speciation and emulate the in vivo performance of poorly water-soluble Biopharmaceutical Classification System class II compounds. The designed membrane permeation methodology permits evaluation of free/dissolved/unbound drug from amorphous solid dispersion formulations with the use of a two-cell apparatus, biorelevant dissolution media, and a biomimetic polymer membrane. It offers insight into oral drug dissolution, permeation, and absorption. Amorphous solid dispersions of felodipine were prepared by hot melt extrusion and spray drying techniques and evaluated for in vitro performance. Prior to ranking performance of extruded and spray-dried felodipine solid dispersions, optimization of the dissolution methodology was performed for parameters such as agitation rate, membrane type, and membrane pore size. The particle size and zeta potential were analyzed during dissolution experiments to understand drug/polymer speciation and supersaturation sustainment of felodipine solid dispersions. Bland-Altman analysis was performed to measure the agreement or equivalence between dissolution profiles acquired using polymer membranes and porcine intestines and to establish the biomimetic nature of the treated polymer membranes. The utility of the membrane permeation dissolution methodology is seen during the evaluation of felodipine solid dispersions produced by spray drying and hot melt extrusion. The membrane permeation dissolution methodology can suggest formulation performance and be employed as a screening tool for selection of candidates to move forward to pharmacokinetic studies. Furthermore, the presented model is a cost-effective technique.

  13. [The methodological assessment and qualitative evaluation of psychometric performance tests based on the example of modern tests that assess reading and spelling skills].

    PubMed

    Galuschka, Katharina; Rothe, Josefine; Schulte-Körne, Gerd

    2015-09-01

    This article looks at a means of objectively evaluating the quality of psychometric tests. This approach enables users to evaluate psychometric tests based on their methodological characteristics in order to decide which instrument should be used; reading and spelling assessment tools serve as examples. The paper also provides a review of German psychometric tests for the assessment of reading and spelling skills. This method facilitates the identification of psychometric tests of high methodological quality which can be used for the assessment of reading and spelling skills. Reading performance should ideally be assessed with the following instruments: ELFE 1-6, LGVT 6-12, LESEN 6-7, LESEN 8-9, or WLLP-R. The tests to be used for the evaluation of spelling skills are DERET 1-2+, DERET 3-4+, WRT 1+, WRT 2+, WRT 3+, WRT 4+ or HSP 1-10.

  14. Is There a European View on Health Economic Evaluations? Results from a Synopsis of Methodological Guidelines Used in the EUnetHTA Partner Countries.

    PubMed

    Heintz, Emelie; Gerber-Grote, Andreas; Ghabri, Salah; Hamers, Francoise F; Rupel, Valentina Prevolnik; Slabe-Erker, Renata; Davidson, Thomas

    2016-01-01

    The objectives of this study were to review current methodological guidelines for economic evaluations of all types of technologies in the 33 countries with organizations involved in the European Network for Health Technology Assessment (EUnetHTA), and to provide a general framework for economic evaluation at a European level. Methodological guidelines for health economic evaluations used by EUnetHTA partners were collected through a survey. Information from each guideline was extracted using a pre-tested extraction template. On the basis of the extracted information, a summary describing the methods used by the EUnetHTA countries was written for each methodological item. General recommendations were formulated for methodological issues where the guidelines of the EUnetHTA partners were in agreement or where the usefulness of economic evaluations may be increased by presenting the results in a specific way. At least one contact person from all 33 EUnetHTA countries (100 %) responded to the survey. In total, the review included 51 guidelines, representing 25 countries (eight countries had no methodological guideline for health economic evaluations). On the basis of the results of the extracted information from all 51 guidelines, EUnetHTA issued ten main recommendations for health economic evaluations. The presented review of methodological guidelines for health economic evaluations and the consequent recommendations will hopefully improve the comparability, transferability and overall usefulness of economic evaluations performed within EUnetHTA. Nevertheless, there are still methodological issues that need to be investigated further.

  15. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Anderson, M. R.

    1985-01-01

    Optimal-control-theoretic modeling and frequency-domain analysis constitute the methodology proposed for analytically evaluating the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability are then evaluated using frequency-domain techniques. When these techniques were applied to flight-test data for thirty-two highly augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.

  16. Peer Review of a Formal Verification/Design Proof Methodology

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The role of formal verification techniques in system validation was examined, and the value and state of the art of performance proving for fault-tolerant computers were assessed. The investigation, development, and evaluation of performance-proving tools were reviewed, and the technical issues related to proof methodologies were examined and summarized.

  17. Performance-cost evaluation methodology for ITS equipment deployment

    DOT National Transportation Integrated Search

    2000-09-01

    Although extensive Intelligent Transportation Systems (ITS) technology is being deployed in the field, little analysis is being performed to evaluate the benefits of implementation schemes. Benefit analysis is particularly needed for one popular ITS...

  18. RPP-PRT-58489, Revision 1, One Systems Consistent Safety Analysis Methodologies Report. 24590-WTP-RPT-MGT-15-014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Mukesh; Niemi, Belinda; Paik, Ingle

    2015-09-02

    In 2012, One System Nuclear Safety performed a comparison of the safety bases for the Tank Farms Operations Contractor (TOC) and Hanford Tank Waste Treatment and Immobilization Plant (WTP) (RPP-RPT-53222 / 24590-WTP-RPT-MGT-12-018, “One System Report of Comparative Evaluation of Safety Bases for Hanford Waste Treatment and Immobilization Plant Project and Tank Operations Contract”), and identified 25 recommendations that required further evaluation for consensus disposition. This report documents ten NSSC approved consistent methodologies and guides and the results of the additional evaluation process using a new set of evaluation criteria developed for the evaluation of the new methodologies.

  19. On a methodology for robust segmentation of nonideal iris images.

    PubMed

    Schmid, Natalia A; Zuo, Jinyu

    2010-06-01

    Iris biometrics are among the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from the remaining background. In this paper, a segmentation methodology that aims at compensating for various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected by such factors as specular reflection, blur, lighting variation, occlusion, and off-angle acquisition. We demonstrate the robustness of our segmentation methodology by evaluating it on ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementations of Camus and Wildes's algorithm and Masek's algorithm, and demonstrate considerable improvement in segmentation performance over both.

  20. Guidelines for reporting evaluations based on observational methodology.

    PubMed

    Portell, Mariona; Anguera, M Teresa; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana

    2015-01-01

    Observational methodology is one of the most suitable research designs for evaluating fidelity of implementation, especially in complex interventions. However, the conduct and reporting of observational studies is hampered by the absence of specific guidelines, such as those that exist for other evaluation designs. This lack of specific guidance poses a threat to the quality and transparency of these studies and also constitutes a considerable publication hurdle. The aim of this study thus was to draw up a set of proposed guidelines for reporting evaluations based on observational methodology. The guidelines were developed by triangulating three sources of information: observational studies performed in different fields by experts in observational methodology, reporting guidelines for general studies and studies with similar designs to observational studies, and proposals from experts in observational methodology at scientific meetings. We produced a list of guidelines grouped into three domains: intervention and expected outcomes, methods, and results. The result is a useful, carefully crafted set of simple guidelines for conducting and reporting observational studies in the field of program evaluation.

  2. U.S. Department of Energy worker health risk evaluation methodology for assessing risks associated with environmental restoration and waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, B.P.; Legg, J.; Travis, C.C.

    1995-06-01

    This document describes a worker health risk evaluation methodology for assessing risks associated with Environmental Restoration (ER) and Waste Management (WM). The methodology is appropriate for estimating worker risks across the Department of Energy (DOE) Complex at both programmatic and site-specific levels. This document supports the worker health risk methodology used to perform the human health risk assessment portion of the DOE Programmatic Environmental Impact Statement (PEIS) although it has applications beyond the PEIS, such as installation-wide worker risk assessments, screening-level assessments, and site-specific assessments.

  3. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis while accounting for the uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit-state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which an explicit performance function for FORM is developed from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the true limit-state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of reduced accuracy, with an error of 24%.
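    The core of the response-surface approach is that expensive numerical simulations are replaced by an explicit limit-state function g(X), after which the probability of failure Pf = P[g(X) < 0] can be estimated cheaply by FORM or, as benchmarked in the abstract, by Monte Carlo simulation. A minimal sketch of the Monte Carlo step, assuming a hypothetical fitted linear response surface and invented parameter distributions (this is not the Sumela Monastery model):

```python
import numpy as np

# Illustrative sketch: once a response surface provides an explicit
# limit-state function g(X), crude Monte Carlo estimates the probability
# of failure as the fraction of random samples with g(X) < 0.

rng = np.random.default_rng(0)

def g(cohesion, friction_deg):
    # Assumed response surface fitted to numerical runs (hypothetical).
    return 0.05 * cohesion + 0.02 * friction_deg - 1.0

n = 100_000
c = rng.normal(15.0, 3.0, n)     # cohesion (kPa), assumed distribution
phi = rng.normal(30.0, 4.0, n)   # friction angle (deg), assumed distribution

pf = np.mean(g(c, phi) < 0.0)    # Monte Carlo probability of failure
print(f"Monte Carlo probability of failure: {pf:.4f}")
```

    FORM would instead search this same g for the most probable failure point in standard normal space, which is why it needs far fewer g-evaluations than MCS, consistent with the reported 72% efficiency gain.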

  4. Performance evaluation of contrast-detail in full field digital mammography systems using ideal (Hotelling) observer vs. conventional automated analysis of CDMAM images for quality control of contrast-detail characteristics.

    PubMed

    Delakis, Ioannis; Wise, Robert; Morris, Lauren; Kulama, Eugenia

    2015-11-01

    The purpose of this work was to evaluate the contrast-detail performance of full field digital mammography (FFDM) systems using ideal (Hotelling) observer Signal-to-Noise Ratio (SNR) methodology and to ascertain whether it can be considered an alternative to the conventional, automated analysis of CDMAM phantom images. Five FFDM units currently used in the national breast screening programme were evaluated, which differed with respect to age, detector, Automatic Exposure Control (AEC) and target/filter combination. Contrast-detail performance was analysed using CDMAM and the ideal observer SNR methodology. The ideal observer SNR was calculated for input signal originating from gold discs of varying thicknesses and diameters, and then used to estimate the threshold gold thickness for each diameter as per CDMAM analysis. The variability of both methods and the dependence of CDMAM analysis on phantom manufacturing discrepancies were also investigated. Results from both the CDMAM and ideal observer methodologies were informative differentiators of the FFDM systems' contrast-detail performance, displaying comparable patterns with respect to system type and age. CDMAM results suggested higher threshold gold thickness values than the ideal observer methodology, especially for small-diameter details, which can be attributed to the behaviour of the CDMAM phantom used in this study. In addition, ideal observer methodology results showed lower variability than CDMAM results. The ideal observer SNR methodology can provide a useful metric of FFDM systems' contrast-detail characteristics and could be considered a surrogate for conventional, automated analysis of CDMAM images.
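    For a known signal difference ds in Gaussian noise with covariance K, the ideal (Hotelling) observer's detectability satisfies the standard form SNR² = dsᵀ K⁻¹ ds. A generic sketch of that computation follows; the disc signal profile and noise variances are invented, and the study's specific signal and noise models are not reproduced here.

```python
import numpy as np

# Generic Hotelling-observer detectability sketch: the ideal linear
# observer prewhitens the known signal difference by the noise covariance,
# giving SNR^2 = ds^T K^{-1} ds.

def hotelling_snr(signal_diff, noise_cov):
    w = np.linalg.solve(noise_cov, signal_diff)   # prewhitened template
    return float(np.sqrt(signal_diff @ w))

ds = np.array([0.8, 0.5, 0.2])        # assumed gold-disc signal profile
K = np.diag([0.04, 0.05, 0.06])       # assumed per-pixel noise variances
snr = hotelling_snr(ds, K)
```

    Repeating this for discs of varying diameter and thickness, and interpolating to a fixed SNR criterion, is how a threshold gold thickness per diameter can be derived in the manner the abstract describes.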

  5. A methodology for the evaluation of the human-bioclimatic performance of open spaces

    NASA Astrophysics Data System (ADS)

    Charalampopoulos, Ioannis; Tsiros, Ioannis; Chronopoulou-Sereli, Aik.; Matzarakis, Andreas

    2017-05-01

    The purpose of this paper is to present a simple methodology to improve the evaluation of the human-biometeorological benefits of open spaces. It is based on two groups of new indices that use the well-known PET index as a basis. This simple methodology, along with the accompanying indices, allows a qualitative and quantitative evaluation of the climatic behavior of the selected sites. The proposed methodology was applied in human-biometeorological research in the city of Athens, Greece. The results of this study are in line with those of other related studies, indicating the considerable influence of the sky view factor (SVF), vegetation and building materials on human-biometeorological conditions. The proposed methodology may provide new insights into the decision-making process related to the best configuration of urban open spaces.

  6. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge easily to evaluate accurately energy efficient measures for K-5 schools, which would contribute to the…

  7. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. 
The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.

  8. Evaluation of stormwater harvesting sites using multi criteria decision methodology

    NASA Astrophysics Data System (ADS)

    Inamdar, P. M.; Sharma, A. K.; Cook, Stephen; Perera, B. J. C.

    2018-07-01

    The selection of suitable urban stormwater harvesting sites and the associated project planning are often complex due to spatial, temporal, economic, environmental and social factors and various related variables. This paper aims at developing a comprehensive methodology framework for evaluating stormwater harvesting (SWH) sites in urban areas using Multi Criteria Decision Analysis (MCDA). In the first phase, the framework selects potential SWH sites using spatial characteristics in a GIS environment. In the second phase, the MCDA methodology is used for evaluating and ranking SWH sites in a multi-objective and multi-stakeholder environment. The paper briefly describes the first phase of the framework and focuses chiefly on the second. The application of the methodology is also demonstrated in a case study comprising the local government area of the City of Melbourne (CoM), Australia, for the benefit of the wider community of water professionals engaged in this area. Nine performance measures (PMs) were identified to characterise the objectives and system performance related to the eight alternative SWH sites used to demonstrate the application of the developed methodology. To reflect stakeholder interests, four stakeholder participant groups were identified, namely water authorities (WA), academics (AC), consultants (CS), and councils (CL). The decision analysis methodology broadly consisted of deriving PROMETHEE II rankings of the eight alternative SWH sites in the CoM case study under two distinct group decision-making scenarios. The major innovation of this work is the development and application of a comprehensive methodology framework that assists in the selection of potential SWH sites and facilitates their ranking in a multi-objective and multi-stakeholder environment. It is expected that the proposed methodology will provide water professionals and managers with better knowledge and reduce the subjectivity in the selection and evaluation of SWH sites.
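    PROMETHEE II ranks alternatives by their net outranking flow: the average strength with which each alternative outranks the others, minus the strength with which it is outranked. A minimal sketch using the "usual" (strict-dominance) preference function follows; the 3-site, 2-criterion score matrix and weights are invented for illustration and are not the study's nine PMs and eight sites.

```python
import numpy as np

# Minimal PROMETHEE II sketch with the "usual" preference function:
# P = 1 if an alternative strictly beats another on a criterion, else 0.

def promethee_ii(scores, weights):
    n = scores.shape[0]
    pi = np.zeros((n, n))                    # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                pref = (scores[a] > scores[b]).astype(float)
                pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)      # leaving (outranking) flow
    phi_minus = pi.sum(axis=0) / (n - 1)     # entering (outranked) flow
    return phi_plus - phi_minus              # net flow: higher = better

scores = np.array([[8.0, 6.0],   # site A: two benefit-form criteria
                   [5.0, 9.0],   # site B
                   [4.0, 4.0]])  # site C
weights = np.array([0.6, 0.4])   # assumed stakeholder criterion weights
net = promethee_ii(scores, weights)
ranking = np.argsort(-net)       # indices of sites, best first
```

    Group decision-making scenarios, as in the case study, can then be modelled by re-running the ranking with each stakeholder group's weight vector and comparing the resulting orderings.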

  9. MULTI-SITE PERFORMANCE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  10. 75 FR 42760 - Statement of Organization, Functions, and Delegations of Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... accounting reports and invoices, and monitoring all spending. The Team develops, defends and executes the... results; performance measurement; research and evaluation methodologies; demonstration testing and model... ACF programs; strategic planning; performance measurement; program and policy evaluation; research and...

  11. A methodology for evaluating detection performance of ultrasonic array imaging algorithms for coarse-grained materials.

    PubMed

    Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S

    2014-12-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms: total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel superalloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, all three algorithms are shown to perform very similarly in their flaw detection capabilities.
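
    The POD/PFA statistics at the heart of such a methodology can be sketched generically (an illustration of the general idea, not the authors' code): sweep a detection threshold over image amplitudes at known defect locations and over samples from defect-free regions, recording both probabilities at each threshold.

```python
def pod_pfa_curve(defect_amps, noise_amps, thresholds):
    """Probability of detection (POD) and probability of false alarm (PFA)
    as a detection threshold is swept across image amplitudes.

    defect_amps : peak image amplitude at each known defect location
    noise_amps  : amplitudes sampled from defect-free regions of the image
    """
    pod = [sum(a >= t for a in defect_amps) / len(defect_amps)
           for t in thresholds]
    pfa = [sum(a >= t for a in noise_amps) / len(noise_amps)
           for t in thresholds]
    return pod, pfa
```

    Plotting POD against PFA across thresholds gives an ROC-style curve on which competing imaging algorithms can be compared on the same data.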

  12. Protocol for Reliability Assessment of Structural Health Monitoring Systems Incorporating Model-assisted Probability of Detection (MAPOD) Approach

    DTIC Science & Technology

    2011-09-01

    a quality evaluation with limited data, a model-based assessment must be...that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range

  13. A review and preliminary evaluation of methodological factors in performance assessments of time-varying aircraft noise effects

    NASA Technical Reports Server (NTRS)

    Coates, G. D.; Alluisi, E. A.

    1975-01-01

    The effects of aircraft noise on human performance are considered. Progress is reported in the following areas: (1) review of the literature to identify the methodological and stimulus parameters involved in the study of noise effects on human performance; (2) development of a theoretical framework to provide working hypotheses as to the effects of noise on complex human performance; and (3) data collection on the first of several experimental investigations designed to provide tests of the hypotheses.

  14. Fuzzy logic based sensor performance evaluation of vehicle mounted metal detector systems

    NASA Astrophysics Data System (ADS)

    Abeynayake, Canicious; Tran, Minh D.

    2015-05-01

    Vehicle Mounted Metal Detector (VMMD) systems are widely used for detection of threat objects in humanitarian demining and military route clearance scenarios. Due to the diverse nature of such operational conditions, operational use of VMMD without a proper understanding of its capability boundaries may lead to heavy casualties. Multi-criteria fitness evaluations are crucial for determining capability boundaries of any sensor-based demining equipment. Evaluation of sensor based military equipment is a multi-disciplinary topic combining the efforts of researchers, operators, managers and commanders having different professional backgrounds and knowledge profiles. Information acquired through field tests usually involves uncertainty, vagueness and imprecision due to variations in test and evaluation conditions during a single test or series of tests. This report presents a fuzzy logic based methodology for experimental data analysis and performance evaluation of VMMD. This data evaluation methodology has been developed to evaluate sensor performance by consolidating expert knowledge with experimental data. A case study is presented by implementing the proposed data analysis framework in a VMMD evaluation scenario. The results of this analysis confirm the accuracy, practicability and reliability of the fuzzy logic based sensor performance evaluation framework.
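
    As a generic illustration of the fuzzy-logic approach (a hypothetical sketch; the report's actual membership functions, metrics, and rule base are not reproduced here), field metrics can be mapped through membership functions and combined with a fuzzy AND:

```python
def tri_membership(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def acceptable_performance(detection_rate, false_alarm_rate):
    """Degree (0..1) to which a trial is 'acceptable': the fuzzy AND
    (minimum) of two illustrative membership grades. The breakpoints
    below are hypothetical, not taken from the report."""
    high_detection = tri_membership(detection_rate, 0.6, 1.0, 1.4)
    low_false_alarms = tri_membership(false_alarm_rate, -0.4, 0.0, 0.4)
    return min(high_detection, low_false_alarms)
```

    A full fuzzy evaluation framework would add more criteria, expert-chosen membership shapes, and a defuzzification step, but the aggregation pattern is the same.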

  15. An Integrated Low-Speed Performance and Noise Prediction Methodology for Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Olson, E. D.; Mavris, D. N.

    2000-01-01

    An integrated methodology has been assembled to compute the engine performance, takeoff and landing trajectories, and community noise levels for a subsonic commercial aircraft. Where feasible, physics-based noise analysis methods have been used to make the results more applicable to newer, revolutionary designs and to allow for a more direct evaluation of new technologies. The methodology is intended to be used with approximation methods and risk analysis techniques to allow for the analysis of a greater number of variable combinations while retaining the advantages of physics-based analysis. Details of the methodology are described and limited results are presented for a representative subsonic commercial aircraft.

  16. Subjective comparison and evaluation of speech enhancement algorithms

    PubMed Central

    Hu, Yi; Loizou, Philipos C.

    2007-01-01

    Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used, and differences in the testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology designed to evaluate the speech quality along three dimensions: signal distortion, noise distortion and overall quality. This paper reports the results of the subjective tests. PMID:18046463
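
    The P.835 protocol has listeners rate each processed sample on three five-point scales, and per-algorithm results are summarized as mean opinion scores. A minimal aggregation sketch (illustrative only; the ratings in the test below are hypothetical):

```python
from statistics import mean

def p835_mean_scores(ratings):
    """Mean opinion scores on the three ITU-T P.835 rating scales.

    ratings : iterable of (sig, bak, ovrl) tuples, one per listener
              presentation, each rated on a 1-5 scale
              (SIG = signal distortion, BAK = background intrusiveness,
               OVRL = overall quality).
    """
    sig, bak, ovrl = zip(*ratings)
    return {"SIG": mean(sig), "BAK": mean(bak), "OVRL": mean(ovrl)}
```

    Reporting all three means, rather than a single quality number, is what lets the study separate an algorithm's noise suppression from the distortion it introduces.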

  17. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy and to propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard.
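
    The core point, that accuracy changes with the sample's severity distribution even when the test itself is unchanged, follows directly from the definition of accuracy. A one-line illustration (the numbers below are illustrative, not the study's data):

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    """Accuracy implied by a fixed sensitivity/specificity pair when the
    prevalence of positive cases in the evaluation sample changes:
    accuracy = sens * prev + spec * (1 - prev)."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

# The same test (sensitivity 0.90, specificity 0.60) yields very
# different "accuracy" in two differently composed samples:
#   overall_accuracy(0.90, 0.60, 0.90)  ~ 0.87
#   overall_accuracy(0.90, 0.60, 0.10)  ~ 0.63
```

    This is why the paper argues that a single accuracy value is not a sample-independent summary of test performance.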

  18. IMPAC: An Integrated Methodology for Propulsion and Airframe Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.; Lorenzo, Carl F.; Mattern, Duane L.

    1991-01-01

    The National Aeronautics and Space Administration is actively involved in the development of enabling technologies that will lead towards aircraft with new/enhanced maneuver capabilities such as Short Take-Off Vertical Landing (STOVL) and high angle of attack performance. Because of the high degree of dynamic coupling between the airframe and propulsion systems of these types of aircraft, one key technology is the integration of the flight and propulsion control. The NASA Lewis Research Center approach to developing Integrated Flight Propulsion Control (IFPC) technologies is an in-house research program referred to as IMPAC (Integrated Methodology for Propulsion and Airframe Control). The goals of IMPAC are to develop a viable alternative to the existing integrated control design methodologies that will allow for improved system performance and simplicity of control law synthesis and implementation, and to demonstrate the applicability of the methodology to a supersonic STOVL fighter aircraft. Based on some preliminary control design studies that included evaluation of the existing methodologies, the IFPC design methodology that is emerging at the Lewis Research Center consists of considering the airframe and propulsion system as one integrated system for an initial centralized controller design and then partitioning the centralized controller into separate airframe and propulsion system subcontrollers to ease implementation and to set meaningful design requirements for detailed subsystem control design and evaluation. An overview of IMPAC is provided and the important design and evaluation steps in the methodology are discussed in detail.

  19. Empirically evaluating the impact of adjudicative tribunals in the health sector: context, challenges and opportunities.

    PubMed

    Hoffman, Steven J; Sossin, Lorne

    2012-04-01

    Adjudicative tribunals are an integral part of health system governance, yet their real-world impact remains largely unknown. Most assessments focus on internal accountability and use anecdotal methodologies; few studies, if any, empirically evaluate their external impact and use these data to test effectiveness, track performance, inform service improvements and ultimately strengthen health systems. Given that such assessments would yield important benefits and have been conducted successfully in similar settings (e.g. specialist courts), their absence is likely attributable to complexity in the health system, methodological difficulties and the legal environment within which tribunals operate. We suggest practical steps for potential evaluators to conduct empirical impact evaluations along with an evaluation matrix template featuring possible target outcomes and corresponding surrogate endpoints, performance indicators and empirical methodologies. Several system-level strategies for supporting such assessments have also been suggested for academics, health system institutions, health planners and research funders. Action is necessary to ensure that policymakers do not continue operating without evidence but can rather pursue data-driven strategies that are more likely to achieve their health system goals in a cost-effective way.

  20. Statewide transit evaluation : Michigan

    DOT National Transportation Integrated Search

    1981-07-01

    The objective of this report is to share the experience gained during the development of a performance evaluation methodology for public transportation in the State of Michigan. The report documents the process through which an evaluation metho...

  1. Credentials versus Performance: Review of the Teacher Performance Pay Research

    ERIC Educational Resources Information Center

    Podgursky, Michael; Springer, Matthew G.

    2007-01-01

    In this article we examine the economic case for merit or performance-based pay for K-12 teachers. We review several areas of germane research. The direct evaluation literature on these incentive plans is slender; highly diverse in terms of methodology, targeted populations, and programs evaluated; and primarily focused on short-run motivational…

  2. 78 FR 58307 - Statement of Organization, Functions, and Delegations of Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-23

    ... reproduction, natality, and mortality; (10) performs theoretical and experimental investigations into the... dissemination; (15) conducts methodological research on the tools for evaluation, utilization, and presentation... classification to states, local areas, other countries, and private organizations; (12) conducts methodological...

  3. The Utilization of Navy People-Related RDT&E (Research, Development, Test, and Evaluation): Fiscal Year 1983.

    DTIC Science & Technology

    1984-06-01

    These research tools were designated for demonstrations, and experimental demonstrations were successfully conducted. ...Topics: 4.02 Instructional Systems Design Methodology; Instructional Systems Development and Effectiveness Evaluation; ...10.07 Human Performance Variables/Factors; 10.08 Man-Machine Design Methodology; Computer Assisted Methods for Human

  4. The methodological quality of health economic evaluations for the management of hip fractures: A systematic review of the literature.

    PubMed

    Sabharwal, Sanjeeve; Carter, Alexander; Darzi, Lord Ara; Reilly, Peter; Gupte, Chinmay M

    2015-06-01

    Approximately 76,000 people a year sustain a hip fracture in the UK and the estimated cost to the NHS is £1.4 billion a year. Health economic evaluations (HEEs) are one of the methods employed by decision makers to deliver healthcare policy supported by clinical and economic evidence. The objective of this study was to (1) identify and characterize HEEs for the management of patients with hip fractures, and (2) examine their methodological quality. A literature search was performed in MEDLINE, EMBASE and the NHS Economic Evaluation Database. Studies that met the specified definition for a HEE and evaluated hip fracture management were included. Methodological quality was assessed using the Consensus on Health Economic Criteria (CHEC). Twenty-seven publications met the inclusion criteria of this study and were included in our descriptive and methodological analysis. Domains of methodology that performed poorly included use of an appropriate time horizon (66.7% of studies), incremental analysis of costs and outcomes (63%), future discounting (44.4%), sensitivity analysis (40.7%), declaration of conflicts of interest (37%) and discussion of ethical considerations (29.6%). Publication of HEEs for patients with hip fractures has increased in recent years. Most of these studies fail to adopt a societal perspective and key aspects of their methodology are poor. The development of future HEEs in this field must adhere to established principles of methodology, so that better quality research can be used to inform health policy on the management of patients with a hip fracture. Copyright © 2014 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  5. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  6. Human perception testing methodology for evaluating EO/IR imaging systems

    NASA Astrophysics Data System (ADS)

    Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.

    2018-04-01

    The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities and recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.

  7. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  8. Horizon Mission Methodology - A tool for the study of technology innovation and new paradigms

    NASA Technical Reports Server (NTRS)

    Anderson, John L.

    1993-01-01

    The Horizon Mission (HM) methodology was developed to provide a means of identifying and evaluating highly innovative, breakthrough technology concepts (BTCs) and for assessing their potential impact on advanced space missions. The methodology is based on identifying new capabilities needed by hypothetical 'horizon' space missions having performance requirements that cannot be met even by extrapolating known space technologies. Normal human evaluation of new ideas such as BTCs appears to be governed (and limited) by 'inner models of reality' defined as paradigms. Thus, new ideas are evaluated by old models. This paper describes the use of the HM Methodology to define possible future paradigms that would provide alternatives to evaluation by current paradigms. The approach is to represent a future paradigm by a set of new BTC-based capabilities - called a paradigm abstract. The paper describes methods of constructing and using the abstracts for evaluating BTCs for space applications and for exploring the concept of paradigms and paradigm shifts as a representation of technology innovation.

  9. Cost and Information Effectiveness Analysis (CIEA): A Methodology for Evaluating a Training Device Operational Readiness Assessment Capability (DORAC).

    DTIC Science & Technology

    1981-02-01

    Within a military setting, the uses of training devices in performance evaluation have generally mirrored civilian uses and primarily...Technical Report 528: COST AND INFORMATION EFFECTIVENESS ANALYSIS (CIEA): A METHODOLOGY FOR EVALUATING A TRAINING DEVICE OPERATIONAL READINESS

  10. Evaluation of ridesharing programs in Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulp, G.; Tsao, H.J.; Webber, R.E.

    1982-10-01

    The design, implementation, and results of a carpool and vanpool evaluation are described. Objectives of the evaluation were: to develop credible estimates of the energy savings attributable to the ridesharing program, to provide information for improving the performance of the ridesharing program, and to add to a general understanding of the ridesharing process. Previous evaluation work is critiqued and the research methodology adopted for this study is discussed. The ridesharing program in Michigan is described and the basis for selecting Michigan as the evaluation site is discussed. The evaluation methodology is presented, including research design, sampling procedure, data collection, and data validation. Evaluation results are analyzed. (LEW)

  11. Dribble Files: Methodologies to Evaluate Learning and Performance in Complex Environments

    ERIC Educational Resources Information Center

    Schrader, P. G.; Lawless, Kimberly A.

    2007-01-01

    Research in the area of technology learning environments is tremendously complex. Tasks performed in these contexts are highly cognitive and mostly invisible to the observer. The nature of performance in these contexts is explained not only by the outcome but also by the process. However, evaluating the learning process with respect to tasks…

  12. The Development of Methodologies for Determining Non-Linear Effects in Infrasound Sensors

    DTIC Science & Technology

    2010-09-01

    Darren M. Hart, Harold V. Parks, and Randy K. Rembold...the past year, four new infrasound sensor designs were evaluated for common performance characteristics, i.e., power consumption, response (amplitude...and phase), noise, full-scale, and dynamic range. In the process of evaluating a fifth infrasound sensor, which is an update of an original design

  13. Evaluation For Intelligent Transportation Systems, Evaluation Methodologies

    DOT National Transportation Integrated Search

    1996-03-01

    THE BRIEFING ALSO PRESENTS THOUGHTS ON EVALUATION IN LIGHT OF THE RECENT LAUNCH OF OPERATION TIMESAVER, THE MODEL DEPLOYMENT INITIATIVE FOR FOUR DIFFERENT CITIES, AND THE IMPLICATIONS OF THE RECENT "GOVERNMENT PERFORMANCE AND RESULTS ACT" THAT REQUIR...

  14. Development of Methodologies Evaluating Emissions from Metal-Containing Explosives and Propellants

    DTIC Science & Technology

    Experiments were performed to develop methodologies that will allow determination of pollutant emission factors for gases and particles produced by...micrometer, 16 by weight). Although not included here, the analysis methods described will be directly applicable to the study of pyrotechnics.

  15. A Computational Tool for Evaluating THz Imaging Performance in Brownout Conditions at Land Sites Throughout the World

    DTIC Science & Technology

    2009-03-01

    III. Methodology... Overview...applications relating to this research and the results they have obtained, as well as the background on LEEDR. Chapter 3 will detail the methodology...different in that the snow dissipates faster and it is better to descend slower, at rates of 200–300 ft/min. III. Methodology. This chapter

  16. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division analyzes both human body kinematics and mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g., force plate data collection) and digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to quantify the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  17. Evaluation of Performance and Perceptions of Electronic vs. Paper Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Washburn, Shannon; Herman, James; Stewart, Randolph

    2017-01-01

    In the veterinary professional curriculum, methods of examination in many courses are transitioning from the traditional paper-based exams to electronic-based exams. Therefore, a controlled trial to evaluate the impact of testing methodology on examination performance in a veterinary physiology course was designed and implemented. Formalized…

  18. Assessing Faculty Performance: A Test of Method.

    ERIC Educational Resources Information Center

    Clark, Mary Jo; Blackburn, Robert T.

    A methodology for evaluating faculty work performance was discussed, using data obtained from a typical liberal arts college faculty. Separate evaluations of teaching effectiveness and of overall contributions to the college for 45 full-time faculty (85% response rate) were collected from administrators, faculty colleagues, students, and from the…

  19. Improved methodology to assess modification and completion of landfill gas management in the aftercare period

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Jeremy W.F., E-mail: jmorris@geosyntec.com; Crest, Marion, E-mail: marion.crest@suez-env.com; Barlaz, Morton A., E-mail: barlaz@ncsu.edu

    Highlights: • Performance-based evaluation of landfill gas control system. • Analytical framework to evaluate transition from active to passive gas control. • Focus on cover oxidation as an alternative means of passive gas control. • Integrates research on long-term landfill behavior with practical guidance. - Abstract: Municipal solid waste landfills represent the dominant option for waste disposal in many parts of the world. While some countries have greatly reduced their reliance on landfills, there remain thousands of landfills that require aftercare. The development of cost-effective strategies for landfill aftercare is in society's interest to protect human health and the environment and to prevent the emergence of landfills with exhausted aftercare funding. The Evaluation of Post-Closure Care (EPCC) methodology is a performance-based approach in which landfill performance is assessed in four modules including leachate, gas, groundwater, and final cover. In the methodology, the objective is to evaluate landfill performance to determine when aftercare monitoring and maintenance can be reduced or possibly eliminated. This study presents an improved gas module for the methodology. While the original version of the module focused narrowly on regulatory requirements for control of methane migration, the improved gas module also considers best available control technology for landfill gas in terms of greenhouse gas emissions, air quality, and emissions of odoriferous compounds. The improved module emphasizes the reduction or elimination of fugitive methane by considering the methane oxidation capacity of the cover system. The module also allows for the installation of biologically active covers or other features designed to enhance methane oxidation. A methane emissions model, CALMIM, was used to assist with an assessment of the methane oxidation capacity of landfill covers.

  20. Development of an Evaluation Methodology for Triple Bottom Line Reports Using International Standards on Reporting

    NASA Astrophysics Data System (ADS)

    Skouloudis, Antonis; Evangelinos, Konstantinos; Kourmousis, Fotis

    2009-08-01

    The purpose of this article is twofold. First, evaluation scoring systems for triple bottom line (TBL) reports to date are examined and potential methodological weaknesses and problems are highlighted. In this context, a new assessment methodology is presented based explicitly on the most widely acknowledged standard on non-financial reporting worldwide, the Global Reporting Initiative (GRI) guidelines. The set of GRI topics and performance indicators was converted into scoring criteria, while the generic scoring scale was set from 0 to 4 points. Second, the proposed benchmark tool was applied to the TBL reports published by Greek companies. Results reveal major gaps in reporting practices, stressing the need for the further development of internal systems and processes in order to collect essential non-financial performance data. A critical overview of the structure and rationale of the evaluation tool is discussed in conjunction with the Greek case study, and recommendations for future research in the field of this relatively new form of reporting are suggested.

  2. Strategic Analysis Overview

    NASA Technical Reports Server (NTRS)

    Cirillo, William M.; Earle, Kevin D.; Goodliff, Kandyce E.; Reeves, J. D.; Stromgren, Chel; Andraschko, Mark R.; Merrill, R. Gabe

    2008-01-01

    NASA's Constellation Program employs a strategic analysis methodology to provide an integrated analysis capability for lunar exploration scenarios and to support strategic decision-making regarding those scenarios. The strategic analysis methodology integrates the assessment of the major contributors to strategic objective satisfaction (performance, affordability, and risk) and captures the linkages and feedbacks among all three components. Strategic analysis supports strategic decision-making by senior management through comparable analysis of alternative strategies, provision of a consistent set of high-level value metrics, and the enabling of cost-benefit analysis. The tools developed to implement the strategic analysis methodology are not element design and sizing tools. Rather, these models evaluate strategic performance using predefined elements, imported into a library from expert-driven design/sizing tools or expert analysis. Specific components of the strategic analysis tool set include scenario definition, requirements generation, mission manifesting, scenario lifecycle costing, crew time analysis, objective satisfaction benefit, risk analysis, and probabilistic evaluation. Results from all components of strategic analysis are evaluated against a set of pre-defined figures of merit (FOMs). These FOMs capture the high-level strategic characteristics of all scenarios and facilitate direct comparison of options. The strategic analysis methodology described in this paper has previously been applied to the Space Shuttle and International Space Station Programs and is now being used to support the development of the baseline Constellation Program lunar architecture. This paper presents an overview of the strategic analysis methodology and sample results from its application to the Constellation Program lunar architecture.

  3. Modeling left turn queue lengths.

    DOT National Transportation Integrated Search

    2011-01-01

    This guidebook provides methodologies and procedures for using incident data collected at Texas transportation management centers (TMCs) to perform two types of analysis - evaluation/planning analysis and predictive analysis. For the evaluation/plann...

  4. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    PubMed Central

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background: Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy, as well as to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. Methods and findings: We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol-rapid and Chol-gold) by generating samples with statistical software and (1) keeping the numerical relationship between the methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion: No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard. PMID:29387424
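The record's central point, that a single accuracy value is heavily affected by the underlying sample distribution, follows directly from how overall accuracy mixes sensitivity and specificity. A minimal sketch (the sensitivity, specificity and prevalence values are hypothetical, not taken from the study):

```python
def accuracy(sens, spec, prevalence):
    """Overall accuracy of a test whose sensitivity and specificity are
    fixed, as the proportion of diseased subjects in the sample varies."""
    return sens * prevalence + spec * (1.0 - prevalence)

# The same test, applied to two samples with different disease distributions:
sens, spec = 0.95, 0.70
acc_low = accuracy(sens, spec, 0.10)   # mostly healthy sample  -> ~0.725
acc_high = accuracy(sens, spec, 0.80)  # mostly diseased sample -> ~0.90
```

The test itself is unchanged between the two calls; only the sample composition moves, which is why a distribution-spanning display such as the proposed V-plot is more informative than any single accuracy value.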

  5. Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.

    The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter's cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts, and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.

  6. Assessing the environmental performance of English arable and livestock holdings using data from the Farm Accountancy Data Network (FADN).

    PubMed

    Westbury, D B; Park, J R; Mauchline, A L; Crane, R T; Mortimer, S R

    2011-03-01

    Agri-environment schemes (AESs) have been implemented across EU member states in an attempt to reconcile agricultural production methods with protection of the environment and maintenance of the countryside. To determine the extent to which such policy objectives are being fulfilled, participating countries are obliged to monitor and evaluate the environmental, agricultural and socio-economic impacts of their AESs. However, few evaluations measure precise environmental outcomes and critically, there are no agreed methodologies to evaluate the benefits of particular agri-environmental measures, or to track the environmental consequences of changing agricultural practices. In response to these issues, the Agri-Environmental Footprint project developed a common methodology for assessing the environmental impact of European AES. The Agri-Environmental Footprint Index (AFI) is a farm-level, adaptable methodology that aggregates measurements of agri-environmental indicators based on Multi-Criteria Analysis (MCA) techniques. The method was developed specifically to allow assessment of differences in the environmental performance of farms according to participation in agri-environment schemes. The AFI methodology is constructed so that high values represent good environmental performance. This paper explores the use of the AFI methodology in combination with Farm Business Survey data collected in England for the Farm Accountancy Data Network (FADN), to test whether its use could be extended for the routine surveillance of environmental performance of farming systems using established data sources. Overall, the aim was to measure the environmental impact of three different types of agriculture (arable, lowland livestock and upland livestock) in England and to identify differences in AFI due to participation in agri-environment schemes. 
However, because farm size, farmer age, level of education and region are also likely to influence the environmental performance of a holding, these factors were also considered. Application of the methodology revealed that only arable holdings participating in agri-environment schemes showed greater environmental performance, although responses differed between regions. Of the other explanatory variables explored, the key factors determining the environmental performance of lowland livestock holdings were farm size, farmer age and level of education. In contrast, the AFI value of upland livestock holdings differed only between regions. The paper demonstrates that the AFI methodology can be used readily with English FADN data and therefore has the potential to be applied more widely to similar data sources routinely collected across the EU-27 in a standardised manner.
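The AFI's MCA-style aggregation can be sketched as a weighted sum of range-normalised indicators, with high values representing good environmental performance as the abstract states. The indicator names, weights and ranges below are illustrative assumptions, not the AFI's actual indicator set:

```python
def afi(indicators, weights, ranges):
    """Aggregate farm-level agri-environmental indicators into one score.

    Each raw value is normalised to [0, 1] within its plausible range,
    then combined as a weighted sum (weights sum to 1). Higher values
    represent better environmental performance.
    """
    score = 0.0
    for name, w in weights.items():
        lo, hi = ranges[name]
        norm = (indicators[name] - lo) / (hi - lo)
        score += w * min(max(norm, 0.0), 1.0)   # clamp outliers to the range
    return score

# Hypothetical indicators: hedgerow length (more is better) and nitrogen
# surplus (less surplus is better, so the range runs from worst to best).
weights = {"hedgerow_length": 0.4, "n_surplus": 0.6}
ranges = {"hedgerow_length": (0.0, 50.0),    # m per ha
          "n_surplus": (-200.0, 0.0)}        # kg N per ha, negated surplus
score = afi({"hedgerow_length": 25.0, "n_surplus": -40.0}, weights, ranges)
```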

  7. Combining users' activity survey and simulators to evaluate human activity recognition systems.

    PubMed

    Azkune, Gorka; Almeida, Aitor; López-de-Ipiña, Diego; Chen, Liming

    2015-04-08

    Evaluating human activity recognition systems usually implies following expensive and time-consuming methodologies, where experiments with humans are run with the consequent ethical and legal issues. We propose a novel evaluation methodology to overcome these problems, based on user surveys and a synthetic dataset generator tool. Surveys capture how different users perform activities of daily living, while the synthetic dataset generator is used to create properly labelled activity datasets modelled with the information extracted from the surveys. Important aspects, such as sensor noise, varying time lapses and erratic user behaviour, can also be simulated with the tool. The proposed methodology is shown to have important advantages that allow researchers to carry out their work more efficiently. To evaluate the approach, a synthetic dataset generated following the proposed methodology is compared to a real dataset by computing the similarity between sensor occurrence frequencies. The similarity between the two datasets is found to be substantial.
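The final comparison step, computing similarity between sensor occurrence frequencies, can be sketched as follows. The sensor names and event lists are invented for illustration, and cosine similarity is one plausible measure; the abstract does not specify which one the authors used:

```python
import math

def occurrence_frequencies(events, sensors):
    """Relative activation frequency of each sensor in a dataset."""
    total = len(events)
    return [events.count(s) / total for s in sensors]

def cosine_similarity(u, v):
    """Cosine of the angle between two frequency vectors (1.0 = identical)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sensors = ["kettle", "fridge", "tap"]
real = occurrence_frequencies(
    ["kettle", "fridge", "tap", "tap", "kettle", "tap"], sensors)
synthetic = occurrence_frequencies(
    ["kettle", "tap", "tap", "fridge", "fridge", "kettle"], sensors)
similarity = cosine_similarity(real, synthetic)   # close to 1 for similar datasets
```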

  8. Evaluating markers for the early detection of cancer: overview of study designs and methods.

    PubMed

    Baker, Stuart G; Kramer, Barnett S; McIntosh, Martin; Patterson, Blossom H; Shyr, Yu; Skates, Steven

    2006-01-01

    The field of cancer biomarker development has been evolving rapidly. New developments in both the biologic and statistical realms are providing increasing opportunities for the evaluation of markers for both early detection and diagnosis of cancer. Our purpose is to review the major conceptual and methodological issues in cancer biomarker evaluation, with an emphasis on recent developments in statistical methods, together with practical recommendations. We organized this review by type of study: preliminary performance, retrospective performance, prospective performance and cancer screening evaluation. For each type of study, we discuss methodologic issues, provide examples and discuss strengths and limitations. Preliminary performance studies are useful for quickly winnowing down the number of candidate markers; however, their results may not apply to the ultimate target population, asymptomatic subjects. If stored specimens from cohort studies with clinical cancer endpoints are available, retrospective studies provide a quick and valid way to evaluate the performance of the markers, or changes in the markers, prior to the onset of clinical symptoms. Prospective studies have a restricted role because they require large sample sizes and, if the endpoint is cancer on biopsy, there may be bias due to overdiagnosis. Cancer screening studies require very large sample sizes and long follow-up, but are necessary for evaluating the marker as a trigger of early intervention.

  9. OPTIMIZATION METHODOLOGY FOR LAND USE PATTERNS-EVALUATION BASED ON MULTISCALE HABITAT PATTERN COMPARISON. (R827169)

    EPA Science Inventory

    In this paper, the methodological concept of landscape optimization presented by Seppelt and Voinov [Ecol. Model. 151 (2/3) (2002) 125] is analyzed. Two aspects are chosen for detailed study. First, we generalize the performance criterion to assess a vector of ecosystem functi...

  10. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two multicriteria-analysis-based methodologies for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis; the second is ELECTRE III. Both methodologies were applied to a case study, and sensitivity and robustness analyses were then carried out. These analyses demonstrate that the two methodologies produce equivalent results and present low sensitivity and high robustness. The results show that the Brazilian methodology is consistent and can be used safely to select a good solution, or a small set of good solutions, that can then be compared with more detailed methods.

  11. DETERMINING COARSE PARTICULATE MATTER CONCENTRATIONS: A PERFORMANCE EVALUATION OF CANDIDATE METHODOLOGIES UNDER WINTERTIME CONDITIONS

    EPA Science Inventory

    The main objective of this study is to evaluate the performance of sampling methods for potential use as a Federal Reference Method (FRM) capable of providing an estimate of coarse particle (PMc: particulate matter with an aerodynamic diameter between 2.5 um and 10 um) mass con...

  12. Evaluating improvements in landside access for airports.

    DOT National Transportation Integrated Search

    1998-10-01

    The purpose of this research was to describe the elements that comprise airport access and develop a methodology for identifying and evaluating existing landside access performance and proposed improvements from a passenger perspective. The scope was...

  13. Subsurface condition evaluation for asphalt pavement preservation treatments.

    DOT National Transportation Integrated Search

    2013-04-01

    This report presents a case study on the SR70 section with microsurface for understanding its performance; a development of a methodology for evaluating the asphalt pavement subsurface condition for applying pavement preservation treatments; and...

  14. Performance and Perception in the Flipped Learning Model: An Initial Approach to Evaluate the Effectiveness of a New Teaching Methodology in a General Science Classroom

    NASA Astrophysics Data System (ADS)

    González-Gómez, David; Jeong, Jin Su; Airado Rodríguez, Diego; Cañada-Cañada, Florentina

    2016-06-01

    "Flipped classroom" teaching methodology is a type of blended learning in which the traditional class setting is inverted: lecture is shifted outside of class, while classroom time is used to solve problems or do practical work through discussion and peer collaboration between students and instructors. This relatively new instructional methodology claims that flipping the classroom engages students more effectively with the learning process, achieving better teaching results. This research therefore aimed to evaluate the effects of the flipped classroom on students' performance and their perception of the new methodology. The study was conducted in a general science course, in the sophomore year of the Primary Education bachelor's degree at the Teacher Training School of the University of Extremadura (Spain), during the 2014/2015 academic year. To assess the suitability of the proposed methodology, the class was divided into two groups. The first group followed a traditional methodology and was used as a control. The "flipped classroom" methodology was used in the second group, where students were given diverse materials, such as video lessons and reading materials, to review at home before class. Online questionnaires were also provided to assess the students' progress before class. Finally, the results were compared in terms of students' achievement, and a post-task survey was conducted to gauge students' perceptions. A statistically significant difference was found on all assessments, with the flipped-class students performing higher on average. In addition, most students had a favorable perception of the flipped classroom, noting the ability to pause, rewind and review lectures, as well as increased individualized learning and increased teacher availability.

  15. Transferring Codified Knowledge: Socio-Technical versus Top-Down Approaches

    ERIC Educational Resources Information Center

    Guzman, Gustavo; Trivelato, Luiz F.

    2008-01-01

    Purpose: This paper aims to analyse and evaluate the transfer process of codified knowledge (CK) performed under two different approaches: the "socio-technical" and the "top-down". It is argued that the socio-technical approach supports the transfer of CK better than the top-down approach. Design/methodology/approach: Case study methodology was…

  16. Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario

    NASA Astrophysics Data System (ADS)

    Moscadelli, M.; Diani, M.; Corsini, G.

    2017-10-01

    In this paper, a methodology for evaluating the effectiveness of different TES strategies is presented. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is proposed as a way to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.

  17. Health systems around the world - a comparison of existing health system rankings.

    PubMed

    Schütte, Stefanie; Acevedo, Paula N Marin; Flahault, Antoine

    2018-06-01

    Existing health systems around the world differ because of the different combinations of components that can be considered in their establishment. The ranking of health systems has been a focal point for many years, especially the issue of performance. In 2000, the World Health Organization (WHO) performed a ranking to compare the performance of the health systems of its member countries. Since then, other health system rankings have been performed and the topic has become an issue of public discussion. A point of contention regarding these rankings is the methodology employed by each of them, since no gold standard exists. Therefore, this review focuses on evaluating the methodologies of each existing health system performance ranking to assess their reproducibility and transparency. A search was conducted to identify existing health system rankings, and a questionnaire was developed for the comparison of the methodologies based on the following indicators: (1) general information, (2) statistical methods, (3) data, and (4) indicators. Overall, nine rankings were identified, of which six focused on measuring population health without any financial component and were therefore excluded. Finally, three health system rankings were selected for this review: "Health Systems: Improving Performance" by the WHO, "Mirror, Mirror on the Wall: How the Performance of the US Health Care System Compares Internationally" by the Commonwealth Fund, and "The Most Efficient Health Care" by Bloomberg. After the rankings were compared by scoring them according to the indicators, the WHO ranking was considered the most complete with regard to reproducibility and transparency of methodology. This review and comparison could help to establish consensus in the field of health system research. It may also help in giving recommendations for future health rankings and in evaluating the current gap in the literature.

  18. Computer-Assisted Performance Evaluation for Navy Anti-Air Warfare Training: Concepts, Methods, and Constraints.

    ERIC Educational Resources Information Center

    Chesler, David J.

    An improved general methodological approach for the development of computer-assisted evaluation of trainee performance in the computer-based simulation environment is formulated in this report. The report focuses on the Tactical Advanced Combat Direction and Electronic Warfare system (TACDEW) at the Fleet Anti-Air Warfare Training Center at San…

  19. A Conceptual Framework to Help Evaluate the Quality of Institutional Performance

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2008-01-01

    Purpose: This study aims to present a general conceptual framework which can be used to evaluate quality and institutional performance in higher education. Design/methodology/approach: The quality of higher education is at the heart of the setting up of the European Higher Education Area. Strategic management is widely used in higher education…

  20. Granting Teachers the "Benefit of the Doubt" in Performance Evaluations

    ERIC Educational Resources Information Center

    Rogge, Nicky

    2011-01-01

    Purpose: This paper proposes a benefit of the doubt (BoD) approach to construct and analyse teacher effectiveness scores (i.e. SET scores). Design/methodology/approach: The BoD approach is related to data envelopment analysis (DEA), a linear programming tool for evaluating the relative efficiency performance of a set of similar units (e.g. firms,…

  1. DETERMINING COARSE PARTICULATE MATTER CONCENTRATIONS: A PERFORMANCE EVALUATION OF CANDIDATE METHODOLOGIES - STUDY DESIGN AND RESULTS FROM THE RTP EQUIPMENT SHAKEDOWN

    EPA Science Inventory

    The main objective of this study is to evaluate the performance of candidate sampling methods for potential use as a Federal Reference Method (FRM) capable of providing an estimate of coarse particle (PMc: particulate matter with an aerodynamic diameter between 2.5 um and 10 um...

  2. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    NASA Technical Reports Server (NTRS)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system are addressed, including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement offered by the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  3. Defining the Ecological Coefficient of Performance for an Aircraft Propulsion System

    NASA Astrophysics Data System (ADS)

    Şöhret, Yasin

    2018-05-01

    The aircraft industry, along with other industries, is these days held responsible regarding environmental issues. Therefore, the performance evaluation of aircraft propulsion systems should be conducted with respect to environmental and ecological considerations. The current paper aims to present the ecological coefficient of performance calculation methodology for aircraft propulsion systems. The ecological coefficient of performance is a widely preferred performance indicator of numerous energy conversion systems. On the basis of thermodynamic laws, the methodology used to determine the ecological coefficient of performance for an aircraft propulsion system is parametrically explained and illustrated in this paper for the first time. For a better understanding, the exergy analysis of a turbojet engine is first described in detail. The outputs of this analysis are then employed to define the ecological coefficient of performance for a turbojet engine. At the end of the study, the ecological coefficient of performance is evaluated parametrically and discussed depending on selected engine design parameters and performance measures. The author asserts the ecological coefficient of performance to be a beneficial indicator for researchers interested in aircraft propulsion system design and related topics.
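The abstract does not reproduce the defining relation. In the broader exergy literature (an assumption here, not a formula taken from this record), the ecological coefficient of performance is commonly written as useful power output per rate of exergy destruction, the latter given by the Gouy-Stodola theorem:

```latex
\mathrm{ECOP} \;=\; \frac{\dot{W}_{\mathrm{net}}}{\dot{E}x_{\mathrm{dest}}}
\;=\; \frac{\dot{W}_{\mathrm{net}}}{T_{0}\,\dot{S}_{\mathrm{gen}}}
```

where \(T_0\) is the dead-state (ambient) temperature and \(\dot{S}_{\mathrm{gen}}\) is the total entropy generation rate of the propulsion system. Higher ECOP means more useful output per unit of irreversibility.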

  4. [Controversial issues in economic evaluation (I): perspective and costs of Health Care interventions].

    PubMed

    Oliva, Juan; Brosa, Max; Espín, Jaime; Figueras, Montserrat; Trapero, Marta

    2015-01-01

    Economic evaluation of health care interventions has experienced strong growth over the past decade and is increasingly present as a support tool in decision-making on the public funding and pricing of health services in European countries. A necessary condition for their use is that the agents performing economic evaluations share a minimum set of agreed methodological rules. Although there are methodological issues on which there is a high degree of consensus, there are others on which no such agreement exists, either because they are closer to the normative field or because they have experienced significant methodological advances in recent years. In this first article of a series of three, we discuss the perspective of analysis and the assessment of costs in the economic evaluation of health interventions, using the Metaplan technique. Finally, research lines are proposed to overcome the identified discrepancies.

  5. Low-Level Analytical Methodology Updates to Support Decontaminant Performance Evaluations

    DTIC Science & Technology

    2011-06-01

    from EPDM and tire rubber coupon materials that were spiked with a known amount of the chemical agent VX, treated with bleach decontaminant, and ... to evaluate the performance of bleach decontaminant on EPDM and tire rubber coupons. Dose-confirmation or Tool samples were collected by delivering ... components:
    • An aging or damaged analytical column
    • A dirty detector
    • Other factors related to general instrument and/or sample analysis performance

  6. Investigating human cognitive performance during spaceflight

    NASA Astrophysics Data System (ADS)

    Pattyn, Nathalie; Migeotte, Pierre-Francois; Demaeseleer, Wim; Kolinsky, Regine; Morais, Jose; Zizi, Martin

    2005-08-01

    Although astronauts' subjective self-evaluations of cognitive functioning often report impairments, to date most studies of higher human cognitive functions in space have not yielded univocal results. Since no gold standard exists to evaluate the higher cognitive functions, we proposed to assess astronauts' cognitive performance through a novel series of tests combined with the simultaneous recording of physiological parameters. We report here the validation of our methodology and the cognitive results of this testing on the cosmonauts of the 11-day OdISSea mission to the ISS (2002) and on a control group of pilots carefully matched to the characteristics of the subjects. For the first time, we show a performance decrement in higher cognitive functions during space flight. Our results show a significant performance decrement for in-flight measurements, as well as measurable variations in the executive control of cognitive functions. Taken together, our data establish the validity of our methodology and the presence of altered information processing in operational conditions.

  7. Visual performance-based image enhancement methodology: an investigation of contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.

    2006-05-01

    While vast numbers of image-enhancing algorithms have already been developed, the majority have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image-enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson, and Woodell [2]. The assessment methodology was developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. The objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images against two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm: they searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.

  8. Measuring the Differences between Traditional Learning and Game-Based Learning Using Electroencephalography (EEG) Physiologically Based Methodology

    ERIC Educational Resources Information Center

    Chen, Ching-Huei

    2017-01-01

    Students' cognitive states can reflect a learning experience that results in engagement in an activity. In this study, we used electroencephalography (EEG) physiologically based methodology to evaluate students' levels of attention and relaxation, as well as their learning performance within a traditional and game-based learning context. While no…

  9. NPAC-Nozzle Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, permitting fast and accurate calculation of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.

  10. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for road pavement maintenance planning using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning requires scheduled rehabilitation of damaged pavement sections to keep the level of service high, and the importance of performance-based infrastructure asset management grounded in actual inspection data is globally recognized. Semi-automatic measurement systems using inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection with automatic measurement vehicles is expensive, depending on the instruments' specifications and the inspection interval, so implementing road maintenance work, especially for local governments, is difficult from a cost-effectiveness standpoint. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and assessment of damaged pavement sections using 3D point cloud data acquired to build urban 3D models. The simplified evaluation results were able to provide useful information for road administrators to identify pavement sections needing a detailed examination or an immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated with Chow-test and F-test models, extracting sections where a structural change in the coordinate values was pronounced. Finally, the validity of the methodology was investigated through a case study using actual inspection data from local roads.
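
    The structural-change screening described above rests on the classic Chow test. Below is a minimal sketch of the Chow F-statistic for a break in a simple linear model, on synthetic data; it is not the paper's point-cloud pipeline, just the underlying statistic.

```python
import numpy as np

def chow_test(x, y, split, k=2):
    """Chow F-statistic for a structural break at index `split`.

    Fits y = a + b*x to the pooled data and to each sub-segment, then
    compares residual sums of squares; k is the number of fitted
    parameters (intercept + slope).
    """
    def ssr(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        r = ys - X @ beta
        return float(r @ r)

    n = len(x)
    s_pooled = ssr(x, y)
    s1, s2 = ssr(x[:split], y[:split]), ssr(x[split:], y[split:])
    return ((s_pooled - (s1 + s2)) / k) / ((s1 + s2) / (n - 2 * k))

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
noise = rng.normal(scale=0.1, size=100)

y_break = np.where(x < 50, x, 5.0 * x - 200.0) + noise  # clear break at 50
y_smooth = 2.0 * x + 1.0 + noise                        # no break

f_break = chow_test(x, y_break, split=50)
f_smooth = chow_test(x, y_smooth, split=50)
```

    A large F value (compared against the F distribution with k and n-2k degrees of freedom) flags a section where the fitted relation changes, which is how irregular point-cloud sections can be extracted.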

  11. Performance in Physiology Evaluation: Possible Improvement by Active Learning Strategies

    ERIC Educational Resources Information Center

    Montrezor, Luís H.

    2016-01-01

    The evaluation process is complex and extremely important in the teaching/learning process. Evaluations are constantly employed in the classroom to assist students in the learning process and to help teachers improve the teaching process. The use of active methodologies encourages students to participate in the learning process, encourages…

  12. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  14. Evaluating stakeholder management performance using a stakeholder report card: the next step in theory and practice.

    PubMed

    Malvey, Donna; Fottler, Myron D; Slovensky, Donna J

    2002-01-01

    In the highly competitive health care environment, the survival of an organization may depend on how well powerful stakeholders are managed. Yet, the existing strategic stakeholder management process does not include evaluation of stakeholder management performance. To address this critical gap, this paper proposes a systematic method for evaluation using a stakeholder report card. An example of a physician report card based on this methodology is presented.

  15. The 2014 Michigan Public High School Context and Performance Report Card

    ERIC Educational Resources Information Center

    Spalding, Audrey

    2014-01-01

    The 2014 Michigan Public High School Context and Performance Report Card is the Mackinac Center's second effort to measure high school performance. The first high school assessment was published in 2012, followed by the Center's 2013 elementary and middle school report card, which used a similar methodology to evaluate school performance. The…

  16. Human region segmentation and description methods for domiciliary healthcare monitoring using chromatic methodology

    NASA Astrophysics Data System (ADS)

    Al-Temeemy, Ali A.

    2018-03-01

    A descriptor is proposed for use in domiciliary healthcare monitoring systems. The descriptor is produced from chromatic methodology to extract robust features from the monitoring system's images. It has superior discrimination capabilities, is robust to events that normally disturb monitoring systems, and requires less computational time and storage space to achieve recognition. A method of human region segmentation is also used with this descriptor. The performance of the proposed descriptor was evaluated using experimental data sets, obtained through a series of experiments performed in the Centre for Intelligent Monitoring Systems, University of Liverpool. The evaluation results show high recognition performance for the proposed descriptor in comparison to traditional descriptors, such as moment invariants. The results also show the effectiveness of the proposed segmentation method regarding distortion effects associated with domiciliary healthcare systems.

  17. An integrated methodology to assess the benefits of urban green space.

    PubMed

    De Ridder, K; Adamec, V; Bañuelos, A; Bruse, M; Bürger, M; Damsgaard, O; Dufek, J; Hirsch, J; Lefebre, F; Pérez-Lacorzana, J M; Thierry, A; Weber, C

    2004-12-01

    The interrelated issues of urban sprawl, traffic congestion, noise, and air pollution are major socioeconomic problems faced by most European cities. A methodology is currently being developed for evaluating the role of green space and urban form in alleviating the adverse effects of urbanisation, mainly focusing on the environment but also accounting for socioeconomic aspects. The objectives and structure of the methodology are briefly outlined and illustrated with preliminary results obtained from case studies performed on several European cities.

  18. Classification of small lesions on dynamic breast MRI: Integrating dimension reduction and out-of-sample extension into CADx methodology

    PubMed Central

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2014-01-01

    Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). 
Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
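
    The strict-separation principle at the heart of this abstract can be sketched generically: fit the dimension-reduction mapping on training data only, then project the held-out data with the fixed mapping. The following is a pure-NumPy illustration using PCA and a linear scorer on synthetic data; the study itself used nonlinear reductions such as Sammon's mapping with dedicated out-of-sample extension methods, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for high-dimensional lesion feature vectors (not real MRI
# data): the first two dimensions carry the class signal.
n, d = 200, 50
X = rng.normal(size=(n, d))
X[:, 0] *= 3.0
X[:, 1] *= 2.0
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
Xtr, Xte, ytr, yte = X[:140], X[140:], y[:140], y[140:]

# Strict train/test separation: fit the reduction on training data ONLY.
mu = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
W = Vt[:5].T                  # top-5 principal directions from training set
Ztr = (Xtr - mu) @ W
Zte = (Xte - mu) @ W          # out-of-sample extension with the fixed mapping

# Linear scoring model fit on the reduced training data
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(Ztr)), Ztr]),
                           ytr.astype(float), rcond=None)
scores = np.column_stack([np.ones(len(Zte)), Zte]) @ beta

# AUC = probability that a random positive outscores a random negative
pos, neg = scores[yte == 1], scores[yte == 0]
auc = np.mean(pos[:, None] > neg[None, :])
```

    Because the test rows never influence `mu` or `W`, the AUC computed here is an honest estimate of generalization, which is exactly the integrity the proposed CADx methodology is designed to preserve.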

  19. [Indicators of communication and degree of professional integration in healthcare].

    PubMed

    Mola, Ernesto; Maggio, Anna; Vantaggiato, Lucia

    2009-01-01

    According to the chronic care model, improving the management of chronic illness requires efficient communication between healthcare professionals and the creation of a web of integrated healthcare. The aim of this study was to identify an efficient methodology for evaluating the degree of professional integration through indicators related to communication between healthcare professionals. The following types of indicators were identified: structure indicators, to evaluate the presence of the prerequisites necessary for implementing the procedures; functional indicators, to quantitatively evaluate the use of communication instruments; and performance indicators. Defining specific indicators may be an appropriate methodology for evaluating the degree of integration and communication between health professionals, and could support an incentive bargaining system.

  20. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data.

    PubMed

    Schultz, Elise V; Schultz, Christopher J; Carey, Lawrence D; Cecil, Daniel J; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research-based to an operations-based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL) and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system's performance, where increasing the spatial scale reduced the dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity, using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (a metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated; it is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
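
    The POD and FAR statistics used in the evaluation above are simple contingency-table ratios. A minimal sketch with hypothetical verification counts (not the study's data):

```python
def pod_far(hits, misses, false_alarms):
    """Standard categorical verification scores for event detection.

    POD = hits / (hits + misses): fraction of observed events detected.
    FAR = false_alarms / (hits + false_alarms): fraction of issued
    warnings that did not verify.
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical counts for one algorithm configuration
pod, far = pod_far(hits=45, misses=15, false_alarms=30)
```

    Tuning parameters such as the sigma-level trades these two scores against each other: a stricter jump threshold lowers FAR but also lowers POD.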

  3. Some human factors issues in the development and evaluation of cockpit alerting and warning systems

    NASA Technical Reports Server (NTRS)

    Randle, R. J., Jr.; Larsen, W. E.; Williams, D. H.

    1980-01-01

    A set of general guidelines for evaluating a newly developed cockpit alerting and warning system in terms of human factors issues is provided. Although the discussion centers around a general methodology, it is applied specifically to the issues involved in alerting systems. An overall statement of the current operational problem is presented. Human factors problems with reference to existing alerting and warning systems are described. The methodology for proceeding through system development to system test is discussed. The differences between traditional human factors laboratory evaluations and those required for evaluation of complex man-machine systems under development are emphasized. Performance evaluation of the alerting and warning subsystem using a hypothetical sample system is explained.

  4. Deployment of a tool for measuring freeway safety performance.

    DOT National Transportation Integrated Search

    2011-12-01

    This project updated and deployed a freeway safety performance measurement tool, building upon a previous project that developed the core methodology. The tool evaluates the cumulative risk over time of an accident or a particular kind of accident. T...

  5. Development of an adaptive failure detection and identification system for detecting aircraft control element failures

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1990-01-01

    A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.

  6. The ALMA CONOPS project: the impact of funding decisions on observatory performance

    NASA Astrophysics Data System (ADS)

    Ibsen, Jorge; Hibbard, John; Filippi, Giorgio

    2014-08-01

    At a time when every penny counts, many organizations face the question of how much scientific impact a budget cut can have or, in more general terms, what the scientific impact of alternative (less costly) operational modes would be. In reply to such a question posed by the governing bodies, the ALMA project had to develop a methodology (ALMA Concepts for Operations, CONOPS) that attempts to measure the impact that alternative operational scenarios may have on the overall scientific production of the Observatory. Although the analysis and the results are ALMA specific, the developed approach is rather general and provides a methodology for a cost-performance analysis of alternatives before any radical alterations to the operations model are adopted. This paper describes the key aspects of the methodology: a) the definition of Figures of Merit (FoMs) for the assessment of quantitative science performance impacts as well as qualitative impacts, together with a methodology using these FoMs to evaluate the cost and impact of the different operational scenarios; b) the definition of a REFERENCE operational baseline; c) the identification of alternative scenarios, each replacing one or more concepts in the REFERENCE with a different concept that has a lower cost and some level of scientific and/or operational impact; d) the use of a cost-performance plane to graphically combine the effects that the alternative scenarios can have in terms of cost reduction and affected performance. Although this is a first-order assessment, we believe the approach is useful for comparing different operational models and for understanding the cost-performance impact of these choices. It can be used to make decisions to meet budget cuts as well as to evaluate possible new emergent opportunities.

  7. Obtaining optic disc center and pixel region by automatic thresholding methods on morphologically processed fundus images.

    PubMed

    Marin, Diego; Gegundez-Arias, Manuel E; Suero, Angel; Bravo, Jose M

    2015-02-01

    Development of automatic retinal disease diagnosis systems based on retinal image computer analysis can provide remarkably quicker screening programs for early detection. Such systems are mainly focused on the detection of the earliest ophthalmic signs of illness and require previous identification of fundal landmark features such as optic disc (OD), fovea or blood vessels. A methodology for accurate center-position location and OD retinal region segmentation on digital fundus images is presented in this paper. The methodology performs a set of iterative opening-closing morphological operations on the original retinography intensity channel to produce a bright region-enhanced image. Taking blood vessel confluence at the OD into account, a 2-step automatic thresholding procedure is then applied to obtain a reduced region of interest, where the center and the OD pixel region are finally obtained by performing the circular Hough transform on a set of OD boundary candidates generated through the application of the Prewitt edge detector. The methodology was evaluated on 1200 and 1748 fundus images from the publicly available MESSIDOR and MESSIDOR-2 databases, acquired from diabetic patients and thus being clinical cases of interest within the framework of automated diagnosis of retinal diseases associated to diabetes mellitus. This methodology proved highly accurate in OD-center location: average Euclidean distance between the methodology-provided and actual OD-center position was 6.08, 9.22 and 9.72 pixels for retinas of 910, 1380 and 1455 pixels in size, respectively. On the other hand, OD segmentation evaluation was performed in terms of Jaccard and Dice coefficients, as well as the mean average distance between estimated and actual OD boundaries. Comparison with the results reported by other reviewed OD segmentation methodologies shows our proposal renders better overall performance. 
Its effectiveness and robustness make this proposed automated OD location and segmentation method a suitable tool to be integrated into a complete prescreening system for early diagnosis of retinal diseases. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  9. Effectiveness evaluation of objective and subjective weighting methods for aquifer vulnerability assessment in urban context

    NASA Astrophysics Data System (ADS)

    Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet

    2016-10-01

    Groundwater vulnerability assessment has been an accepted practice for identifying zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary-information-based vulnerability assessment approach. The original DRASTIC approach considers the relative importance of features/sub-features based on subjective weighting/rating values; however, variability of features at a smaller scale is not reflected in this subjective assessment process. In contrast, objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system, but experts' opinions are not directly considered. Thus the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC) and single-parameter sensitivity analysis (SA-DRASTIC) - were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. The relative performance of the subjective and objective methods varies with the choice of water quality parameters. The methodology can be applied with or without suitable modification, and these evaluations establish its potential applicability for general vulnerability assessment in urban contexts.
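
    Of the objective weighting schemes compared above, the entropy information method is the most compact to illustrate. A sketch on a small hypothetical ratings matrix (not the Kanpur dataset): criteria that vary little across alternatives carry little information and therefore receive small weights.

```python
import numpy as np

def entropy_weights(M):
    """Entropy information method for objective criteria weighting.

    M: (alternatives x criteria) matrix of positive feature ratings.
    Criteria whose values vary more across alternatives (lower entropy)
    receive larger weights; weights sum to 1.
    """
    P = M / M.sum(axis=0)          # normalize each criterion column
    n = M.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    E = -(P * logs).sum(axis=0) / np.log(n)     # entropy scaled to [0, 1]
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

# Three zones rated on three hypothetical DRASTIC-style criteria; the last
# column is nearly constant, so it should receive the smallest weight.
M = np.array([[9.0, 2.0, 5.0],
              [1.0, 8.0, 5.0],
              [5.0, 5.0, 5.1]])
w = entropy_weights(M)
```

    This is the sense in which objective methods adapt to local variability: the weights are driven by the data matrix itself rather than by fixed expert ratings.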

  10. Statistical evaluation of vibration analysis techniques

    NASA Technical Reports Server (NTRS)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  11. An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Research in human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit, and the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. However, an organized database and a validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without them, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and the prediction of performance.

  12. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction.

    PubMed

    Park, Seong Ho; Han, Kyunghwa

    2018-03-01

    The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodology points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models in which artificial deep neural networks are used, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performances of a diagnostic or predictive model are summarized. Next, the effects of disease manifestation spectrum and disease prevalence on the performance results are explained, followed by a discussion of the difference between evaluating performance with internal versus external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating clinical performance as a result of spectrum bias and of overfitting in high-dimensional or overparameterized classification models, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies for ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.
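
    Discrimination and calibration, the two performance axes summarized above, can be illustrated in a few lines on hypothetical predicted risks (not a real model's output):

```python
import numpy as np

# Hypothetical predicted risks and observed outcomes for eight patients
p = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])
y = np.array([1,   1,   0,   1,   0,   1,    0,   0  ])

# Discrimination: AUC = P(score of a random positive > score of a random
# negative), computed over all positive/negative pairs.
pos, neg = p[y == 1], p[y == 0]
auc = np.mean(pos[:, None] > neg[None, :])

# Calibration-in-the-large: mean predicted risk minus observed event rate
cal_large = p.mean() - y.mean()

# Brier score: mean squared error of the probabilistic predictions
brier = np.mean((p - y) ** 2)
```

    Discrimination asks whether cases are ranked above non-cases; calibration asks whether predicted probabilities match observed frequencies. A model can score well on one axis and poorly on the other, which is why both are assessed.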

  13. Systematic review of the methodological quality of controlled trials evaluating Chinese herbal medicine in patients with rheumatoid arthritis

    PubMed Central

    Pan, Xin; Lopez-Olivo, Maria A; Song, Juhee; Pratt, Gregory; Suarez-Almazor, Maria E

    2017-01-01

    Objectives We appraised the methodological and reporting quality of randomised controlled clinical trials (RCTs) evaluating the efficacy and safety of Chinese herbal medicine (CHM) in patients with rheumatoid arthritis (RA). Design For this systematic review, electronic databases were searched from inception until June 2015. The search was limited to humans and non-case report studies, but was not limited by language, year of publication or type of publication. Two independent reviewers selected RCTs, evaluating CHM in RA (herbals and decoctions). Descriptive statistics were used to report on risk of bias and their adherence to reporting standards. Multivariable logistic regression analysis was performed to determine study characteristics associated with high or unclear risk of bias. Results Out of 2342 unique citations, we selected 119 RCTs including 18 919 patients: 10 108 patients received CHM alone and 6550 received one of 11 treatment combinations. A high risk of bias was observed across all domains: 21% had a high risk for selection bias (11% from sequence generation and 30% from allocation concealment), 85% for performance bias, 89% for detection bias, 4% for attrition bias and 40% for reporting bias. In multivariable analysis, fewer authors were associated with selection bias (allocation concealment), performance bias and attrition bias, and earlier year of publication and funding source not reported or disclosed were associated with selection bias (sequence generation). Studies published in non-English language were associated with reporting bias. Poor adherence to recommended reporting standards (<60% of the studies not providing sufficient information) was observed in 11 of the 23 sections evaluated. Limitations Study quality and data extraction were performed by one reviewer and cross-checked by a second reviewer. Translation to English was performed by one reviewer in 85% of the included studies. 
Conclusions Studies evaluating CHM often fail to meet expected methodological criteria, and high-quality evidence is lacking. PMID:28249848

  14. Performance and cost evaluation of health information systems using micro-costing and discrete-event simulation.

    PubMed

    Rejeb, Olfa; Pilet, Claire; Hamana, Sabri; Xie, Xiaolan; Durand, Thierry; Aloui, Saber; Doly, Anne; Biron, Pierre; Perrier, Lionel; Augusto, Vincent

    2018-06-01

Innovation and health-care funding reforms have contributed to the deployment of Information and Communication Technology (ICT) to improve patient care. Many health-care organizations consider ICT a crucial lever for enhancing health-care management. The purpose of this paper is to provide a methodology for assessing the organizational impact of a high-level Health Information System (HIS) on the patient pathway. We propose an integrated HIS performance-evaluation approach combining formal modeling with Architecture of Integrated Information Systems (ARIS) models, micro-costing for cost evaluation, and Discrete-Event Simulation (DES). The methodology is applied to the consultation process for cancer treatment, and simulation scenarios are established to assess the impact of the HIS on the patient pathway. We demonstrate that although a high-level HIS lengthens the consultation, the oncologists' occupation rate is lower and the quality of service is higher (measured through the amount of information available during the consultation to support the diagnosis). The method also identifies the most cost-effective ICT elements for improving care-process quality while minimizing costs. The methodology is flexible enough to be applied to other health-care systems.
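The DES component of such an evaluation can be sketched compactly. The toy model below is a hypothetical single-oncologist consultation queue, not the authors' ARIS-based model; the arrival rate, consultation time, and HIS lookup overhead are invented placeholders.

```python
import random

def simulate_consultations(n_patients=200, mean_interarrival=20.0,
                           consult_time=12.0, his_lookup=3.0, seed=1):
    """Toy single-server simulation: returns the oncologist's utilization.

    Each consultation lasts `consult_time` plus a fixed `his_lookup`
    overhead for querying the HIS (all values are illustrative).
    """
    random.seed(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)  # Poisson arrivals
        arrivals.append(t)
    service = consult_time + his_lookup  # HIS lengthens each consultation
    busy = 0.0
    free_at = 0.0  # time the oncologist next becomes idle
    for a in arrivals:
        start = max(a, free_at)  # wait if the oncologist is busy
        free_at = start + service
        busy += service
    return busy / free_at  # fraction of elapsed time spent consulting
```

Comparing runs with and without the HIS overhead reproduces the qualitative trade-off the paper reports: longer consultations, but measurable changes in the oncologist's occupation rate.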

  15. Evaluating the trade-off between mechanical and electrochemical performance of separators for lithium-ion batteries: Methodology and application

    NASA Astrophysics Data System (ADS)

    Plaimer, Martin; Breitfuß, Christoph; Sinz, Wolfgang; Heindl, Simon F.; Ellersdorfer, Christian; Steffan, Hermann; Wilkening, Martin; Hennige, Volker; Tatschl, Reinhard; Geier, Alexander; Schramm, Christian; Freunberger, Stefan A.

    2016-02-01

Lithium-ion batteries are in widespread use in electric and hybrid vehicles. Besides features like energy density, cost, lifetime, and recyclability, the safety of a battery system is of prime importance. The separator material impacts all of these properties and therefore requires informed selection. The interplay between the mechanical and electrochemical properties as key selection criteria is investigated. Mechanical properties were characterized using tensile and puncture penetration tests at abuse-relevant conditions. To assess electrochemical performance in terms of effective conductivity, a method based on impedance spectroscopy was introduced. The methodology is applied to ten commercial separators, allowing a trade-off analysis of mechanical versus electrochemical performance. Based on the results, and in combination with other factors, this offers an effective approach to selecting suitable separators for automotive applications.
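Once impedance spectroscopy yields the bulk resistance of the electrolyte-soaked separator, the effective conductivity follows from the standard relation sigma_eff = d / (R * A), and the MacMullin number N_M = sigma_0 / sigma_eff quantifies the transport hindrance. A sketch with illustrative numbers (not values from the paper):

```python
def effective_conductivity(r_sep_ohm, thickness_m, area_m2):
    """sigma_eff = d / (R * A), in S/m, for an electrolyte-soaked separator."""
    return thickness_m / (r_sep_ohm * area_m2)

def macmullin_number(sigma_electrolyte, sigma_eff):
    """N_M = sigma_0 / sigma_eff; lower values mean less transport hindrance."""
    return sigma_electrolyte / sigma_eff

# Hypothetical example: a 25 um separator over 2 cm^2 measuring 1.2 ohm.
sigma = effective_conductivity(1.2, 25e-6, 2e-4)   # ~0.104 S/m
nm = macmullin_number(1.0, sigma)                  # ~9.6 for a 1 S/m electrolyte
```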

  16. Increasing the applicability of wind power projects via a multi-criteria approach: methodology and case study

    NASA Astrophysics Data System (ADS)

    Polatidis, Heracles; Morales, Jan Borràs

    2016-11-01

In this paper a methodological framework for increasing the actual applicability of wind farms is developed and applied. The framework is based on multi-criteria decision-aid techniques that perform an integrated technical and societal evaluation of a number of potential wind power projects, each a variation of a pre-existing proposal that faces implementation difficulties. A number of evaluation criteria are established and assessed with dedicated software tools or evaluated comparatively on a semi-qualitative basis. The preferences of a diverse audience of pertinent stakeholders can also be incorporated into the overall analysis. The process identifies a new project configuration with greater implementation potential than the original proposal. The methodology is tested in a case study of a wind farm in the UK and relevant conclusions are drawn.

  17. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  18. An Effective Modal Approach to the Dynamic Evaluation of Fracture Toughness of Quasi-Brittle Materials

    NASA Astrophysics Data System (ADS)

    Ferreira, L. E. T.; Vareda, L. V.; Hanai, J. B.; Sousa, J. L. A. O.; Silva, A. I.

    2017-05-01

A modal dynamic analysis is used to evaluate the fracture toughness of concrete from the results of notched-through beam tests. The dimensionless functions describing the relation between frequency and specimen geometry, used to identify the variation in natural frequency as a function of crack depth, are first determined for a 150 × 150 × 500-mm notched-through specimen. The frequency decrease resulting from the propagating crack is modeled through a combined modal/fracture-mechanics approach, leading to the determination of an effective crack length. This length, obtained numerically, is used to evaluate the fracture toughness of the concrete, the critical crack-mouth opening displacements, and the proposed brittleness index. The methodology is applied to tests performed on high-strength concrete specimens, with the frequency response of each specimen evaluated before and after each crack-propagation step. The methodology is then validated by comparison with results from other methodologies described in the literature and suggested by RILEM.

  19. Clinical trials in palliative care: a systematic review of their methodological characteristics and of the quality of their reporting.

    PubMed

    Bouça-Machado, Raquel; Rosário, Madalena; Alarcão, Joana; Correia-Guedes, Leonor; Abreu, Daisy; Ferreira, Joaquim J

    2017-01-25

Over the past decades there has been a significant increase in the number of published clinical trials in palliative care. However, empirical evidence suggests that there are methodological problems in the design and conduct of studies, which raises questions about the validity and generalisability of the results and the strength of the available evidence. We sought to evaluate the methodological characteristics and assess the quality of reporting of clinical trials in palliative care. We performed a systematic review of published clinical trials assessing therapeutic interventions in palliative care. Trials were identified using MEDLINE (from its inception to February 2015). We assessed methodological characteristics and described the quality of reporting using the Cochrane Risk of Bias tool. We retrieved 107 studies. The most common medical field studied was oncology, and 43.9% of trials evaluated pharmacological interventions. Symptom control and physical dimensions (e.g. interventions on pain, breathlessness, or nausea) were the palliative care-specific issues most studied. We found under-reporting of key information, in particular on random sequence generation, allocation concealment, and blinding. While the number of clinical trials in palliative care has increased over time, methodological quality remains suboptimal, compromising the strength of the available evidence. A greater effort is therefore needed to enable the appropriate performance of future studies and increase the robustness of evidence-based medicine in this important field.

  20. Advanced biosensing methodologies developed for evaluating performance quality and safety of emerging biophotonics technologies and medical devices (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ilev, Ilko K.; Walker, Bennett; Calhoun, William; Hassan, Moinuddin

    2016-03-01

Biophotonics is an emerging field in modern biomedical technology that has opened up new horizons for the transfer of state-of-the-art techniques from the areas of lasers, fiber optics and biomedical optics to the life sciences and medicine. The field continues to expand, with advanced developments across the entire spectrum of biomedical applications ranging from fundamental "bench" laboratory studies to clinical patient "bedside" diagnostics and therapeutics. However, in order to translate these technologies into clinical device applications, the scientific and industrial community and the FDA face the requirement for a thorough evaluation and review of laser radiation safety and efficacy concerns. In many cases, the review process is complicated by the lack of effective means and standard test methods to precisely analyze the safety and effectiveness of some newly developed biophotonics techniques and devices. There is, therefore, an immediate public health need for new test protocols, guidance documents and standard test methods to precisely evaluate the fundamental characteristics, performance quality and safety of these technologies and devices. Here, we overview our recent development of novel test methodologies for safety and efficacy evaluation of some emerging biophotonics technologies and medical devices. These methodologies integrate advanced features of state-of-the-art optical sensor technologies and approaches such as high-resolution fiber-optic sensing, confocal and optical coherence tomography imaging, and infrared spectroscopy. The presentation also illustrates methodologies developed and implemented for testing intraocular lens implants, biochemical contamination of medical devices, ultrahigh-resolution nanoscopy, and femtosecond laser therapeutics.

  1. Evaluating the methodology and performance of jetting and flooding of granular backfill materials.

    DOT National Transportation Integrated Search

    2014-11-01

    Compaction of backfill in confined spaces on highway projects is often performed with small vibratory plates, based : solely on the experience of the contractor, leading to inadequate compaction. As a result, the backfill is prone to : erosion and of...

  2. Performance and Perception in the Flipped Learning Model: An Initial Approach to Evaluate the Effectiveness of a New Teaching Methodology in a General Science Classroom

    ERIC Educational Resources Information Center

    González-Gómez, David; Jeong, Jin Su; Airado Rodríguez, Diego; Cañada-Cañada, Florentina

    2016-01-01

    "Flipped classroom" teaching methodology is a type of blended learning in which the traditional class setting is inverted. Lecture is shifted outside of class, while the classroom time is employed to solve problems or doing practical works through the discussion/peer collaboration of students and instructors. This relatively new…

  3. A Maritime Phase Zero Force for the Year 2020

    DTIC Science & Technology

    2009-06-01

mind, the team constructed maritime forces and then evaluated them against the same scenarios to determine which ones performed better. ... 1. Project Methodology and Choice of Missions ... 2. Missions and Scenarios construction methodology ... projects are designed to build tools that students in the Systems Engineering Analysis curriculum have learned over the 18 month enrollment in the program

  4. 2008 Post-Election Voting Survey of Overseas Citizens: Statistical Methodology Report

    DTIC Science & Technology

    2009-08-01

    Gorsak. Westat performed data collection and editing. DMDC’s Survey Technology Branch, under the guidance of Frederick Licari, Branch Chief, is...POST-ELECTION VOTING SURVEY OF OVERSEAS CITIZENS: STATISTICAL METHODOLOGY REPORT Executive Summary The Uniformed and Overseas Citizens Absentee ...ease the process of voting absentee , (3) to evaluate other progress made to facilitate voting participation, and (4) to identify any remaining

  5. One Controller at a Time (1-CAT): A mimo design methodology

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Lucas, J. C.

    1987-01-01

    The One Controller at a Time (1-CAT) methodology for designing digital controllers for Large Space Structures (LSS's) is introduced and illustrated. The flexible mode problem is first discussed. Next, desirable features of a LSS control system design methodology are delineated. The 1-CAT approach is presented, along with an analytical technique for carrying out the 1-CAT process. Next, 1-CAT is used to design digital controllers for the proposed Space Based Laser (SBL). Finally, the SBL design is evaluated for dynamical performance, noise rejection, and robustness.

  6. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  7. Evaluation of innovative devices to control traffic entering from low-volume access points within a land closure.

    DOT National Transportation Integrated Search

    2014-04-01

    This report describes the methodology and results of analyses performed to identify and evaluate : alternative methods to control traffic entering a lane closure on a two-lane, two-way road from low-volume : access points. Researchers documented the ...

  8. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    PubMed Central

    2015-01-01

We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely, the logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. The relationship between the optimization capability of the CE algorithm and the distribution characteristic of the underlying chaotic system is investigated and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive evolutionary computation (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired-comparison-based mechanism naturally underlies the CE search scheme. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067
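The chaotic maps named in the abstract are standard one-dimensional recurrences. The sketch below shows how such a sequence can stand in for random mutation in a minimal evolutionary loop; the step size, start value, and greedy acceptance rule are illustrative, not the authors' CE algorithm.

```python
def logistic(x, mu=4.0):
    """Logistic map: chaotic on (0, 1) for mu = 4."""
    return mu * x * (1 - x)

def tent(x, mu=2.0):
    """Tent map: piecewise-linear chaotic map on (0, 1)."""
    return mu * min(x, 1 - x)

def chaotic_evolution(f, dim=2, iters=200, x0=0.123):
    """Toy CE loop: perturb the best point with a logistic-map sequence,
    keeping a candidate only when it improves the objective f."""
    best = [0.5] * dim
    best_val = f(best)
    c = x0
    for _ in range(iters):
        cand = []
        for b in best:
            c = logistic(c)                 # next chaotic number in (0, 1)
            cand.append(b + (c - 0.5) * 0.5)  # centered, scaled perturbation
        v = f(cand)
        if v < best_val:
            best, best_val = cand, v
    return best, best_val
```

Swapping `logistic` for `tent` (or a Gaussian or Hénon recurrence) changes the distribution of perturbations, which is exactly the factor the abstract identifies as driving optimization performance.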

  9. From determinism and probability to chaos: chaotic evolution towards philosophy and methodology of chaotic optimization.

    PubMed

    Pei, Yan

    2015-01-01

We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely, the logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. The relationship between the optimization capability of the CE algorithm and the distribution characteristic of the underlying chaotic system is investigated and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive evolutionary computation (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired-comparison-based mechanism naturally underlies the CE search scheme. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.

  10. Are normative sonographic values of kidney size in children valid and reliable? A systematic review of the methodological quality of ultrasound studies using the Anatomical Quality Assessment (AQUA) tool.

    PubMed

    Chhapola, Viswas; Tiwari, Soumya; Deepthi, Bobbity; Henry, Brandon Michael; Brar, Rekha; Kanwal, Sandeep Kumar

    2018-06-01

A plethora of research is available on ultrasonographic kidney size standards. We performed a systematic review of the methodological quality of ultrasound studies aimed at developing normative renal parameters in healthy children, evaluating the risk of bias (ROB) using the Anatomical Quality Assessment (AQUA) tool. We searched Medline, Scopus, CINAHL, and Google Scholar on June 4, 2018, and included observational studies measuring kidney size by ultrasonography in healthy children (0-18 years). The ROB of each study was evaluated in five domains using a 20-item coding scheme based on the AQUA tool framework. Fifty-four studies were included. Domain 1 (subject characteristics) had a high ROB in 63% of studies due to unclear description of age, sex, and ethnicity. Performance in Domain 2 (study design) was the best, with 85% of studies having a prospective design. Methodological characterization (Domain 3) was poor across the studies (<10% compliance), with suboptimal description of patient positioning, operator experience, and assessment of intra/inter-observer reliability. About three-fourths of the studies had a low ROB in Domain 4 (descriptive anatomy). Domain 5 (reporting of results) had a high ROB in approximately half of the studies, the majority reporting results only as measures of central tendency. Significant deficiencies and heterogeneity were observed in the methodological quality of ultrasound studies performed to date to measure kidney size in children. We provide a framework for conducting such studies in the future. PROSPERO (CRD42017071601).

  11. Evaluating Educational Programs. ETS R&D Scientific and Policy Contributions Series. ETS SPC-11-01. ETS Research Report No. RR-11-15

    ERIC Educational Resources Information Center

    Ball, Samuel

    2011-01-01

    Since its founding in 1947, ETS has conducted a significant and wide-ranging research program that has focused on, among other things, psychometric and statistical methodology; educational evaluation; performance assessment and scoring; large-scale assessment and evaluation; cognitive, developmental, personality, and social psychology; and…

  12. Nonlinear effects of stretch on the flame front propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.

    2010-10-15

In all experimental configurations, the flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims at evaluating the error in the laminar burning speed induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were respectively 300 K and 400 K for methane and iso-octane. Both methodologies (linear and nonlinear) are applied, and results in terms of laminar speed and burned-gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions in their Markstein length when the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the use of the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology for these critical cases consists in reducing the number of points used in the linear fit by increasing the initial flame radius used. (author)
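The linear methodology amounts to a least-squares fit of measured flame speed against stretch, V = V0 - Lb * K, extrapolated to zero stretch to recover the unstretched speed V0 and the Markstein length Lb. A pure-Python sketch on synthetic data (the numbers are illustrative, not the paper's measurements):

```python
def linear_unstretched_speed(stretch, speed):
    """Fit V = V0 - Lb * K by least squares; return (V0, Lb).

    V0 is the extrapolated zero-stretch flame speed, Lb the Markstein
    length (the negated slope). Uses the closed-form normal equations.
    """
    n = len(stretch)
    sx = sum(stretch)
    sy = sum(speed)
    sxx = sum(k * k for k in stretch)
    sxy = sum(k * v for k, v in zip(stretch, speed))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    v0 = (sy - slope * sx) / n
    return v0, -slope

# Hypothetical data: stretch rates in 1/s, flame speeds in m/s.
stretch = [100.0, 200.0, 300.0, 400.0, 500.0]
speed = [2.4 - 0.001 * k for k in stretch]
v0, lb = linear_unstretched_speed(stretch, speed)
```

The nonlinear methodology replaces this straight-line model with an implicit relation between V/V0 and stretch, which is why the two diverge at the equivalence ratios noted above.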

  13. A Statistical Evaluation of the Diagnostic Performance of MEDAS-The Medical Emergency Decision Assistance System

    PubMed Central

    Georgakis, D. Christine; Trace, David A.; Naeymi-Rad, Frank; Evens, Martha

    1990-01-01

    Medical expert systems require comprehensive evaluation of their diagnostic accuracy. The usefulness of these systems is limited without established evaluation methods. We propose a new methodology for evaluating the diagnostic accuracy and the predictive capacity of a medical expert system. We have adapted to the medical domain measures that have been used in the social sciences to examine the performance of human experts in the decision making process. Thus, in addition to the standard summary measures, we use measures of agreement and disagreement, and Goodman and Kruskal's λ and τ measures of predictive association. This methodology is illustrated by a detailed retrospective evaluation of the diagnostic accuracy of the MEDAS system. In a study using 270 patients admitted to the North Chicago Veterans Administration Hospital, diagnoses produced by MEDAS are compared with the discharge diagnoses of the attending physicians. The results of the analysis confirm the high diagnostic accuracy and predictive capacity of the MEDAS system. Overall, the agreement of the MEDAS system with the “gold standard” diagnosis of the attending physician has reached a 90% level.
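Goodman and Kruskal's λ is a proportional-reduction-in-error measure computable directly from a contingency table of system versus physician diagnoses. A minimal sketch (the tables in the usage below are hypothetical, not MEDAS data):

```python
def goodman_kruskal_lambda(table):
    """lambda_{col|row}: proportional reduction in error when predicting
    the column category (e.g. physician diagnosis) from the row category
    (e.g. system diagnosis).

    `table` is a list of rows of counts. Returns 0 when the row variable
    gives no predictive help, 1 for perfect prediction.
    """
    n = sum(sum(row) for row in table)
    col_totals = [sum(col) for col in zip(*table)]
    max_col = max(col_totals)              # best guess without the rows
    sum_row_max = sum(max(row) for row in table)  # best guess per row
    return (sum_row_max - max_col) / (n - max_col)
```

A diagonal table (perfect agreement) yields λ = 1; a uniform table (independence) yields λ = 0.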

  14. Evaluating Innovations in Home Care for Performance Accountability.

    PubMed

    Collister, Barbara; Gutscher, Abram; Ambrogiano, Jana

    2016-01-01

    Concerns about rising costs and the sustainability of our healthcare system have led to a drive for innovative solutions and accountability for performance. Integrated Home Care, Calgary Zone, Alberta Health Services went beyond traditional accountability measures to use evaluation methodology to measure the progress of complex innovations to its organization structure and service delivery model. This paper focuses on the first two phases of a three-phase evaluation. The results of the first two phases generated learning about innovation adoption and sustainability, and performance accountability at the program-level of a large publicly funded healthcare organization.

  15. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
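The three-layer virtual-machine view and the common interaction-event structure lend themselves to a brief sketch. The class and field names below are hypothetical illustrations of the idea, not the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    """Common structure posited for every dialogue step."""
    prompt: str            # what the interactive layer presents
    user_input: str        # what the user supplies
    response: str = ""     # what the system returns

class DialogueManager:
    """Middle layer: routes events between the interactive layer and
    the application interface layer, logging them for later evaluation."""
    def __init__(self, application):
        self.application = application  # application-interface callable
        self.log = []                   # interaction history for analysis

    def handle(self, event):
        event.response = self.application(event.user_input)
        self.log.append(event)
        return event.response
```

The logged events are exactly the kind of data the proposed methodology would feed into its user-behavior equations and controlled experiments.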

  16. Evaluating the performance of a soil moisture data assimilation system for agricultural drought monitoring

    USDA-ARS?s Scientific Manuscript database

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this ...

  17. Reactor safeguards system assessment and design. Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varnado, G.B.; Ericson, D.M. Jr.; Daniel, S.L.

    1978-06-01

This report describes the development and application of a methodology for evaluating the effectiveness of nuclear power reactor safeguards systems. Analytic techniques are used to identify the sabotage acts which could lead to release of radioactive material from a nuclear power plant, to determine the areas of a plant which must be protected to assure that significant release does not occur, to model the physical plant layout, and to evaluate the effectiveness of various safeguards systems. The methodology was used to identify those aspects of reactor safeguards systems which have the greatest effect on overall system performance and which, therefore, should be emphasized in the licensing process. With further refinements, the methodology can be used by the licensing reviewer to aid in assessing proposed or existing safeguards systems.

  18. To publish or not to publish? On the aggregation and drivers of research performance

    PubMed Central

    De Witte, Kristof

    2010-01-01

This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to the traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher's control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, and retention) of the researchers explain variations in measured research performance. PMID:21057573
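Endogenous output weighting of this kind is often formalized as a "benefit of the doubt" DEA model: each researcher receives the weights that maximize their own composite score, subject to no researcher scoring above 1 under those same weights. A brute-force two-output solver by vertex enumeration (a standard LP fact is that the optimum lies at a vertex of the feasible region); the data are hypothetical, and the paper's tailored DEA variant may add further constraints.

```python
from itertools import combinations

def bod_score(outputs, k, eps=1e-9):
    """Benefit-of-the-doubt score of unit k, for exactly two outputs.

    Solves: max w . y_k  subject to  w . y_i <= 1 for all units i, w >= 0,
    by enumerating intersections of constraint pairs (LP vertices).
    `outputs` is a list of (y1, y2) tuples, one per unit.
    """
    # Constraints in the form a1*w1 + a2*w2 <= c: one per unit, plus w >= 0.
    cons = [(y[0], y[1], 1.0) for y in outputs]
    cons += [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best = 0.0
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel constraints: no unique intersection
        w1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
        w2 = (a1 * c2 - a2 * c1) / det
        if w1 < -eps or w2 < -eps:
            continue
        if all(a * w1 + b * w2 <= c + eps for a, b, c in cons):
            best = max(best, w1 * outputs[k][0] + w2 * outputs[k][1])
    return best
```

A researcher on the output frontier scores 1; a researcher dominated on every dimension scores below 1 even under their most favorable weights, which is the "benefit of the doubt" interpretation.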

  19. Methodology development for evaluation of selective-fidelity rotorcraft simulation

    NASA Technical Reports Server (NTRS)

    Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel

    1992-01-01

This paper addresses the initial step toward the goal of establishing performance and handling-qualities acceptance criteria for real-time rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework, the simulator is classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also a technique to determine the required levels of subsystem fidelity for a specific task.

  20. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification

    DOT National Transportation Integrated Search

    2012-03-31

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the : Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network : screening methods all require the same inputs: traffic collision data and Sa...

  1. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification.

    DOT National Transportation Integrated Search

    2012-03-01

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the : Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network : screening methods all require the same inputs: traffic collision data and Sa...

  2. Performance of forty-one microbial source tracking methods: A twenty-seven lab evaluation study

    EPA Science Inventory

    The last decade has seen development of numerous new microbial source tracking (MST) methodologies, but many of these have been tested in just a few laboratories with a limited number of fecal samples. This method evaluation study examined the specificity and sensitivity of 43 ...

  3. A Human Factors Evaluation of a Methodology for Pressurized Crew Module Acceptability for Zero-Gravity Ingress of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sanchez, Merri J.

    2000-01-01

    This project aimed to develop a methodology for evaluating the performance and acceptability of pressurized crew module volume for zero-gravity (zero-g) ingress of a spacecraft, and to evaluate the operational acceptability of the NASA crew return vehicle (CRV) for zero-g ingress of the astronaut crew, volume for crew tasks, and general crew module and seat layout. No standard or methodology has been established for evaluating volume acceptability in human spaceflight vehicles. Volume affects astronauts' ability to ingress and egress the vehicle, and to maneuver in and perform critical operational tasks inside the vehicle. Much research has been conducted on aircraft ingress, egress, and rescue in order to establish military and civil aircraft standards. However, due to the extremely limited number of human-rated spacecraft, this topic has been unaddressed. The NASA CRV was used for this study. The prototype vehicle can return a 7-member crew from the International Space Station in an emergency. The vehicle's internal arrangement must be designed to facilitate rapid zero-g ingress, zero-g maneuverability, ease of one-g egress and rescue, and ease of operational tasks in multiple acceleration environments. A full-scale crew module mockup was built and outfitted with representative adjustable seats, crew equipment, and a volumetrically equivalent hatch. Human factors testing was conducted in three acceleration environments using ground-based facilities and the KC-135 aircraft. Performance and acceptability measurements were collected. Data analysis was conducted using analysis of variance and nonparametric techniques.
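    The data-analysis step named above, analysis of variance alongside a nonparametric counterpart, can be sketched with SciPy. The task-completion times below are invented for illustration; they are not data from the study.

    ```python
    from scipy import stats

    # Hypothetical ingress task-completion times (s) in three acceleration
    # environments; the numbers are invented for illustration only.
    one_g   = [42.1, 39.5, 44.0, 40.8, 43.2]
    zero_g  = [55.3, 58.9, 52.7, 60.1, 57.4]
    partial = [47.6, 45.2, 49.9, 46.8, 48.3]

    # Parametric comparison: one-way analysis of variance.
    f_stat, p_anova = stats.f_oneway(one_g, zero_g, partial)

    # Nonparametric counterpart: Kruskal-Wallis H-test.
    h_stat, p_kw = stats.kruskal(one_g, zero_g, partial)

    print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
    print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
    ```

    With clearly separated groups like these, both tests agree that the environment affects completion time; reporting both guards against the normality assumption behind the ANOVA.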

  4. Effectiveness evaluation of the R&D projects in organizations financed by the budget expenses

    NASA Astrophysics Data System (ADS)

    Yakovlev, D.; Yushkov, E.; Pryakhin, A.; Bogatyreova, M.

    2017-01-01

    The issues of R&D project performance and their prospects are closely tied to knowledge management. In the initial stages of project development, it is the quality of the project evaluation that is crucial for the result and for the generation of future knowledge. Currently, no common methodology exists for the evaluation of new R&D financed from the budget. Notably, the assessment of scientific and technical projects (ST projects) varies greatly depending on the type of customer: government or business structures. An extensive methodological groundwork was formed with respect to orders placed by business structures. It included "an internal administrative order" by the company management for the results of STA intended for its own ST divisions. Regrettably, this is not the case with state orders in the field of STA, although the issue requires state regulation and official methodological support. The article is devoted to the methodological assessment of the scientific and technical effectiveness of studies performed at the expense of budget funds, and suggests a new concept based on the definition of a cost-effectiveness index. Thus, the study shows it is necessary to extend the previous approach to projects of different levels: micro-, meso-, and macro-level projects. The preliminary results of the research show that a common methodological approach must underpin the financing of projects under government contracts within the framework of budget financing and stock financing. This should be developed as general guidelines as well as recommendations that reflect specific sectors of the public sector, various project levels and forms of financing, and different stages of the project life cycle.

  5. Development of a Methodology to Conduct Usability Evaluation for Hand Tools that May Reduce the Amount of Small Parts that are Dropped During Installation while Processing Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Miller, Darcy

    2000-01-01

    Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware," (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools, which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools, while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers / managers and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the difference in performance of the three participant groups.

  6. Assessing the need for an update of a probabilistic seismic hazard analysis using a SSHAC Level 1 study and the Seismic Hazard Periodic Reevaluation Methodology

    DOE PAGES

    Payne, Suzette J.; Coppersmith, Kevin J.; Coppersmith, Ryan; ...

    2017-08-23

    A key decision for nuclear facilities is evaluating the need for an update of an existing seismic hazard analysis in light of new data and information that has become available since the time that the analysis was completed. We introduce the newly developed risk-informed Seismic Hazard Periodic Review Methodology (referred to as the SHPRM) and present how a Senior Seismic Hazard Analysis Committee (SSHAC) Level 1 probabilistic seismic hazard analysis (PSHA) was performed in an implementation of this new methodology. The SHPRM offers a defensible and documented approach that considers both the changes in seismic hazard and engineering-based risk information of an existing nuclear facility to assess the need for an update of an existing PSHA. The SHPRM has seven evaluation criteria that are employed at specific analysis, decision, and comparison points which are applied to seismic design categories established for nuclear facilities in the United States. The SHPRM is implemented using a SSHAC Level 1 study performed for the Idaho National Laboratory, USA. The implementation focuses on the first six of the seven evaluation criteria of the SHPRM, which are all provided from the SSHAC Level 1 PSHA. Finally, to illustrate outcomes of the SHPRM that do not lead to the need for an update and those that do, example implementations of the SHPRM are performed for nuclear facilities that have target performance goals expressed as a mean annual frequency of unacceptable performance of 1x10^-4, 4x10^-5, and 1x10^-5.
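    A target performance goal stated as a mean annual frequency of unacceptable performance can be checked with the standard risk integral, convolving a seismic hazard curve with a fragility curve. This is a generic illustration, not the SHPRM itself; the hazard and fragility parameters below are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Mean annual frequency of unacceptable performance via the risk
    # integral  P_F = -∫ P(fail | a) dH(a)/da da,
    # where H(a) is an annual-exceedance hazard curve and P(fail | a) a
    # lognormal fragility.  All parameter values below are hypothetical.

    a = np.logspace(-2, 1, 2000)          # ground-motion level (g)

    # Power-law hazard curve H(a) = k0 * a^-k (annual exceedance frequency).
    k0, k = 1e-4, 2.0
    H = k0 * a**(-k)

    # Lognormal fragility: median capacity 1.5 g, log std 0.4 (hypothetical).
    p_fail = norm.cdf(np.log(a / 1.5) / 0.4)

    # Risk integral: weight the fragility by the hazard-curve slope and
    # integrate with the trapezoid rule on the non-uniform grid.
    integrand = p_fail * np.gradient(H, a)
    p_F = -np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))

    print(f"Mean annual frequency of unacceptable performance: {p_F:.2e}")
    meets_goal = p_F <= 1e-4   # compare against a target goal of 1e-4 / yr
    ```

    For a power-law hazard and lognormal fragility this integral has the closed form H(median)·exp(k²β²/2), which the numerical result can be checked against.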


  8. Evaluating Effectiveness of Modeling Motion System Feedback in the Enhanced Hess Structural Model of the Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.

    2009-01-01

    In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm, and a vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparison analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning parameters of the original Hess structural model in order to match the actual control behavior recorded during the experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique, which was developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, the expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original model. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover-model standards imposed on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the model of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
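    The crossover-model metrics mentioned above can be computed directly from McRuer's crossover model of the open-loop man-machine dynamics. The crossover frequency and effective time delay below are typical textbook magnitudes, not values identified in this study.

    ```python
    import numpy as np

    # McRuer's crossover model of the combined pilot-vehicle open loop:
    #   Yp*Yc(jw) = wc * exp(-j*w*tau) / (j*w)
    # wc (crossover frequency) and tau (effective delay) are hypothetical.
    wc = 2.0    # rad/s
    tau = 0.3   # s

    w = np.logspace(-1, 1, 10000)
    G = wc * np.exp(-1j * w * tau) / (1j * w)

    # Gain-crossover frequency: where |G| passes through 1 (0 dB).
    i = np.argmin(np.abs(np.abs(G) - 1.0))
    w_cross = w[i]

    # Phase margin: 180 deg plus the open-loop phase at crossover.
    phase_margin = 180.0 + np.degrees(np.angle(G[i]))

    print(f"crossover at {w_cross:.2f} rad/s, phase margin {phase_margin:.1f} deg")
    ```

    For this model the crossover occurs exactly at wc, and the phase margin is 90° - wc·tau (in degrees), here about 56°; a model-matching procedure would compare these two numbers for the tuned and expanded operator models.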

  9. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, R.K.

    1994-10-01

    The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  10. Using data from monitoring combined sewer overflows to assess, improve, and maintain combined sewer systems.

    PubMed

    Montserrat, A; Bosch, Ll; Kiser, M A; Poch, M; Corominas, Ll

    2015-02-01

    Using low-cost sensors, data can be collected on the occurrence and duration of overflows in each combined sewer overflow (CSO) structure in a combined sewer system (CSS). The collection and analysis of real data can be used to assess, improve, and maintain CSSs in order to reduce the number and impact of overflows. The objective of this study was to develop a methodology to evaluate the performance of CSSs using low-cost monitoring. This methodology includes (1) assessing the capacity of a CSS using overflow duration and rain volume data, (2) characterizing the performance of CSO structures with statistics, (3) evaluating the compliance of a CSS with government guidelines, and (4) generating decision tree models to provide support to managers for making decisions about system maintenance. The methodology is demonstrated with a case study of a CSS in La Garriga, Spain. The rain volume breaking point from which CSO structures started to overflow ranged from 0.6 mm to 2.8 mm. The structures with the best and worst performance in terms of overflow (overflow probability, order, duration and CSO ranking) were characterized. Most of the obtained decision trees to predict overflows from rain data had accuracies ranging from 70% to 83%. The results obtained from the proposed methodology can greatly support managers and engineers dealing with real-world problems, improvements, and maintenance of CSSs. Copyright © 2014 Elsevier B.V. All rights reserved.
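    The last step of the methodology, decision-tree models that predict overflows from rain data, can be sketched with scikit-learn. The rain and overflow records below are synthetic, and the 2 mm breaking point is a hypothetical value in the range reported above.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic rain events; a real application would use the monitored
    # overflow records from each CSO structure.
    rng = np.random.default_rng(0)
    rain_mm = rng.uniform(0, 10, size=(500, 1))   # rain volume per event (mm)

    # Assume a structure that starts overflowing around 2 mm, with noise.
    overflow = (rain_mm[:, 0] + rng.normal(0, 0.5, 500) > 2.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        rain_mm, overflow, test_size=0.3, random_state=0)

    # Shallow tree so the learned rain-volume thresholds stay interpretable.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_tr, y_tr)
    accuracy = tree.score(X_te, y_te)
    print(f"overflow-prediction accuracy: {accuracy:.2f}")
    ```

    The learned split thresholds play the role of the rain-volume breaking points, and the held-out accuracy is the figure comparable to the 70-83% range reported for the case study.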

  11. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
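    One common statistical approach in such meta-analyses, random-effects pooling of a performance metric across studies, can be sketched as follows. This is a generic DerSimonian-Laird estimator, not the specific methodology of the paper; the per-study estimates and variances are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical per-study estimates of a repeatability metric and their
    # within-study variances (e.g. they could be log within-subject
    # coefficients of variation from test-retest data).
    y = np.array([0.12, 0.18, 0.10, 0.22, 0.15])
    v = np.array([0.0004, 0.0010, 0.0006, 0.0020, 0.0008])

    # Fixed-effect (inverse-variance) pooling, used to build Cochran's Q.
    w_fixed = 1.0 / v
    y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)

    # DerSimonian-Laird between-study variance tau^2.
    Q = np.sum(w_fixed * (y - y_fixed) ** 2)
    df = len(y) - 1
    C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - df) / C)

    # Random-effects pooled estimate and its standard error.
    w_re = 1.0 / (v + tau2)
    y_pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))

    print(f"pooled {y_pooled:.3f} +/- {1.96 * se_pooled:.3f} (tau^2 = {tau2:.4f})")
    ```

    With heterogeneous studies, tau² is positive and the random-effects standard error exceeds the fixed-effect one, which is the behavior the review's small-sample caveats concern.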

  12. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  13. Impact of reconstruction strategies on system performance measures : maximizing safety and mobility while minimizing life-cycle costs : final report, December 8, 2008.

    DOT National Transportation Integrated Search

    2008-12-08

    The objective of this research is to develop a general methodological framework for planning and evaluating the effectiveness of highway reconstruction strategies on the system's performance measures, in particular safety, mobility, and the tot...

  14. Important Publications in the Area of Photovoltaic Performance |

    Science.gov Websites

    2011, DOI: 978-0-12-385934-1. Photoelectrochemical Water Splitting: Standards, Experimental Methods. Energy Systems Testing, Solar Energy 73, 443-467 (2002). D.R. Myers, K. Emery, and C. Gueymard, "Revising Performance Evaluation Methodologies for Energy Ratings," Proc. 24th IEEE Photovoltaic Specialists Conf

  15. Development Research of a Teachers' Educational Performance Support System: The Practices of Design, Development, and Evaluation

    ERIC Educational Resources Information Center

    Hung, Wei-Chen; Smith, Thomas J.; Harris, Marian S.; Lockard, James

    2010-01-01

    This study adopted design and development research methodology (Richey & Klein, "Design and development research: Methods, strategies, and issues," 2007) to systematically investigate the process of applying instructional design principles, human-computer interaction, and software engineering to a performance support system (PSS) for behavior…

  16. Ecological Development and Validation of a Music Performance Rating Scale for Five Instrument Families

    ERIC Educational Resources Information Center

    Wrigley, William J.; Emmerson, Stephen B.

    2013-01-01

    This study investigated ways to improve the quality of music performance evaluation in an effort to address the accountability imperative in tertiary music education. An enhanced scientific methodology was employed incorporating ecological validity and using recognized qualitative methods involving grounded theory and quantitative methods…

  17. The Interpersonal Challenges of Instructional Leadership: Principals' Effectiveness in Conversations about Performance Issues

    ERIC Educational Resources Information Center

    Le Fevre, Deidre M.; Robinson, Viviane M. J.

    2015-01-01

    Purpose: Principals commonly struggle to have effective conversations about staff performance issues, tending to tolerate, protect, and work around such issues rather than effectively addressing them. This article evaluates principals' effectiveness in having "difficult" conversations with parents and with teachers. Research Methodology:…

  18. Non-contact versus contact-based sensing methodologies for in-home upper arm robotic rehabilitation.

    PubMed

    Howard, Ayanna; Brooks, Douglas; Brown, Edward; Gebregiorgis, Adey; Chen, Yu-Ping

    2013-06-01

    In recent years, robot-assisted rehabilitation has gained momentum as a viable means of improving outcomes of therapeutic interventions. Such therapy experiences allow controlled and repeatable trials and quantitative evaluation of mobility metrics. Typically, though, these robotic devices have focused on rehabilitation within a clinical setting. In these traditional robot-assisted rehabilitation studies, participants are required to perform goal-directed movements with the robot during a therapy session. This requires physical contact between the participant and the robot to enable precise control of the task, as well as a means to collect relevant performance data. On the other hand, non-contact means of robot interaction can provide a safe methodology for extracting the control data needed for in-home rehabilitation. As such, in this paper we discuss a contact and a non-contact based method for upper-arm rehabilitation exercises that enables quantification of upper-arm movements. We evaluate our methodology on upper-arm abduction/adduction movements and discuss the advantages and limitations of each approach as applied to an in-home rehabilitation scenario.
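    For the non-contact case, quantifying an abduction/adduction movement reduces to an angle between two vectors once a tracker (e.g. a depth camera) supplies 3-D joint positions. The joint coordinates below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical 3-D joint positions (m) from a non-contact tracker.
    shoulder = np.array([0.0, 1.4, 0.0])
    elbow    = np.array([0.0, 1.1, 0.3])   # arm partially raised sideways
    hip      = np.array([0.0, 0.9, 0.0])

    upper_arm = elbow - shoulder           # shoulder-to-elbow vector
    trunk     = hip - shoulder             # downward trunk reference

    # Abduction angle = angle between the upper arm and the trunk line.
    cos_theta = upper_arm @ trunk / (
        np.linalg.norm(upper_arm) * np.linalg.norm(trunk))
    abduction_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    print(f"abduction angle: {abduction_deg:.1f} deg")
    ```

    Computing this angle frame by frame yields the movement metric (range, speed, smoothness) without any physical contact with the participant.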

  19. A systematic review of economic evaluations of treatments for patients with epilepsy.

    PubMed

    Wijnen, Ben F M; van Mastrigt, Ghislaine A P G; Evers, Silvia M A A; Gershuni, Olga; Lambrechts, Danielle A J E; Majoie, Marian H J M; Postulart, Debby; Aldenkamp, Bert A P; de Kinderen, Reina J A

    2017-05-01

    The increasing number of treatment options and the high costs associated with epilepsy have fostered the development of economic evaluations in epilepsy. It is important to examine the availability and quality of these economic evaluations and to identify potential research gaps. As well as looking at both pharmacologic (antiepileptic drugs [AEDs]) and nonpharmacologic (e.g., epilepsy surgery, ketogenic diet, vagus nerve stimulation) therapies, this review examines the methodologic quality of the full economic evaluations included. A literature search was performed in MEDLINE, EMBASE, NHS Economic Evaluation Database (NHS EED), Econlit, Web of Science, and CEA Registry. In addition, Cochrane Reviews, Cochrane DARE, and Cochrane Health Technology Assessment Databases were used. To identify relevant studies, predefined clinical search strategies were combined with a search filter designed to identify health economic studies. Specific search strategies were devised for the following topics: (1) AEDs, (2) patients with cognitive deficits, (3) elderly patients, (4) epilepsy surgery, (5) ketogenic diet, (6) vagus nerve stimulation, and (7) treatment of (non)convulsive status epilepticus. A total of 40 publications were included in this review, 29 (73%) of which were articles about pharmacologic interventions. The mean quality score of all articles on the Consensus Health Economic Criteria (CHEC)-extended checklist was 81.8%; the lowest quality score was 21.05%, and five studies had a score of 100%. On the Consolidated Health Economic Evaluation Reporting Standards (CHEERS), the average quality score was 77.0%, the lowest was 22.7%, and four studies were rated 100%. There was a substantial difference in methodology among the included articles, which hampered the attempt to combine information meaningfully. Overall, the methodologic quality was acceptable; however, some studies performed significantly worse than others. The heterogeneity between the studies stresses the need to define a reference case (e.g., how an economic evaluation within epilepsy should be performed) and to derive consensus on what constitutes "standard optimal care." Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  20. Identifying Items to Assess Methodological Quality in Physical Therapy Trials: A Factor Analysis

    PubMed Central

    Cummings, Greta G.; Fuentes, Jorge; Saltaji, Humam; Ha, Christine; Chisholm, Annabritt; Pasichnyk, Dion; Rogers, Todd

    2014-01-01

    Background: Numerous tools and individual items have been proposed to assess the methodological quality of randomized controlled trials (RCTs). The frequency of use of these items varies according to health area, which suggests a lack of agreement regarding their relevance to trial quality or risk of bias. Objective: The objectives of this study were: (1) to identify the underlying component structure of items and (2) to determine relevant items to evaluate the quality and risk of bias of trials in physical therapy by using an exploratory factor analysis (EFA). Design: A methodological research design was used, and an EFA was performed. Methods: Randomized controlled trials used for this study were randomly selected from searches of the Cochrane Database of Systematic Reviews. Two reviewers used 45 items gathered from 7 different quality tools to assess the methodological quality of the RCTs. An exploratory factor analysis was conducted using the principal axis factoring (PAF) method followed by varimax rotation. Results: Principal axis factoring identified 34 items loaded on 9 common factors: (1) selection bias; (2) performance and detection bias; (3) eligibility, intervention details, and description of outcome measures; (4) psychometric properties of the main outcome; (5) contamination and adherence to treatment; (6) attrition bias; (7) data analysis; (8) sample size; and (9) control and placebo adequacy. Limitation: Because of the exploratory nature of the results, a confirmatory factor analysis is needed to validate this model. Conclusions: To the authors' knowledge, this is the first factor analysis to explore the underlying component items used to evaluate the methodological quality or risk of bias of RCTs in physical therapy. The items and factors represent a starting point for evaluating the methodological quality and risk of bias in physical therapy trials. Empirical evidence of the association among these items with treatment effects and a confirmatory factor analysis of these results are needed to validate these items. PMID:24786942

  1. Identifying items to assess methodological quality in physical therapy trials: a factor analysis.

    PubMed

    Armijo-Olivo, Susan; Cummings, Greta G; Fuentes, Jorge; Saltaji, Humam; Ha, Christine; Chisholm, Annabritt; Pasichnyk, Dion; Rogers, Todd

    2014-09-01

    Numerous tools and individual items have been proposed to assess the methodological quality of randomized controlled trials (RCTs). The frequency of use of these items varies according to health area, which suggests a lack of agreement regarding their relevance to trial quality or risk of bias. The objectives of this study were: (1) to identify the underlying component structure of items and (2) to determine relevant items to evaluate the quality and risk of bias of trials in physical therapy by using an exploratory factor analysis (EFA). A methodological research design was used, and an EFA was performed. Randomized controlled trials used for this study were randomly selected from searches of the Cochrane Database of Systematic Reviews. Two reviewers used 45 items gathered from 7 different quality tools to assess the methodological quality of the RCTs. An exploratory factor analysis was conducted using the principal axis factoring (PAF) method followed by varimax rotation. Principal axis factoring identified 34 items loaded on 9 common factors: (1) selection bias; (2) performance and detection bias; (3) eligibility, intervention details, and description of outcome measures; (4) psychometric properties of the main outcome; (5) contamination and adherence to treatment; (6) attrition bias; (7) data analysis; (8) sample size; and (9) control and placebo adequacy. Because of the exploratory nature of the results, a confirmatory factor analysis is needed to validate this model. To the authors' knowledge, this is the first factor analysis to explore the underlying component items used to evaluate the methodological quality or risk of bias of RCTs in physical therapy. The items and factors represent a starting point for evaluating the methodological quality and risk of bias in physical therapy trials. Empirical evidence of the association among these items with treatment effects and a confirmatory factor analysis of these results are needed to validate these items. 
© 2014 American Physical Therapy Association.
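    The extraction procedure named above, principal axis factoring followed by varimax rotation, can be sketched in NumPy. The six-item correlation matrix is hypothetical, chosen to contain two clear factors; this is an illustrative implementation, not the authors' analysis code.

    ```python
    import numpy as np

    def principal_axis_factoring(R, n_factors, n_iter=50):
        """Iterative principal axis factoring of a correlation matrix R."""
        R = np.asarray(R, dtype=float)
        # Initial communality estimates: squared multiple correlations.
        h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
        for _ in range(n_iter):
            Rr = R.copy()
            np.fill_diagonal(Rr, h2)          # reduced correlation matrix
            vals, vecs = np.linalg.eigh(Rr)
            order = np.argsort(vals)[::-1][:n_factors]
            loadings = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
            h2 = np.sum(loadings**2, axis=1)  # updated communalities
        return loadings

    def varimax(L, n_iter=100):
        """Varimax rotation of a loading matrix (Kaiser's criterion)."""
        p, k = L.shape
        Rot = np.eye(k)
        for _ in range(n_iter):
            Lr = L @ Rot
            u, _, vt = np.linalg.svd(
                L.T @ (Lr**3 - Lr @ np.diag(np.sum(Lr**2, axis=0)) / p))
            Rot = u @ vt
        return L @ Rot

    # Hypothetical 6-item correlation matrix with two clear factors.
    R = np.array([
        [1.0, 0.7, 0.6, 0.1, 0.1, 0.1],
        [0.7, 1.0, 0.6, 0.1, 0.1, 0.1],
        [0.6, 0.6, 1.0, 0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1, 1.0, 0.7, 0.6],
        [0.1, 0.1, 0.1, 0.7, 1.0, 0.6],
        [0.1, 0.1, 0.1, 0.6, 0.6, 1.0],
    ])
    L = varimax(principal_axis_factoring(R, 2))
    print(np.round(np.abs(L), 2))
    ```

    After rotation, the first three items load on one factor and the last three on the other, which is the "items grouped onto common factors" structure the study reports at a much larger scale (45 items, 9 factors).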

  2. TACCDAS Testbed Human Factors Evaluation Methodology,

    DTIC Science & Technology

    1980-03-01

    TEST METHOD: development of performance criteria; test participant identification; control of... Major milestones involved in the evaluation process leading up to the evaluation of the complete testbed in the field are identified. Test methods and... inevitably will be different in several ways from the intended system as foreseen by the system designers. The system users provide insights into these

  3. Development of Innovative Nondestructive Evaluation Technologies for the Inspection of Cracking and Corrosion Under Coatings

    NASA Astrophysics Data System (ADS)

    Lipetzky, Kirsten G.; Novack, Michele R.; Perez, Ignacio; Davis, William R.

    2001-11-01

    Three different innovative nondestructive evaluation technologies were developed and evaluated for their ability to detect fatigue cracks and corrosion hidden under painted aluminum panels. The three technologies included real-time ultrasound imaging, thermal imaging, and near-field microwave imaging. With each of these nondestructive inspection methods, subtasks were performed in order to optimize each methodology.

  4. A Critical Meta-Analysis of All Evaluations of State-Funded Preschool from 1977 to 1998: Implications for Policy, Service Delivery and Program Evaluation.

    ERIC Educational Resources Information Center

    Gilliam, Walter S.; Zigler, Edward F.

    2000-01-01

    Presents a meta-analytic review of evaluations of state-funded preschool programs over 20 years. Identifies several methodological flaws but also suggests that pattern of findings offers modest support for positive impact in improving children's developmental competence, improving later school attendance and performance, and reducing subsequent…

  5. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    PubMed

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from more accurate cost calculation compared with traditional step-down costing, and from its potential to evaluate the quality or effectiveness of health care on the basis of health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
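The core ABC computation, cost-pool rates multiplied by each patient's activity consumption, is simple to sketch. The activity pools, driver volumes, and figures below are hypothetical, not data from the study.

```python
# Hypothetical activity cost pools (total cost per period) and driver volumes
activity_costs = {"pre_op_education": 12000.0, "surgery": 90000.0, "post_op_nursing": 30000.0}
driver_volumes = {"pre_op_education": 300, "surgery": 150, "post_op_nursing": 600}

# Cost-driver rate = pool cost / driver volume
rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

def patient_cost(consumption):
    """Cost of one inpatient episode = sum over activities of rate x units consumed."""
    return sum(rates[a] * units for a, units in consumption.items())

episode = patient_cost({"pre_op_education": 2, "surgery": 1, "post_op_nursing": 4})  # → 880.0
```

Tracing costs through activities in this way is also what lets missing activities show up: an episode with zero units of a nursing-education activity that the care pathway expects is immediately visible in the consumption record.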

  6. Evaluation of complex community-based childhood obesity prevention interventions.

    PubMed

    Karacabeyli, D; Allender, S; Pinkney, S; Amed, S

    2018-05-16

    Multi-setting, multi-component community-based interventions have shown promise in preventing childhood obesity; however, evaluation of these complex interventions remains a challenge. The objective of the study is to systematically review published methodological approaches to outcome evaluation for multi-setting community-based childhood obesity prevention interventions and synthesize a set of pragmatic recommendations. MEDLINE, CINAHL and PsycINFO were searched from inception to 6 July 2017. Papers were included if the intervention targeted children ≤18 years, engaged at least two community sectors and described their outcome evaluation methodology. A single reviewer conducted title and abstract scans, full article review and data abstraction. Directed content analysis was performed by three reviewers to identify prevailing themes. Thirty-three studies were included, and of these, 26 employed a quasi-experimental design; the remainder were randomized controlled trials. Body mass index was the most commonly measured outcome, followed by health behaviour change and psychosocial outcomes. Six themes emerged, highlighting advantages and disadvantages of active vs. passive consent, quasi-experimental vs. randomized controlled trials, longitudinal vs. repeat cross-sectional designs and the roles of process evaluation and methodological flexibility in evaluating complex interventions. Selection of study designs and outcome measures compatible with community infrastructure, accompanied by process evaluation, may facilitate successful outcome evaluation. © 2018 World Obesity Federation.

  7. Methods for systematic reviews of health economic evaluations: a systematic review, comparison, and synthesis of method literature.

    PubMed

    Mathes, Tim; Walgenbach, Maren; Antoine, Sunya-Lee; Pieper, Dawid; Eikermann, Michaela

    2014-10-01

    The quality of systematic reviews of health economic evaluations (SR-HE) is often limited because of methodological shortcomings. One reason for this poor quality is that there are no established standards for the preparation of SR-HE. The objective of this study is to compare existing methods and suggest best practices for the preparation of SR-HE. To identify the relevant methodological literature on SR-HE, a systematic literature search was performed in Embase, Medline, the National Health System Economic Evaluation Database, the Health Technology Assessment Database, and the Cochrane methodology register, and webpages of international health technology assessment agencies were searched. The study selection was performed independently by 2 reviewers. Data were extracted by one reviewer and verified by a second reviewer. On the basis of the overlaps in the recommendations for the methods of SR-HE in the included papers, suggestions for best practices for the preparation of SR-HE were developed. Nineteen relevant publications were identified. The recommendations within them often differed. However, for most process steps there was some overlap between recommendations for the methods of preparation. The overlaps were taken as basis on which to develop suggestions for the following process steps of preparation: defining the research question, developing eligibility criteria, conducting a literature search, selecting studies, assessing the methodological study quality, assessing transferability, and synthesizing data. The differences in the proposed recommendations are not always explainable by the focus on certain evaluation types, target audiences, or integration in the decision process. Currently, there seem to be no standard methods for the preparation of SR-HE. The suggestions presented here can contribute to the harmonization of methods for the preparation of SR-HE. © The Author(s) 2014.

  8. 76 FR 45804 - Agency Information Collection Request; 60-Day Public Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... an algorithm that enables reliable prediction of a certain event. A responder could submit the correct algorithm, but without the methodology, the evaluation process could not be adequately performed...

  9. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology.

    PubMed

    Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L

    2015-03-01

    In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color,...). Recent work in this field has shown success in acquiring tissue motion. However, performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly because of the lack of ground truth for real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of a pre-process based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker and pre-process algorithms, according to the live intra-operative data. Experiments were performed on in vivo datasets and show that pre-processing can have a strong influence on tracking performance and that our strategy to find the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
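The forward-backward idea can be sketched generically: track features forward through a frame sequence, then backward to the start, and use the round-trip displacement as a self-consistency error in place of real ground truth. The toy tracker below is a hypothetical stand-in for a real feature tracker (e.g., pyramidal Lucas-Kanade); a consistent tracker yields zero round-trip error.

```python
import numpy as np

def forward_backward_error(track, points, n_frames=10):
    """Track points forward n_frames, then backward; the distance between each
    original point and its back-tracked position serves as artificial ground
    truth when no real ground truth exists."""
    fwd = points
    for f in range(n_frames):
        fwd = track(fwd, f, f + 1)
    bwd = fwd
    for f in range(n_frames, 0, -1):
        bwd = track(bwd, f, f - 1)
    return np.linalg.norm(points - bwd, axis=1)

# Toy "tracker": a known rigid shift per frame (illustrative only)
def shift_tracker(pts, src_frame, dst_frame):
    return pts + (dst_frame - src_frame) * np.array([1.0, 0.5])

pts = np.array([[10.0, 20.0], [30.0, 40.0]])
err = forward_backward_error(shift_tracker, pts, n_frames=5)  # → [0.0, 0.0]
```

With a real tracker, features whose round-trip error exceeds a threshold would be flagged as unreliable, which is how such a score substitutes for ground truth in ranking detector/tracker/pre-process combinations.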

  10. Using Team-Based Learning to Teach Grade 7 Biology: Student Satisfaction and Improved Performance

    ERIC Educational Resources Information Center

    Jarjoura, Christiane; Tayeh, Paula Abou; Zgheib, Nathalie K.

    2015-01-01

    Team-based learning (TBL) is an innovative form of collaborative learning. The purpose of this study was to evaluate TBL's effect on the performance and satisfaction of grade 7 students in biology in a private school in Lebanon, as well as teachers' willingness to implement this new methodology. An exploratory study was performed whereby two…

  11. New Methodologies To Evaluate the Memory Strategies of Deaf Individuals.

    ERIC Educational Resources Information Center

    Clark, Diane

    Prior studies have often confounded linguistic and perceptual performance when evaluating deaf subjects' skills, a confusion that may be responsible for results indicating lesser recall ability among the deaf. In this series of studies, this linguistic/perceptual confound was investigated in both the iconic and short term memory of deaf…

  12. On Information Retrieval (IR) Systems: Revisiting Their Development, Evaluation Methodologies, and Assumptions (SIGs LAN, ED).

    ERIC Educational Resources Information Center

    Stirling, Keith

    2000-01-01

    Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…

  13. Regulating the economic evaluation of pharmaceuticals and medical devices: a European perspective.

    PubMed

    Cookson, Richard; Hutton, John

    2003-02-01

    Throughout the developed world, economic evaluation of costly new pharmaceuticals and medical devices became increasingly widespread and systematic during the 1990s. However, serious concerns remain about the validity and relevance of this economic evidence, and about the transparency and accountability of its use in public sector reimbursement decisions. In this article, we summarise current concerns in Europe, based on interviews with European health economists from industry, universities, research institutes and consulting firms. We identify five challenges for European policy-makers, and conclude that there is considerable scope for improving decision-making without damaging incentives to innovate. The challenges are: (1) full publication of the economic evidence used in reimbursement decisions; (2) the redesign of licensing laws to improve the relevance of economic data available at product launch; (3) harmonisation of economic evaluation methodologies; (4) development of methodologies for evaluation of health inequality impacts; and (5) negotiation of price-performance deals to facilitate the use of economic evidence in post-launch pricing review decisions, as information is gathered from studies of product performance in routine use.

  14. Evaluation of glucose controllers in virtual environment: methodology and sample application.

    PubMed

    Chassin, Ludovic J; Wilinska, Malgorzata E; Hovorka, Roman

    2004-11-01

    Adaptive systems to deliver medical treatment in humans are safety-critical systems and require particular care in both the testing and the evaluation phase, which are time-consuming, costly, and confounded by ethical issues. The objective of the present work is to develop a methodology to test glucose controllers of an artificial pancreas in a simulated (virtual) environment. A virtual environment comprising a model of the carbohydrate metabolism and models of the insulin pump and the glucose sensor is employed to simulate individual glucose excursions in subjects with type 1 diabetes. The performance of the control algorithm within the virtual environment is evaluated by considering treatment and operational scenarios. The developed methodology includes two dimensions: testing in relation to specific lifestyle conditions, i.e. fasting, post-prandial, and lifestyle (metabolic) disturbances; and testing in relation to various operating conditions, i.e. expected operating conditions, adverse operating conditions, and system failure. We define safety and efficacy criteria and describe the measures to be taken prior to clinical testing. The use of the methodology is exemplified by tuning and evaluating a model predictive glucose controller being developed for a wearable artificial pancreas focused on fasting conditions. Our methodology to test glucose controllers in a virtual environment is instrumental in anticipating the results of real clinical tests for different physiological conditions and for different operating conditions. The thorough testing in the virtual environment reduces costs and speeds up the development process.
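As a minimal illustration of closing the loop between a controller and a virtual patient, the sketch below pairs a toy one-state glucose model with a proportional controller. All dynamics, gains, and thresholds here are invented for illustration and are far simpler than the metabolic and sensor/pump models the methodology actually uses.

```python
def simulate(g0=180.0, target=100.0, kp=0.5, drift=2.0, steps=50):
    """Toy virtual-patient loop: 'drift' pushes glucose up each step,
    a proportional insulin dose pushes it down when above target."""
    g = g0
    trace = []
    for _ in range(steps):
        u = max(0.0, kp * (g - target))  # dose only when glucose is above target
        g = g + drift - u                # hypothetical one-state glucose update
        trace.append(g)
    return trace

trace = simulate()
```

With these toy parameters the loop settles geometrically toward its fixed point at g = 104 (where drift and dose balance: 2 = 0.5·(g − 100)), and the trace never approaches a hypoglycemia threshold of 70, the kind of safety/efficacy criterion the methodology defines and checks across fasting, post-prandial, and failure scenarios.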

  15. Assessing the Fire Risk for a Historic Hangar

    NASA Technical Reports Server (NTRS)

    Datta, Koushik; Morrison, Richard S.

    2010-01-01

    NASA Ames Research Center (ARC) is evaluating options of reuse of its historic Hangar 1. As a part of this evaluation, a qualitative fire risk assessment study was performed to evaluate the potential threat of combustion of the historic hangar. The study focused on the fire risk trade-off of either installing or not installing a Special Hazard Fire Suppression System in the Hangar 1 deck areas. The assessment methodology was useful in discussing the important issues among various groups within the Center. Once the methodology was deemed acceptable, the results were assessed. The results showed that the risk remained in the same risk category, whether Hangar 1 does or does not have a Special Hazard Fire Suppression System. Note that the methodology assessed the risk to Hangar 1 and not the risk to an aircraft in the hangar. If one had a high value aircraft, the aircraft risk analysis could potentially show a different result. The assessed risk results were then communicated to management and other stakeholders.

  16. Electric Propulsion Test and Evaluation Methodologies for Plasma in the Environments of Space and Testing (EP TEMPEST)

    DTIC Science & Technology

    2016-04-14

    Swanson, AEDC. Path 1: magnetized electron transport impeded across magnetic field lines; transport via electron-particle collisions. Path 2*: electron... T&E (higher pressure, metallic walls) → impacts stability, performance, plume properties, thruster lifetime. Development of T&E methodologies: current-voltage-magnetic field (I-V-B) mapping; facility interaction studies; background pressure; plasma-wall...

  17. 2008 Post-Election Voting Survey of Uniformed Service Members: Statistical Methodology Report

    DTIC Science & Technology

    2009-08-01

    Research Fellow assisted in formatting this report. Data Recognition Corporation (DRC) performed data collection and editing. DMDC’s Survey... Executive Summary: The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA), 42 USC 1973ff, permits members of... citizens covered by UOCAVA, (2) to assess the impact of the FVAP’s efforts to simplify and ease the process of voting absentee, (3) to evaluate other...

  18. Instruments evaluating the quality of the clinical learning environment in nursing education: A systematic review of psychometric properties.

    PubMed

    Mansutti, Irene; Saiani, Luisa; Grassetti, Luca; Palese, Alvisa

    2017-03-01

    The clinical learning environment is fundamental to nursing education paths, capable of affecting learning processes and outcomes. Several instruments have been developed in nursing education, aimed at evaluating the quality of the clinical learning environments; however, no systematic review of the psychometric properties and methodological quality of these studies has been performed to date. The aims of the study were: 1) to identify validated instruments evaluating the clinical learning environments in nursing education; 2) to evaluate critically the methodological quality of the psychometric property estimation used; and 3) to compare psychometric properties across the instruments available. A systematic review of the literature (using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines) and an evaluation of the methodological quality of psychometric properties (using the COnsensus-based Standards for the selection of health Measurement INstruments guidelines). The Medline and CINAHL databases were searched. Eligible studies were those that satisfied the following criteria: a) validation studies of instruments evaluating the quality of clinical learning environments; b) in nursing education; c) published in English or Italian; d) before April 2016. The included studies were evaluated for the methodological quality of the psychometric properties measured and then compared in terms of both the psychometric properties and the methodological quality of the processes used. The search strategy yielded a total of 26 studies and eight clinical learning environment evaluation instruments. A variety of psychometric properties have been estimated for each instrument, with differing qualities in the methodology used. Concept and construct validity were poorly assessed in terms of their significance and rarely judged by the target population (nursing students). 
Some properties were rarely considered (e.g., reliability, measurement error, criterion validity), whereas others were frequently estimated, but using different coefficients and statistical analyses (e.g., internal consistency, structural validity), thus rendering comparison across instruments difficult. Moreover, the methodological quality adopted in the property assessments was poor or fair in most studies, compromising the goodness of the psychometric values estimated. Clinical learning placements are a key strategy in educating the future nursing workforce: instruments evaluating the quality of the settings, as well as their capacity to promote significant learning, are strongly recommended. Studies estimating psychometric properties with higher-quality research methodologies are needed to support nursing educators in the process of clinical placement accreditation and quality improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Evaluating the statistical methodology of randomized trials on dentin hypersensitivity management.

    PubMed

    Matranga, Domenica; Matera, Federico; Pizzo, Giuseppe

    2017-12-27

    The present study aimed to evaluate the characteristics and quality of statistical methodology used in clinical studies on dentin hypersensitivity management. An electronic search was performed for data published from 2009 to 2014 by using PubMed, Ovid/MEDLINE, and Cochrane Library databases. The primary search terms were used in combination. Eligibility criteria included randomized clinical trials that evaluated the efficacy of desensitizing agents in terms of reducing dentin hypersensitivity. A total of 40 studies were considered eligible for assessment of the quality of their statistical methodology. The four main concerns identified were i) use of nonparametric tests in the presence of large samples, coupled with lack of information about normality and equality of variances of the response; ii) lack of P-value adjustment for multiple comparisons; iii) failure to account for interactions between treatment and follow-up time; and iv) no information about the number of teeth examined per patient and the consequent lack of cluster-specific approach in data analysis. Owing to these concerns, statistical methodology was judged as inappropriate in 77.1% of the 35 studies that used parametric methods. Additional studies with appropriate statistical analyses are required to properly assess the efficacy of desensitizing agents.
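Concern ii), the lack of P-value adjustment for multiple comparisons, is straightforward to remedy. Below is a sketch of the Holm step-down adjustment, one standard family-wise-error-rate correction (chosen here for illustration; the review does not prescribe a particular method).

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls family-wise error rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p-value
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p by (m - k + 1), enforcing monotonicity
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

For example, `holm_adjust([0.01, 0.04, 0.03, 0.005])` yields, up to floating point, `[0.03, 0.06, 0.06, 0.02]`: after adjustment only the two smallest raw p-values remain below 0.05.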

  20. Systematic review of the methodological quality of controlled trials evaluating Chinese herbal medicine in patients with rheumatoid arthritis.

    PubMed

    Pan, Xin; Lopez-Olivo, Maria A; Song, Juhee; Pratt, Gregory; Suarez-Almazor, Maria E

    2017-03-01

    We appraised the methodological and reporting quality of randomised controlled clinical trials (RCTs) evaluating the efficacy and safety of Chinese herbal medicine (CHM) in patients with rheumatoid arthritis (RA). For this systematic review, electronic databases were searched from inception until June 2015. The search was limited to humans and non-case report studies, but was not limited by language, year of publication or type of publication. Two independent reviewers selected RCTs, evaluating CHM in RA (herbals and decoctions). Descriptive statistics were used to report on risk of bias and their adherence to reporting standards. Multivariable logistic regression analysis was performed to determine study characteristics associated with high or unclear risk of bias. Out of 2342 unique citations, we selected 119 RCTs including 18 919 patients: 10 108 patients received CHM alone and 6550 received one of 11 treatment combinations. A high risk of bias was observed across all domains: 21% had a high risk for selection bias (11% from sequence generation and 30% from allocation concealment), 85% for performance bias, 89% for detection bias, 4% for attrition bias and 40% for reporting bias. In multivariable analysis, fewer authors were associated with selection bias (allocation concealment), performance bias and attrition bias, and earlier year of publication and funding source not reported or disclosed were associated with selection bias (sequence generation). Studies published in non-English language were associated with reporting bias. Poor adherence to recommended reporting standards (<60% of the studies not providing sufficient information) was observed in 11 of the 23 sections evaluated. Study quality and data extraction were performed by one reviewer and cross-checked by a second reviewer. Translation to English was performed by one reviewer in 85% of the included studies. 
Studies evaluating CHM often fail to meet expected methodological criteria, and high-quality evidence is lacking. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. Design and implementation of a controlled clinical trial to evaluate the effectiveness and efficiency of routine opt-out rapid human immunodeficiency virus screening in the emergency department.

    PubMed

    Haukoos, Jason S; Hopkins, Emily; Byyny, Richard L; Conroy, Amy A; Silverman, Morgan; Eisert, Sheri; Thrun, Mark; Wilson, Michael; Boyett, Brian; Heffelfinger, James D

    2009-08-01

    In 2006, the Centers for Disease Control and Prevention (CDC) released revised recommendations for performing human immunodeficiency virus (HIV) testing in health care settings, including implementing routine rapid HIV screening, the use of an integrated opt-out consent, and limited prevention counseling. Emergency departments (EDs) have been a primary focus of these efforts. These revised CDC recommendations were primarily based on feasibility studies and have not been evaluated through the application of rigorous research methods. This article describes the design and implementation of a large prospective controlled clinical trial to evaluate the CDC's recommendations in an ED setting. From April 15, 2007, through April 15, 2009, a prospective quasi-experimental equivalent time-samples clinical trial was performed to compare the clinical effectiveness and efficiency of routine (nontargeted) opt-out rapid HIV screening (intervention) to physician-directed diagnostic rapid HIV testing (control) in a high-volume urban ED. In addition, three nested observational studies were performed to evaluate the cost-effectiveness and patient and staff acceptance of the two rapid HIV testing methods. This article describes the rationale, methodologies, and study design features of this program evaluation clinical trial. It also provides details regarding the integration of the principal clinical trial and its nested observational studies. Such ED-based trials are rare, but serve to provide valid comparisons between testing approaches. Investigators should consider similar methodology when performing future ED-based health services research.

  2. Systematic content evaluation and review of measurement properties of questionnaires for measuring self-reported fatigue among older people.

    PubMed

    Egerton, Thorlene; Riphagen, Ingrid I; Nygård, Arnhild J; Thingstad, Pernille; Helbostad, Jorunn L

    2015-09-01

    The assessment of fatigue in older people requires simple and user-friendly questionnaires that capture the phenomenon, yet are free from items indistinguishable from other disorders and experiences. This study aimed to evaluate the content, and systematically review and rate the measurement properties of self-report questionnaires for measuring fatigue, in order to identify the most suitable questionnaires for older people. This study first involved identification of questionnaires that purport to measure self-reported fatigue, and evaluation of the content using a rating scale developed for the purpose from contemporary understanding of the construct. Secondly, for the questionnaires that had acceptable content, we identified studies reporting measurement properties and rated the methodological quality of those studies according to the COSMIN system. Finally, we extracted and synthesised the results of the studies to give an overall rating for each questionnaire for each measurement property. The protocol was registered with PROSPERO (CRD42013005589). Of the 77 identified questionnaires, twelve were selected for review after content evaluation. Methodological quality varied, and there was a lack of information on measurement error and responsiveness. The PROMIS-Fatigue item bank and short forms perform the best. The FACIT-Fatigue scale, Parkinson's Fatigue Scale, Perform Questionnaire, and Uni-dimensional Fatigue Impact Scale also perform well and can be recommended. Minor modifications to improve performance are suggested. Further evaluation of unresolved measurement properties, particularly with samples including older people, is needed for all the recommended questionnaires.

  3. Relating seed treatments to nursery performance: Experience with southern pines

    Treesearch

    James P. Barnett

    2008-01-01

    Producing good quality seeds that perform well in the nursery continues to be challenging. High quality conifer seeds are obtained by optimizing collecting, processing, storing, and treating methodologies, and such quality is needed to consistently produce uniform nursery crops. Although new technologies are becoming available to evaluate seed quality, they have not...

  4. Context Matters: Principals' Sensemaking of Teacher Hiring and On-the-Job Performance

    ERIC Educational Resources Information Center

    Ingle, Kyle; Rutledge, Stacey; Bishop, Jennifer

    2011-01-01

    Purpose: School principals make sense of multiple messages, policies, and contexts within their school environments. The purpose of this paper is to examine specifically how school leaders make sense of hiring and subjective evaluation of on-the-job teacher performance. Design/methodology/approach: This qualitative study drew from 42 interviews…

  5. The role of extractives in naturally durable wood species

    Treesearch

    G.T. Kirker; A.B. Blodgett; R.A. Arango; P.K. Lebow; C.A. Clausen

    2013-01-01

    There are numerous examples of wood species that naturally exhibit enhanced performance and longevity in outside exposure independent of preservative treatment. Wood extractives are largely considered to be the contributing factor when evaluating and predicting the performance of a naturally durable wood species. However, little test methodology exists that focuses on...

  6. Safeguards Technology Development Program 1st Quarter FY 2018 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Manoj K.

    LLNL will evaluate the performance of a stilbene-based scintillation detector array for IAEA neutron multiplicity counting (NMC) applications. This effort will combine newly developed modeling methodologies and recently acquired high-efficiency stilbene detector units to quantitatively compare the prototype system performance with the conventional He-3 counters and liquid scintillator alternatives.

  7. A Methodology for Making Early Comparative Architecture Performance Evaluations

    ERIC Educational Resources Information Center

    Doyle, Gerald S.

    2010-01-01

    Complex and expensive systems' development suffers from a lack of method for making good system-architecture-selection decisions early in the development process. Failure to make a good system-architecture-selection decision increases the risk that a development effort will not meet cost, performance and schedule goals. This research provides a…

  8. Performance modeling of automated manufacturing systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment is presented of modeling methodologies and analysis techniques for performance evaluation of automated manufacturing systems. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.
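For the queueing paradigm surveyed above, the simplest worked example is a single machine modeled as an M/M/1 station. The closed-form formulas below are textbook steady-state results; the arrival and service rates are illustrative.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (Poisson arrivals,
    exponential service, single server)."""
    rho = arrival_rate / service_rate      # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: utilization >= 1")
    L = rho / (1 - rho)                    # mean number of jobs in system
    W = L / arrival_rate                   # mean time in system (Little's law)
    return {"utilization": rho, "L": L, "W": W}

# Illustrative rates: 2 jobs/hour arriving, 5 jobs/hour service capacity
m = mm1_metrics(arrival_rate=2.0, service_rate=5.0)  # utilization 0.4
```

The same relationships (utilization, congestion growing as ρ → 1, Little's law) carry over to the richer queueing-network and Petri-net models the book treats, which is why the M/M/1 case is the usual starting point.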

  9. Evaluation of Ohio work zone speed zones process.

    DOT National Transportation Integrated Search

    2014-06-01

    This report describes the methodology and results of analyses performed to determine the effectiveness of Ohio Department of Transportation processes for establishing work zone speed zones. Researchers observed motorists' speed choices upstream of a...

  10. Simulation modeling of route guidance concept

    DOT National Transportation Integrated Search

    1997-01-01

    The methodology of a simulation model developed at the University of New South Wales, Australia, for the evaluation of performance of Dynamic Route Guidance Systems (DRGS) is described. The microscopic simulation model adopts the event update simulat...

  11. A Safety Index and Method for Flightdeck Evaluation

    NASA Technical Reports Server (NTRS)

    Latorella, Kara A.

    2000-01-01

    If our goal is to improve safety through machine, interface, and training design, then we must define a metric of flightdeck safety that is usable in the design process. Current measures associated with our notions of "good" pilot performance and ultimate safety of flightdeck performance fail to provide an adequate index of safe flightdeck performance for design evaluation purposes. The goal of this research effort is to devise a safety index and method that allows us to evaluate flightdeck performance holistically and in a naturalistic experiment. This paper uses Reason's model of accident causation (1990) as a basis for measuring safety, and proposes a relational database system and method for 1) defining a safety index of flightdeck performance, and 2) evaluating the "safety" afforded by flightdeck performance for the purpose of design iteration. Methodological considerations, limitations, and benefits are discussed as well as extensions to this work.

  12. Defining the "proven technology" technical criterion in the reactor technology assessment for Malaysia's nuclear power program

    NASA Astrophysics Data System (ADS)

    Anuar, Nuraslinda; Kahar, Wan Shakirah Wan Abdul; Manan, Jamal Abdul Nasir Abd

    2015-04-01

    Developing countries that are considering the deployment of nuclear power plants (NPPs) in the near future need to perform reactor technology assessment (RTA) in order to select the most suitable reactor design. The International Atomic Energy Agency (IAEA) reported in the Common User Considerations (CUC) document that "proven technology" is one of the most important technical criteria for newcomer countries in performing the RTA. The qualitative description of five desired features for "proven technology" is relatively broad and only provides a general guideline to its characterization. This paper proposes a methodology to define the "proven technology" term according to a specific country's requirements using a three-stage evaluation process. The first evaluation stage screens the available technologies in the market against a predefined minimum Technology Readiness Level (TRL) derived as a condition based on national needs and policy objectives. The result is a list of technology options, which are then assessed in the second evaluation stage against quantitative definitions of CUC desired features for proven technology. The list of potential technology candidates produced by this evaluation is further narrowed down to obtain a list of proven technology candidates by assessing them against selected risk criteria and the established maximum allowable total score using a scoring matrix. The outcome of this methodology is the proven technology candidates selected using an accurate definition of "proven technology" that fulfills the policy objectives, national needs and risk, and country-specific CUC desired features of the country that performs this assessment. A simplified assessment for Malaysia is carried out to demonstrate and suggest the use of the proposed methodology. In this exercise, ABWR, AP1000, APR1400 and EPR designs emerged as the top-ranked proven technology candidates according to Malaysia's definition of "proven technology".
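The staged screening described above can be sketched as a filter pipeline. Every concrete value below (design names, TRL values, risk scores, thresholds) is hypothetical, and the second stage (scoring against CUC desired features) is omitted for brevity.

```python
# Hypothetical thresholds for the TRL gate and the risk-score cap
MIN_TRL = 8
MAX_RISK_SCORE = 10

# Illustrative designs with made-up TRLs and per-criterion risk scores
designs = {
    "DesignA": {"trl": 9, "risk_scores": {"licensing": 2, "construction": 3, "operation": 2}},
    "DesignB": {"trl": 9, "risk_scores": {"licensing": 4, "construction": 5, "operation": 4}},
    "DesignC": {"trl": 6, "risk_scores": {"licensing": 1, "construction": 1, "operation": 1}},
}

# Stage 1: screen available technologies against the minimum TRL
options = {name: d for name, d in designs.items() if d["trl"] >= MIN_TRL}

# Stage 3: keep only candidates whose total risk score stays within the cap
candidates = [name for name, d in options.items()
              if sum(d["risk_scores"].values()) <= MAX_RISK_SCORE]
```

Here DesignC fails the TRL gate and DesignB exceeds the risk cap, leaving DesignA as the sole proven-technology candidate; a country-specific assessment would tune both thresholds and criteria to its national needs and policy objectives.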

  14. Product vs corporate carbon footprint: Some methodological issues. A case study and review on the wine sector.

    PubMed

    Navarro, Alejandra; Puig, Rita; Fullana-I-Palmer, Pere

    2017-03-01

    Carbon footprint (CF) is nowadays one of the most widely used environmental indicators. The scope of a CF assessment can be corporate (when all production processes of a company are evaluated, together with upstream and downstream processes, following a life cycle approach) or product (when one of the products is evaluated throughout its life cycle). Our hypothesis was that product CF studies (PCF) usually collect corporate data, because it is easier for companies to obtain than product data. Six main methodological issues to take into account when collecting corporate data to be used for PCF studies were postulated and discussed in the present paper: fugitive emissions, credits from waste recycling, use of "equivalent factors", reference flow definition, accumulation, and allocation of corporate values to minor products. A large project with 18 wineries, wine being one of the most important agri-food products assessed through CF methodologies, was used to study and exemplify these 6 methodological issues. One of the main conclusions was that it is indeed possible to collect corporate inventory data on a per-year basis to perform a PCF, keeping in mind the 6 methodological issues described here. In the literature, most papers present their results as a PCF, while they collected company data and obtained, in fact, a "key performance indicator" (i.e., CO2 eq emissions per unit of product produced), which is then used as a product environmental impact figure. The methodology discussed in this paper for the wine case study is widely applicable to any other product or industrial activity. Copyright © 2017 Elsevier B.V. All rights reserved.
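    The allocation issue raised here, spreading corporate values over products, can be illustrated with a minimal sketch; all figures and the unit-based allocation rule are invented for illustration.

```python
# Sketch of allocating a corporate (per-year) carbon inventory to one product,
# one of the methodological issues discussed above. All figures are invented.
corporate_co2e_kg = 120_000     # hypothetical winery-wide annual CO2 eq
output = {"wine_bottles": 90_000, "grape_juice_l": 30_000}

# Unit-based allocation: each unit of output carries an equal share. This is
# exactly the kind of simplification that turns a corporate inventory into a
# "key performance indicator" rather than a full product carbon footprint.
total_units = sum(output.values())
per_unit = corporate_co2e_kg / total_units
wine_kpi = per_unit             # kg CO2 eq per bottle
print(round(wine_kpi, 3))
```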

  15. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    NASA Technical Reports Server (NTRS)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.
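    The spirit of the Bayesian comparison can be shown with a toy posterior-odds calculation; the detection probabilities below are invented and are not the paper's detectability estimates.

```python
# Toy Bayesian consistency check: given that Ginga detected lines and BATSE
# has not, compare H1 ("lines exist") against H0 ("no lines") by posterior
# odds. All probabilities are invented for illustration only.
p_det_ginga_h1 = 0.6    # assumed P(Ginga detects | lines exist)
p_det_batse_h1 = 0.5    # assumed P(BATSE detects in its bursts | lines exist)
p_false_ginga = 0.05    # assumed P(Ginga "detects" | no lines)

prior_odds = 1.0        # equal prior belief in H1 and H0

# Likelihood of the data (Ginga: detection, BATSE: none) under each hypothesis.
like_h1 = p_det_ginga_h1 * (1 - p_det_batse_h1)
like_h0 = p_false_ginga * 1.0   # no lines -> a BATSE non-detection is certain

posterior_odds = prior_odds * like_h1 / like_h0
print(posterior_odds)   # > 1 means the observations remain consistent with H1
```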

  16. [Methodology for the comprehensive evaluation of the quality of performance of activities of medical and social experts].

    PubMed

    Moskalenko, V F; Gorban', Ie M; Marunich, V V; Ipatov, A V; Sergiieni, O V

    2001-01-01

    The paper scientifically substantiates the methodology, approaches, criteria, and control indices for assessing the activities of medical-and-social expertise establishments. Most efficiency indices, and certain indices of weak points in the work of the service's establishments, depend on their interaction with curative and prophylactic institutions; the best results in preventing disability and rehabilitating invalids are expected to be achieved through collaborative efforts. Other criteria and intermediate indices affecting the quality of activities reflect the resources and trained personnel available to the service's establishments, the amount of work, and organizational measures designed to raise the quality of medical-and-social expert performance.

  17. Integrated Aero-Propulsion CFD Methodology for the Hyper-X Flight Experiment

    NASA Technical Reports Server (NTRS)

    Cockrell, Charles E., Jr.; Engelund, Walter C.; Bittner, Robert D.; Dilley, Arthur D.; Jentink, Tom N.; Frendi, Abdelkader

    2000-01-01

    Computational fluid dynamics (CFD) tools have been used extensively in the analysis and development of the X-43A Hyper-X Research Vehicle (HXRV). A significant element of this analysis is the prediction of integrated vehicle aero-propulsive performance, which includes an integration of aerodynamic and propulsion flow fields. This paper describes analysis tools used and the methodology for obtaining pre-flight predictions of longitudinal performance increments. The use of higher-fidelity methods to examine flow-field characteristics and scramjet flowpath component performance is also discussed. Limited comparisons with available ground test data are shown to illustrate the approach used to calibrate methods and assess solution accuracy. Inviscid calculations to evaluate lateral-directional stability characteristics are discussed. The methodology behind 3D tip-to-tail calculations is described and the impact of 3D exhaust plume expansion in the afterbody region is illustrated. Finally, future technology development needs in the area of hypersonic propulsion-airframe integration analysis are discussed.

  18. Evaluating a collaborative IT based research and development project.

    PubMed

    Khan, Zaheer; Ludlow, David; Caceres, Santiago

    2013-10-01

    In common with all projects, evaluating an Information Technology (IT) based research and development project is necessary in order to discover whether or not the outcomes of the project are successful. However, evaluating large-scale collaborative projects is especially difficult as: (i) stakeholders from different countries are involved who, almost inevitably, have diverse technological and/or application domain backgrounds and objectives; (ii) multiple and sometimes conflicting application-specific and user-defined requirements exist; and (iii) multiple and often conflicting technological research and development objectives are apparent. In this paper, we share our experiences based on a large-scale integrated research project, the HUMBOLDT project, with a duration of 54 months and contributions from 27 partner organisations plus 4 sub-contractors from 14 different European countries. In the HUMBOLDT project, a specific evaluation methodology was defined and utilised for the user evaluation of the project outcomes. The user evaluation performed on the HUMBOLDT Framework and its associated nine application scenarios from various application domains not only resulted in an evaluation of the integrated project, but also revealed the benefits and disadvantages of the evaluation methodology. This paper presents the evaluation methodology, discusses in detail the process of applying it to the HUMBOLDT project and provides an in-depth analysis of the results, which can be usefully applied to other collaborative research projects in a variety of domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Going beyond a First Reader: A Machine Learning Methodology for Optimizing Cost and Performance in Breast Ultrasound Diagnosis.

    PubMed

    Venkatesh, Santosh S; Levenback, Benjamin J; Sultan, Laith R; Bouzghar, Ghizlane; Sehgal, Chandra M

    2015-12-01

    The goal of this study was to devise a machine learning methodology as a viable low-cost alternative to a second reader to help augment physicians' interpretations of breast ultrasound images in differentiating benign and malignant masses. Two independent feature sets consisting of visual features based on a radiologist's interpretation of images and computer-extracted features when used as first and second readers and combined by adaptive boosting (AdaBoost) and a pruning classifier resulted in a very high level of diagnostic performance (area under the receiver operating characteristic curve = 0.98) at a cost of pruning a fraction (20%) of the cases for further evaluation by independent methods. AdaBoost also improved the diagnostic performance of the individual human observers and increased the agreement between their analyses. Pairing AdaBoost with selective pruning is a principled methodology for achieving high diagnostic performance without the added cost of an additional reader for differentiating solid breast masses by ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
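    The pairing of a combined classifier with selective pruning can be sketched as follows; a simple average of two readers' scores stands in for the AdaBoost combination, and all case IDs, scores, and thresholds are hypothetical.

```python
# Sketch of pairing a combined classifier with selective pruning: cases whose
# combined score falls in an uncertain band are pruned for further evaluation
# by independent methods. A plain average stands in for the boosted (AdaBoost)
# combination; scores and thresholds are made up.
cases = [  # (case id, reader-1 score, reader-2 score), scores in [0, 1]
    ("c1", 0.95, 0.90), ("c2", 0.10, 0.05), ("c3", 0.55, 0.45),
    ("c4", 0.80, 0.85), ("c5", 0.40, 0.60),
]

LOW, HIGH = 0.35, 0.65   # assumed pruning band around the decision boundary

pruned, decided = [], {}
for cid, s1, s2 in cases:
    combined = 0.5 * (s1 + s2)   # stand-in for the boosted combination
    if LOW < combined < HIGH:
        pruned.append(cid)       # defer: send for independent work-up
    else:
        decided[cid] = "malignant" if combined >= HIGH else "benign"

print(pruned, decided)
```

    With these thresholds, the uncertain middle of the score range (here 2 of 5 cases) is deferred, while confident calls at either end are decided automatically.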

  20. Evaluation of operational, economic, and environmental performance of mixed and selective collection of municipal solid waste: Porto case study.

    PubMed

    Teixeira, Carlos A; Russo, Mário; Matos, Cristina; Bentes, Isabel

    2014-12-01

    This article describes an accurate methodology for the operational, economic, and environmental assessment of municipal solid waste collection. The proposed methodological tool uses key performance indicators to evaluate the operational and economic efficiency and performance of municipal solid waste collection practices. These key performance indicators are then used in life cycle inventories and life cycle impact assessment. Finally, the life cycle assessment environmental profiles provide the environmental assessment. We also report a successful application of this tool through a case study in the Portuguese city of Porto. Preliminary results demonstrate the applicability of the methodological tool to real cases. Some of the findings highlight significant differences between mixed and selective collection in average effective distance (2.14 km t(-1) vs 16.12 km t(-1)), fuel consumption (3.96 L t(-1) vs 15.37 L t(-1)), crew productivity (0.98 t h(-1) worker(-1) vs 0.23 t h(-1) worker(-1)), cost (45.90 € t(-1) vs 241.20 € t(-1)), and global warming impact (19.95 kg CO2eq t(-1) vs 57.47 kg CO2eq t(-1)). Preliminary results consistently indicate: (a) higher global performance of mixed collection as compared with selective collection; (b) dependency of collection performance, even in urban areas, on the waste generation rate and density; (c) the decline of selective collection performance with decreasing source-separated material density and recycling collection rate; and (d) that the main threats to collection route efficiency are extensive collection distances, high fuel consumption vehicles, and reduced crew productivity. © The Author(s) 2014.
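    Per-tonne key performance indicators of the kind reported in this study can be computed from route totals; the route figures below are made up, chosen only so that the resulting KPIs resemble the mixed-collection values quoted in the abstract.

```python
# Per-tonne collection KPIs computed from route totals. The route figures are
# invented illustrations, not data from the Porto case study.
def collection_kpis(distance_km, fuel_l, cost_eur, crew, hours, waste_t):
    return {
        "distance_km_per_t": distance_km / waste_t,
        "fuel_l_per_t": fuel_l / waste_t,
        "cost_eur_per_t": cost_eur / waste_t,
        "productivity_t_per_h_worker": waste_t / (hours * crew),
    }

# Hypothetical mixed-collection route: 107 km, 198 L fuel, 2295 EUR,
# 3 workers, 17 h, 50 t collected.
mixed = collection_kpis(distance_km=107, fuel_l=198, cost_eur=2295,
                        crew=3, hours=17, waste_t=50)
print(mixed)
```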

  1. Energy saving by using asymmetric aftbodies for merchant ships-design methodology, numerical simulation and validation

    NASA Astrophysics Data System (ADS)

    Dang, Jie; Chen, Hao

    2016-12-01

    The methodology and procedures are discussed for designing merchant ships to achieve fully-integrated and optimized hull-propulsion systems by using asymmetric aftbodies. Computational fluid dynamics (CFD) has been used to evaluate the powering performance through massive calculations with automatic deformation algorithms for the hull forms and the propeller blades. Comparative model tests of the designs against optimized symmetric hull forms have been carried out to verify the efficiency gain. More than 6% improvement in the propulsive efficiency of an oil tanker has been measured during the model tests. Dedicated sea trials show good agreement with the performance predicted from the test results.

  2. [Field investigations of the air pollution level of populated territories].

    PubMed

    Vinokurov, M V

    2014-01-01

    The assessment and management of the air quality of settlements is one of the priorities in the field of environmental protection. In air quality management, the backbone factor is the methodology for organizing, performing, and interpreting the data of field investigations. The present article analyzes the existing methodological approaches and the practical aspects of their application in organizing and performing field investigations aimed at confirming the adequacy of the boundaries of sanitary protection zones in old industrial regions, and at the hygienic evaluation of field data on air pollution levels.

  3. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    PubMed

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing the IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS and errors. Study methodology and bias were judged by the Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, or FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) on error recognition. Intraclass correlation coefficients were 0.73 to 0.92, depending on the procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p < 0.002) whether procedures were performed before or after training. Study methodology scored 15.5/19 by Medical Education Research Study Quality Instrument criteria, and the Quality Assessment of Diagnostic Accuracy Studies 2 showed low risk of bias. Video evaluations of AA, FA, and FAS procedures with IPS are unbiased, valid, and have potential for formative assessments of competency. Prognostic study, level II.

  4. Appraisal of systematic reviews on the management of peri-implant diseases with two methodological tools.

    PubMed

    Faggion, Clovis Mariano; Monje, Alberto; Wasiak, Jason

    2018-06-01

    This study aimed to evaluate and compare the performance of two methodological instruments to appraise systematic reviews and to identify potential disagreements of systematic review authors regarding risk of bias (RoB) evaluation of randomized controlled trials (RCTs) included in systematic reviews on peri-implant diseases. We searched Medline, Web of Science, Cochrane Library, PubMed Central, and Google Scholar for systematic reviews on peri-implant diseases published before July 11, 2017. Two authors independently evaluated the RoB and methodological quality of the systematic reviews by applying the Risk of Bias in Systematic Reviews (ROBIS) tool and Assessing the Methodological Quality of Systematic Reviews (AMSTAR) checklist, respectively. We assessed the RoB scores of the same RCTs published in different systematic reviews. Of the 32 systematic reviews identified, 23 reviews addressed the clinical topic of peri-implantitis. A high RoB was detected for most systematic reviews (n=25) using ROBIS, whilst five systematic reviews displayed low methodological quality by AMSTAR. Almost 30% of the RoB comparisons (for the same RCTs) had different RoB ratings across systematic reviews. The ROBIS tool appears to provide more conservative results than AMSTAR checklist. Considerable disagreement was found among systematic review authors rating the same RCT included in different systematic reviews. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Opportunities for Improved Management Efficiency of the Head Start Program: Performance Evaluation and High Risk Determination.

    ERIC Educational Resources Information Center

    Gall, Mary Sheila

    This report provides results of a review of the methodology used by the Office of Human Development Services (HDS) to measure Head Start performance and to control high risk Head Start agencies. The review was performed at HDS headquarters and regional locations nationwide. The review was based on a sample of 200 Head Start agencies and focused on…

  6. [The influence of intellectual capital in performance evaluation: a case-study in the hospital sector].

    PubMed

    Bonacim, Carlos Alberto Grespan; Araújo, Adriana Maria Procópio de

    2010-06-01

    This paper contributes to public institutions by adapting a performance evaluation tool originally developed for private companies. The objective is to demonstrate how the impact of an educational activity might be measured in the economic value added to society by a public university hospital. Besides the introductory and methodological sections and the final remarks, the paper is divided into four parts. First, the hospital sector is described, specifically in the context of public university hospitals. Then, the definitions, nature, and measurement of intellectual capital are presented, followed by a discussion of the main economic performance evaluation models. Finally, an adapted model is presented under the value-based management approach, considering adjustments to the return and the respective investment measures, and showing the impacts of intellectual capital management and educational activity on the economic result of those institutions. The study was developed using a methodology supported by bibliographical research, applying a comparative method in the descriptive modality. Lastly, the paper highlights the importance of accountability to society regarding the use of public resources and how this study can contribute to that end.

  7. The ACVD task force on canine atopic dermatitis (XVI): laboratory evaluation of dogs with atopic dermatitis with serum-based "allergy" tests.

    PubMed

    DeBoer, D J; Hillier, A

    2001-09-20

    Serum-based in vitro "allergy tests" are commercially available to veterinarians, and are widely used in diagnostic evaluation of a canine atopic patient. Following initial clinical diagnosis, panels of allergen-specific IgE measurements may be performed in an attempt to identify to which allergens the atopic dog is hypersensitive. Methodology for these tests varies by laboratory; few critical studies have evaluated performance of these tests, and current inter-laboratory standardization and quality control measures are inadequate. Other areas where information is critically limited include the usefulness of these tests in diagnosis of food allergy, the effect of extrinsic factors such as season of the year on results, and the influence of corticosteroid treatment on test results. Allergen-specific IgE serological tests are never completely sensitive, nor completely specific. There is only partial correlation between the serum tests and intradermal testing; however, the significance of discrepant results is unknown and unstudied. Variation in test methodologies along with the absence of universal standardization and reporting procedures have created confusion, varying study results, and an inability to compare between studies performed by different investigators.

  8. RECOVERY OF DNA FROM SOILS AND SEDIMENTS

    EPA Science Inventory

    Experiments were performed to evaluate the effectiveness of different methodological approaches for recovering DNA from soil and sediment bacterial communities; cell extraction followed by lysis and DNA recovery (cell extraction method) versus direct cell lysis and alkaline extra...

  9. Assessment and Evaluation.

    ERIC Educational Resources Information Center

    Bachman, Lyle F.

    1989-01-01

    Applied linguistics and psychometrics have influenced language testing, providing additional tools for investigating factors affecting language test performance and assuring measurement reliability. An examination is presented of language testing, including the theoretical issues involved, the methodological advances, language test development,…

  10. 76 FR 70768 - Biweekly Notice; Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-15

    ... perform a probabilistic risk evaluation using the guidance contained in NRC approved NEI [Nuclear Energy... Issue Summary 2003-18, Supplement 2, ``Use of Nuclear Energy Institute (NEI) 99-01, Methodology for...

  11. Seasat-A ASVT: Commercial demonstration experiments. Results analysis methodology for the Seasat-A case studies

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The SEASAT-A commercial demonstration program ASVT is described. The program consists of a set of experiments involving (1) the evaluation of a real-time data distribution system, the SEASAT-A user data distribution system, which provides near real-time dissemination of ocean-condition and weather data products from the U.S. Navy Fleet Numerical Weather Central to a selected set of commercial and industrial users, and (2) case studies, performed by commercial and industrial users, using the data gathered by SEASAT-A during its operational life. The impact of the SEASAT-A data on business operations is evaluated by the commercial and industrial users. The approach followed in performing the case studies, and the methodology used in analyzing and integrating the case study results to estimate the actual and potential economic benefits of improved ocean-condition and weather forecast data, are described.

  12. Accuracy in planar cutting of bones: an ISO-based evaluation.

    PubMed

    Cartiaux, Olivier; Paul, Laurent; Docquier, Pierre-Louis; Francq, Bernard G; Raucent, Benoît; Dombre, Etienne; Banse, Xavier

    2009-03-01

    Computer- and robot-assisted technologies are capable of improving the accuracy of planar cutting in orthopaedic surgery. This study is a first step toward formulating and validating a new evaluation methodology for planar bone cutting, based on the standards from the International Organization for Standardization. Our experimental test bed consisted of a purely geometrical model of the cutting process around a simulated bone. Cuts were performed at three levels of surgical assistance: unassisted, computer-assisted and robot-assisted. We measured three parameters of the standard ISO1101:2004: flatness, parallelism and location of the cut plane. The location was the most relevant parameter for assessing cutting errors. The three levels of assistance were easily distinguished using the location parameter. Our ISO methodology employs the location to obtain all information about translational and rotational cutting errors. Location may be used on any osseous structure to compare the performance of existing assistance technologies.
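    A simplified version of the location and flatness evaluation can be sketched from signed point-to-plane distances; the measured points are invented, and a strict ISO flatness would use a best-fit plane rather than the planned plane used here for brevity.

```python
# Simplified ISO 1101-style evaluation of a cut: signed distances of measured
# points from the planned cutting plane give a location error (mean offset)
# and a flatness estimate (width of the deviation band). Points are invented.
normal = (0.0, 0.0, 1.0)          # planned plane z = 5, unit normal assumed
point_on_plane = (0.0, 0.0, 5.0)

measured = [(0, 0, 5.2), (10, 0, 5.3), (0, 10, 5.1), (10, 10, 5.4)]

def signed_distance(p):
    """Signed distance of point p from the planned plane."""
    d = [p[i] - point_on_plane[i] for i in range(3)]
    return sum(d[i] * normal[i] for i in range(3))

dists = [signed_distance(p) for p in measured]
location = sum(dists) / len(dists)   # mean offset from the planned plane
flatness = max(dists) - min(dists)   # deviation band width

print(round(location, 3), round(flatness, 3))
```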

  13. 2008 Post-Election Voting Survey of Department of State Voting Assistance Officers: Statistical Methodology Report

    DTIC Science & Technology

    2009-08-01

    Mike Wilson, Westat, Inc. developed weights for this survey. Westat performed data collection and editing. DMDC's Survey Technology Branch, under... STATISTICAL METHODOLOGY REPORT Executive Summary The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA), 42 USC 1973ff, permits members of... citizens covered by UOCAVA, (2) to assess the impact of the FVAP's efforts to simplify and ease the process of voting absentee, (3) to evaluate other

  14. Multiscale Thermo-Mechanical Design and Analysis of High Frequency and High Power Vacuum Electron Devices

    NASA Astrophysics Data System (ADS)

    Gamzina, Diana

    2016-03-01

    A methodology for performing thermo-mechanical design and analysis of high frequency and high average power vacuum electron devices is presented. This methodology results in a "first-pass" engineering design directly ready for manufacturing. The methodology includes establishment of thermal and mechanical boundary conditions, evaluation of convective film heat transfer coefficients, identification of material options, evaluation of temperature and stress field distributions, assessment of microscale effects on the stress state of the material, and fatigue analysis. The feature size of vacuum electron devices operating in the high frequency regime of 100 GHz to 1 THz is comparable to the microstructure of the materials employed for their fabrication. As a result, the thermo-mechanical performance of a device is affected by the local material microstructure. Such multiscale effects on the stress state are considered in the range of scales from about 10 microns up to a few millimeters. The design and analysis methodology is demonstrated on three separate microwave devices: a 95 GHz 10 kW cw sheet beam klystron, a 263 GHz 50 W long pulse wide-bandwidth sheet beam travelling wave tube, and a 346 GHz 1 W cw backward wave oscillator.

  15. Evaluation Of The Diagnostic Performance Of A Multimedia Medical Communications System.

    NASA Astrophysics Data System (ADS)

    Robertson, John G.; Coristine, Marjorie; Goldberg, Morris; Beeton, Carolyn; Belanger, Garry; Tombaugh, Jo W.; Hickey, Nancy M.; Millward, Steven F.; Davis, Michael; Whittingham, David

    1989-05-01

    The central concern of radiologists when evaluating a Picture Archiving and Communication System (PACS) is the diagnostic performance of digital images compared to the original analog versions of the same images. Considerable work has been done comparing the ROC curves of various types of digital systems to the corresponding analog systems for the detection of specific phantoms or diseases. Although such studies may inform radiologists that, for a specific lesion, a digital system performs as well as the analog system, they tell radiologists very little about the impact of a digital system on diagnostic performance in the general practice of radiology. We describe in this paper an alternative method for evaluating the diagnostic performance of a digital system and a preliminary experiment we conducted to test the methodology.

  16. Generics Pricing: The Greek Paradox.

    PubMed

    Karafyllis, Ioannis; Variti, Lamprini

    2017-01-01

    This paper explains and develops a methodological framework to help evaluate the performance of generic pharmaceutical policies and the correct evaluation of generics sales. To date, the erroneous recording of generics has hindered proper pricing and their penetration in the Greek market. This makes Greece an outlier in every study or comparison referred to in papers or reports.

  17. Assessment by Employers of Newly Graduated Civil Engineers from the Islamic University of Gaza

    ERIC Educational Resources Information Center

    Enshassi, Adnan; Hassouna, Ahmed

    2005-01-01

    The evaluation process is very important to identify and recognize the strengths and the weaknesses of graduated students. The purpose of this paper is to evaluate the performance of the newly graduated civil engineers from the Islamic University of Gaza in Palestine. The methodology was based on questionnaires and informal interview. The…

  18. Formative Evaluation of an Experimental BE/E [Basic Electricity and Electronics] Program. Report No. 9-75.

    ERIC Educational Resources Information Center

    Fishburne, R. P., Jr.; Mims, Diane M.

    An experimental Basic Electricity and Electronics course (BE/E) utilizing a lock-step, instructor presentation methodology was developed and evaluated at the Service School Command, Great Lakes. The study, directed toward the training of lower mental group, school nonqualified personnel, investigated comparative data on test performance, attitude,…

  19. Medicare program; description of the Health Care Financing Administration's evaluation methodology for the Peer Review Organization 5th Scope of Work contracts--HCFA. General notice with comment period.

    PubMed

    1997-07-02

    This notice describes how HCFA intends to evaluate the Peer Review Organizations (PROs) for quality improvement activities, under their 5th Scope of Work (SOW) contracts, for efficiency and effectiveness in accordance with the Social Security Act. In accordance with the provisions of the Government Performance and Results Act of 1993, the 5th SOW contracts with the PROs are performance-based contracts.

  20. GT-CATS: Tracking Operator Activities in Complex Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Mitchell, Christine M.; Palmer, Everett A.

    1999-01-01

    Human operators of complex dynamic systems can experience difficulties supervising advanced control automation. One remedy is to develop intelligent aiding systems that can provide operators with context-sensitive advice and reminders. The research reported herein proposes, implements, and evaluates a methodology for activity tracking, a form of intent inferencing that can supply the knowledge required for an intelligent aid by constructing and maintaining a representation of operator activities in real time. The methodology was implemented in the Georgia Tech Crew Activity Tracking System (GT-CATS), which predicts and interprets the actions performed by Boeing 757/767 pilots navigating using autopilot flight modes. This report first describes research on intent inferencing and complex modes of automation. It then provides a detailed description of the GT-CATS methodology, knowledge structures, and processing scheme. The results of an experimental evaluation using airline pilots are given. The results show that GT-CATS was effective in predicting and interpreting pilot actions in real time.

  1. Congenital Heart Surgery Case Mix Across North American Centers and Impact on Performance Assessment.

    PubMed

    Pasquali, Sara K; Wallace, Amelia S; Gaynor, J William; Jacobs, Marshall L; O'Brien, Sean M; Hill, Kevin D; Gaies, Michael G; Romano, Jennifer C; Shahian, David M; Mayer, John E; Jacobs, Jeffrey P

    2016-11-01

    Performance assessment in congenital heart surgery is challenging due to the wide heterogeneity of disease. We describe current case mix across centers, evaluate methodology inclusive of all cardiac operations versus the more homogeneous subset of Society of Thoracic Surgeons benchmark operations, and describe implications regarding performance assessment. Centers (n = 119) participating in the Society of Thoracic Surgeons Congenital Heart Surgery Database (2010 through 2014) were included. Index operation type and frequency across centers were described. Center performance (risk-adjusted operative mortality) was evaluated and classified when including the benchmark versus all eligible operations. Overall, 207 types of operations were performed during the study period (112,140 total cases). Few operations were performed across all centers; only 25% were performed at least once by 75% or more of centers. There was 7.9-fold variation across centers in the proportion of total cases comprising high-complexity cases (STAT 5). In contrast, the benchmark operations made up 36% of cases, and all but 2 were performed by at least 90% of centers. When evaluating performance based on benchmark versus all operations, 15% of centers changed performance classification; 85% remained unchanged. Benchmark versus all operation methodology was associated with lower power, with 35% versus 78% of centers meeting sample size thresholds. There is wide variation in congenital heart surgery case mix across centers. Metrics based on benchmark versus all operations are associated with strengths (less heterogeneity) and weaknesses (lower power), and lead to differing performance classification for some centers. These findings have implications for ongoing efforts to optimize performance assessment, including choice of target population and appropriate interpretation of reported metrics. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
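The risk-adjusted operative mortality metric used for performance classification above is commonly summarized as an observed-to-expected (O/E) ratio. As a minimal illustration (the STS risk model itself is not reproduced here; all case counts and predicted risks below are hypothetical), the computation might look like:

```python
def observed_to_expected(outcomes, predicted_risks):
    """Risk-adjusted mortality as an observed-to-expected (O/E) ratio:
    observed deaths divided by the sum of case-level predicted risks."""
    observed = sum(outcomes)           # 1 = death, 0 = survival
    expected = sum(predicted_risks)    # model-predicted mortality per case
    return observed / expected

# Hypothetical center: 3 deaths in 8 cases with model-predicted risks
oe = observed_to_expected(
    outcomes=[0, 1, 0, 0, 1, 0, 1, 0],
    predicted_risks=[0.02, 0.40, 0.05, 0.10, 0.30, 0.08, 0.55, 0.15],
)
# O/E > 1 suggests worse-than-expected performance for the case mix
```

The power limitation discussed in the abstract arises because, for small benchmark-only case volumes, the confidence interval around such a ratio is too wide to classify a center reliably.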

  2. Performance assessment methodology and preliminary results for low-level radioactive waste disposal in Taiwan.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Bill Walter; Chang, Fu-lin; Mattie, Patrick D.

    2006-02-01

    Sandia National Laboratories (SNL) and Taiwan's Institute for Nuclear Energy Research (INER) have teamed together to evaluate several candidate sites for Low-Level Radioactive Waste (LLW) disposal in Taiwan. Taiwan currently has three nuclear power plants, with another under construction. Taiwan also has a research reactor, as well as medical and industrial wastes to contend with. Eventually the reactors will be decommissioned. Operational and decommissioning wastes will need to be disposed in a licensed disposal facility starting in 2014. Taiwan has adopted regulations similar to the US Nuclear Regulatory Commission's (NRC's) low-level radioactive waste rules (10 CFR 61) to govern the disposal of LLW. Taiwan has proposed several potential sites for the final disposal of LLW that is now in temporary storage on Lanyu Island and on-site at operating nuclear power plants, and for waste generated in the future through 2045. The planned final disposal facility will have a capacity of approximately 966,000 55-gallon drums. Taiwan is in the process of evaluating the best candidate site to pursue for licensing. Among these proposed sites there are basically two disposal concepts: shallow land burial and cavern disposal. A representative potential site for shallow land burial is located on a small island in the Taiwan Strait with basalt bedrock and interbedded sedimentary rocks. An engineered cover system would be constructed to limit infiltration for shallow land burial. A representative potential site for cavern disposal is located along the southeastern coast of Taiwan in a tunnel system that would be about 500 to 800 m below the surface. Bedrock at this site consists of argillite and meta-sedimentary rocks. Performance assessment analyses will be performed to evaluate future performance of the facility and the potential dose/risk to exposed populations. Preliminary performance assessment analyses will be used in the site-selection process and to aid in design of the disposal system. Final performance assessment analyses will be used in the regulatory process of licensing a site. The SNL/INER team has developed a performance assessment methodology that is used to simulate processes associated with the potential release of radionuclides to evaluate these sites. The following software codes are utilized in the performance assessment methodology: GoldSim (to implement a probabilistic analysis that explicitly addresses uncertainties); the NRC's Breach, Leach, and Transport -- Multiple Species (BLT-MS) code (to simulate waste-container degradation, waste-form leaching, and transport through the host rock); the Finite Element Heat and Mass Transfer (FEHM) code (to simulate groundwater flow and estimate flow velocities); the Hydrologic Evaluation of Landfill Performance (HELP) code (to evaluate infiltration through the disposal cover); the AMBER code (to evaluate human health exposures); and the NRC's Disposal Unit Source Term -- Multiple Species (DUST-MS) code (to screen applicable radionuclides). Preliminary results of the evaluations of the two disposal concept sites are presented.

  3. Are Funny Groups Good at Solving Problems? A Methodological Evaluation and Some Preliminary Results.

    ERIC Educational Resources Information Center

    Pollio, Howard R.; Bainum, Charlene Kubo

    1983-01-01

    Observed college students (N=195), divided according to sex and measures of wittiness, to determine the effects of humor on problem solving in groups. Results showed that group composition was not a crucial factor in problem-solving performance, but humorous group interaction was, and it did not interfere with ongoing task performance. (LLL)

  4. Using Differential Item Functioning Procedures to Explore Sources of Item Difficulty and Group Performance Characteristics.

    ERIC Educational Resources Information Center

    Scheuneman, Janice Dowd; Gerritz, Kalle

    1990-01-01

    Differential item functioning (DIF) methodology for revealing sources of item difficulty and performance characteristics of different groups was explored. A total of 150 Scholastic Aptitude Test items and 132 Graduate Record Examination general test items were analyzed. DIF was evaluated for males and females and Blacks and Whites. (SLD)

  5. Measuring Longitudinal Student Performance on Student Learning Outcomes in Sustainability Education

    ERIC Educational Resources Information Center

    Jarchow, Meghann E.; Formisano, Paul; Nordyke, Shane; Sayre, Matthew

    2018-01-01

    Purpose: The purpose of this paper is to describe the student learning outcomes (SLOs) for a sustainability major, evaluate faculty incorporation of the SLOs into the courses in the sustainability major curriculum and measure student performance on the SLOs from entry into the major to the senior capstone course. Design/methodology/approach:…

  6. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
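The Monte Carlo baseline against which the FPE results are compared can be sketched in miniature. The following is an illustrative sketch only, not the authors' solver: it propagates an uncertain Manning roughness coefficient through Manning's steady-flow formula (a stand-in for the full unsteady Saint-Venant dynamics) and reports the ensemble mean and variance of velocity; all numerical values are hypothetical.

```python
import random
import statistics

def manning_velocity(n, hydraulic_radius=1.0, slope=0.001):
    """Steady-flow velocity from Manning's formula, V = (1/n) R^(2/3) S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def monte_carlo_ensemble(n_mean=0.03, n_sd=0.005, samples=20000, seed=42):
    """Ensemble mean and variance of velocity under an uncertain roughness n."""
    rng = random.Random(seed)
    velocities = []
    for _ in range(samples):
        n = max(rng.gauss(n_mean, n_sd), 1e-4)  # truncate to keep n positive
        velocities.append(manning_velocity(n))
    return statistics.mean(velocities), statistics.pvariance(velocities)

mean_v, var_v = monte_carlo_ensemble()
```

The FPE approach replaces the many samples of such a loop with a single deterministic solve for the evolving probability density, which is where its computational advantage comes from.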

  7. Slow crack growth test method for polyethylene gas pipes. Volume 1. Topical report, December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leis, B.; Ahmad, J.; Forte, T.

    1992-12-01

    In spite of the excellent performance record of polyethylene (PE) pipes used for gas distribution, a small number of leaks occur in distribution systems each year because of the slow growth of cracks through pipe walls. The Slow Crack Growth (SCG) test has been developed as a key element in a methodology for assessing the ability of polyethylene gas distribution systems to resist such leaks. This topical report describes work conducted in the first part of the research, directed at the initial development of the SCG test, including a critical evaluation of the applicability of the SCG test as an element in a PE gas pipe system performance methodology. Results of extensive experiments and analysis are reported. The results show that the SCG test should be very useful in performance assessment.

  8. Knowledge management performance methodology regarding manufacturing organizations

    NASA Astrophysics Data System (ADS)

    Istrate, C.; Herghiligiu, I. V.

    2016-08-01

    The current business situation is extremely complicated. Businesses must adapt to change in order (a) to survive in increasingly dynamic markets and (b) to meet customers’ new requests for complex, customized and innovative products. In modern manufacturing organizations, a substantial improvement in the management of knowledge can be seen. This is because organizations have realized that knowledge, and an efficient management of knowledge, generates the highest value. It could even be said that manufacturing organizations were, and are, the biggest beneficiaries of KM science. Evaluation of knowledge management performance (KMP) in manufacturing organizations can be considered extremely important because, without measuring it, organizations are unable to properly assess (a) which goals, targets and activities should be continued, (b) what must be improved and (c) what must be completed. A proper KM will therefore generate multiple competitive advantages for organizations. This paper presents a methodological framework addressing the importance of KMP in manufacturing organizations. The framework was developed using bibliographical research and a panel of specialists as research methods. The purpose of this paper is to improve the evaluation process of KMP and to provide a viable tool for managers of manufacturing organizations.

  9. Quantification of groundwater recharge in urban environments.

    PubMed

    Tubau, Isabel; Vázquez-Suñé, Enric; Carrera, Jesús; Valhondo, Cristina; Criollo, Rotman

    2017-08-15

    Groundwater management in urban areas requires detailed knowledge of the hydrogeological system as well as adequate tools for predicting the amount of groundwater and the evolution of water quality. In that context, a key difference between urban and natural areas lies in recharge evaluation. A large number of studies evaluating recharge in urban areas have been published since the 1990s, but no standard methodology has emerged. Most of these methods show that recharge rates are generally higher in urban settings than in natural settings. Methods such as mixing ratios or groundwater modeling can be used to better estimate the relative importance of different sources of recharge and may prove to be good tools for total recharge evaluation. However, accurate evaluation of this input is difficult. The objective is to present a methodology to help overcome those difficulties, which allows us to quantify the variability in space and time of the recharge into aquifers in urban areas. Recharge calculations were initially performed by defining and applying analytical equations, and validation was assessed based on groundwater flow and solute transport modeling. This methodology is applicable to complex systems because it considers the temporal variability of all water sources. It allows managers of urban groundwater to evaluate the relative contribution of different recharge sources at a city scale by considering quantity and quality factors. The methodology is applied to the assessment of recharge sources in the Barcelona city aquifers. Copyright © 2017 Elsevier B.V. All rights reserved.
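The mixing-ratio approach mentioned above can be illustrated for the simplest, two-end-member case using a conservative tracer. This is a generic sketch, not the paper's multi-source formulation; the end-member names and concentrations are hypothetical.

```python
def mixing_fractions(c_mix, c_end1, c_end2):
    """Two-end-member mixing: fraction of end-member 1 in the mixed sample,
    inferred from a conservative tracer concentration (e.g. chloride)."""
    f1 = (c_mix - c_end2) / (c_end1 - c_end2)
    return f1, 1.0 - f1

# Hypothetical chloride concentrations (mg/L): sewage leakage vs. water mains
f_sewage, f_mains = mixing_fractions(c_mix=120.0, c_end1=200.0, c_end2=40.0)
# f_sewage = 0.5, f_mains = 0.5
```

With more than two recharge sources, one tracer is no longer enough and the fractions are obtained by solving a small linear system over several tracers, which is essentially what mixing-ratio codes automate.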

  10. Seismic Vulnerability Evaluations Within The Structural And Functional Survey Activities Of The COM Bases In Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuccaro, G.; Cacace, F.; Albanese, V.

    The paper describes technical and functional surveys of COM buildings (Mixed Operative Centres). This activity started in 2005, with the contribution of both the Italian Civil Protection Department and the Regions involved. The project aims to evaluate the efficiency of COM buildings, checking not only structural, architectonic and functional characteristics but also paying attention to surrounding real-estate vulnerability, road networks, railways, harbours, airports, morphological and hydro-geological characteristics of the area, hazardous activities, etc. The first survey was performed in eastern Sicily, before the European Civil Protection Exercise 'EUROSOT 2005'. Then, starting in 2006, a new survey campaign was carried out in the Abruzzo, Molise, Calabria and Puglia Regions. The most important issue of the activity was the vulnerability assessment. This paper therefore deals with a more refined vulnerability evaluation technique by means of the SAVE methodology, developed in the 1st task of the SAVE project within the GNDT-DPC programme 2000-2002 (Zuccaro, 2005); the SAVE methodology has already been successfully employed in previous studies (i.e. the school buildings intervention programme at national scale; the list of strategic public buildings in Campania, Sicilia and Basilicata). In this paper, data elaborated by the SAVE methodology are compared with expert evaluations derived from direct inspections of COM buildings. This represents a useful exercise for the improvement either of the survey forms or of the methodology for the quick assessment of vulnerability.

  11. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated with the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  12. Evaluation of Aeroservoelastic Effects on Flutter

    NASA Technical Reports Server (NTRS)

    Nagaraja, K. S.; Felt, Larry R.; Kraft, Raymond

    1998-01-01

    This report presents work performed by The Boeing Company to satisfy the deliverable "Evaluation of Aeroservoelastic Effects on Symmetric Flutter" for Subtask 7 of Reference 1. The objective of this report is to incorporate the improved methods for studying the effects of a closed-loop control system on the aeroservoelastic behavior of the airplane planned under the NASA HSR Technical Integration Task 20 work. A preliminary evaluation of the effect of the existing pitch control laws on symmetric flutter of the TCA configuration was also addressed. The goal is to develop an improved modeling methodology and perform design studies that account for aero-structures-systems interaction effects.

  13. Implementation and Evaluation of Multiple Adaptive Control Technologies for a Generic Transport Aircraft Simulation

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.; Nguyen, Nhan T.; Krishakumar, Kalmanje S.

    2010-01-01

    Presented here is an evaluation of multiple adaptive control technologies for a generic transport aircraft simulation. For this study, seven model reference adaptive control (MRAC) based technologies were considered. Each technology was integrated into an identical dynamic-inversion control architecture and tuned using a methodology based on metrics and specific design requirements. Simulation tests were then performed to evaluate each technology's sensitivity to time delay, flight condition, model uncertainty, and artificially induced cross-coupling. The resulting robustness and performance characteristics were used to identify potential strengths, weaknesses, and integration challenges of the individual adaptive control technologies.

  14. An evolving-requirements technology assessment process for advanced propulsion concepts

    NASA Astrophysics Data System (ADS)

    McClure, Erin Kathleen

    The following dissertation investigates the development of a methodology suitable for the evaluation of advanced propulsion concepts. At early stages of development, both the future performance of these concepts and their requirements are highly uncertain, making it difficult to forecast their future value. Developing advanced propulsion concepts requires a huge investment of resources. The methodology was developed to enhance decision-makers' understanding of the concepts, so that they could mitigate the risks associated with developing such concepts. A systematic methodology to identify potential advanced propulsion concepts and assess their robustness is necessary to reduce the risk of developing such concepts. Existing advanced design methodologies have evaluated the robustness of technologies or concepts to variations in requirements, but they are not suitable for evaluating a large number of dissimilar concepts. Variations in requirements have been shown to impact the development of advanced propulsion concepts, and any method designed to evaluate these concepts must incorporate the possible variations of the requirements into the assessment. In order to do so, a methodology was formulated to account for two aspects of the problem. First, it had to systematically identify a probabilistic distribution for the future requirements. Such a distribution allows decision-makers to quantify the uncertainty introduced by variations in requirements. Second, the methodology must be able to assess the robustness of the propulsion concepts as a function of that distribution. This dissertation describes these enabling elements in depth and proceeds to synthesize them into a new method, the Evolving Requirements Technology Assessment (ERTA).
As a proof of concept, the ERTA method was used to evaluate and compare advanced propulsion systems that will be capable of powering a hurricane tracking, High Altitude, Long Endurance (HALE) unmanned aerial vehicle (UAV). The use of the ERTA methodology to assess HALE UAV propulsion concepts demonstrated that potential variations in requirements do significantly impact the assessment and selection of propulsion concepts. The proof of concept also demonstrated that traditional forecasting techniques, such as the cross impact analysis, could be used to forecast the requirements for advanced propulsion concepts probabilistically. "Fitness", a measure of relative goodness, was used to evaluate the concepts. Finally, stochastic optimizations were used to evaluate the propulsion concepts across the range of requirement sets that were considered.

  15. Systematic review of guidelines for management of intermediate hepatocellular carcinoma using the Appraisal of Guidelines Research and Evaluation II instrument.

    PubMed

    Holvoet, Tom; Raevens, Sarah; Vandewynckel, Yves-Paul; Van Biesen, Wim; Geboes, Karen; Van Vlierberghe, Hans

    2015-10-01

    Hepatocellular carcinoma is the second leading cause of cancer-related mortality worldwide. Multiple guidelines have been developed to assist clinicians in its management. We aimed to explore methodological quality of these guidelines focusing on treatment of intermediate hepatocellular carcinoma by transarterial chemoembolization. A systematic search was performed for Clinical Practice Guidelines and Consensus statements for hepatocellular carcinoma management. Guideline quality was appraised using the Appraisal of Guidelines Research and Evaluation II instrument, which rates guideline development processes across 6 domains: 'Scope and purpose', 'Stakeholder involvement', 'Rigour of development', 'Clarity of presentation', 'Applicability' and 'Editorial independence'. Thematic analysis of guidelines was performed to map differences in recommendations. Quality of 21 included guidelines varied widely, but was overall poor with only one guideline passing the 50% mark on all domains. Key recommendations as (contra)indications and technical aspects were inconsistent between guidelines. Aspects on side effects and health economics were mainly neglected. Methodological quality of guidelines on transarterial chemoembolization in hepatocellular carcinoma management is poor. This results in important discrepancies between guideline recommendations, creating confusion in clinical practice. Incorporation of the Appraisal of Guidelines Research and Evaluation II instrument in guideline development may improve quality of future guidelines by increasing focus on methodological aspects. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  16. Bayesian Inference on Proportional Elections

    PubMed Central

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional voting systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected, so traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian setting using Monte Carlo simulation, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
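The Monte Carlo step can be sketched as follows. This is an illustrative approximation, not the authors' R implementation: it draws vote shares from a Dirichlet posterior (via independent Gamma draws) and allocates seats with the D'Hondt highest-averages rule, which underlies the Brazilian proportional system; the poll counts and seat numbers below are hypothetical.

```python
import random

def dhondt(votes, seats):
    """Allocate seats by the D'Hondt highest-averages method."""
    counts = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (c + 1) for v, c in zip(votes, counts)]
        counts[quotients.index(max(quotients))] += 1
    return counts

def seat_probability(poll_counts, seats, party, draws=5000, seed=1):
    """P(party wins >= 1 seat) under a Dirichlet posterior on vote shares
    (uniform prior: alpha_i = poll_counts_i + 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        shares = [rng.gammavariate(c + 1, 1.0) for c in poll_counts]
        total = sum(shares)
        shares = [s / total for s in shares]  # one Dirichlet draw
        if dhondt(shares, seats)[party] >= 1:
            hits += 1
    return hits / draws

# Hypothetical poll: four parties, 8 seats; probability party 2 is seated
p = seat_probability(poll_counts=[450, 380, 120, 50], seats=8, party=2)
```

The real Brazilian allocation has additional rules (electoral quotient, thresholds), so this sketch only captures the proportional core of the problem.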

  17. Bayesian inference on proportional elections.

    PubMed

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional voting systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected, so traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian setting using Monte Carlo simulation, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.

  18. A new methodology for determining dispersion coefficient using ordinary and partial differential transport equations.

    PubMed

    Cho, Kyung Hwa; Lee, Seungwon; Ham, Young Sik; Hwang, Jin Hwan; Cha, Sung Min; Park, Yongeun; Kim, Joon Ha

    2009-01-01

    The present study proposes a methodology for determining the effective dispersion coefficient based on field measurements performed in Gwangju (GJ) Creek in South Korea, which is environmentally degraded by artificial interferences such as weirs and culverts. Many previous approaches to determining the dispersion coefficient were limited in application due to the complexity of, and artificial interferences in, natural streams. Therefore, the sequential combination of an N-Tank-In-Series (NTIS) model and an Advection-Dispersion-Reaction (ADR) model is proposed in this study for evaluating the dispersion process in a complex stream channel. A series of water quality data were intensively monitored in the field to determine the effective dispersion coefficient of E. coli on a rainy day. As a result, the suggested methodology reasonably estimates the dispersion coefficient for GJ Creek at 1.25 m(2)/s. The sequential combined method also provided Number of tanks-Velocity-Dispersion coefficient (NVD) curves for convenient evaluation of the dispersion coefficients of other rivers or streams. Compared with previous studies, the present methodology is quite general and simple for determining effective dispersion coefficients applicable to other rivers and streams.
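The link between a tanks-in-series description and an effective dispersion coefficient can be illustrated with the standard reactor-engineering approximation D ≈ uL/(2N), valid for large Peclet numbers (Pe = uL/D). This is a generic textbook relation, not the paper's NVD curves; the reach parameters below are hypothetical.

```python
def effective_dispersion(velocity, reach_length, n_tanks):
    """Effective longitudinal dispersion coefficient implied by an
    N-tanks-in-series model: D ≈ u·L / (2N), valid for weak dispersion
    (large Peclet number Pe = u·L/D)."""
    return velocity * reach_length / (2.0 * n_tanks)

# Hypothetical reach: u = 0.5 m/s, L = 2000 m, modeled as 400 mixed tanks
d_eff = effective_dispersion(0.5, 2000.0, 400)
# d_eff = 1.25 m^2/s
```

Read in the other direction, calibrating the number of tanks that reproduces an observed concentration response fixes the effective dispersion coefficient, which is the spirit of the NVD curves.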

  19. The Future Impact of Wind on BPA Power System Load Following and Regulation Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Lu, Shuai; McManus, Bart

    Wind power is growing at a very fast pace as an alternative generating resource. As the ratio of wind power to total system capacity increases, the impact of wind on various aspects of the system becomes significant. This paper presents a methodology to study the future impact of wind on BPA power system load following and regulation requirements. Existing methodologies for similar analyses include dispatch-model simulation and standard-deviation evaluation of load and wind data. The methodology proposed in this paper uses historical data and stochastic processes to simulate the load-balancing processes in the BPA power system. It mimics actual power system operations; the results are therefore close to reality, yet a study based on this methodology is convenient to perform. The capacity, ramp rate and ramp duration characteristics are extracted from the simulation results, and system load following and regulation capacity requirements are calculated accordingly. The ramp rate and ramp duration data obtained from the analysis can be used to evaluate generator response or maneuverability requirements and regulating units’ energy requirements, respectively.
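The separation of a net-load series into load-following and regulation components can be sketched as follows. This is a simplified illustration of the general idea (a moving-average decomposition plus a quantile-based capacity requirement), not BPA's actual procedure; the series and window are hypothetical.

```python
def regulation_requirement(net_load, window=3, quantile=0.95):
    """Split a net-load series into a slow 'load following' component
    (centered moving average) and a fast 'regulation' residual, then
    report the capacity requirement as a quantile of |residual|."""
    half = window // 2
    smooth = []
    for i in range(len(net_load)):
        lo, hi = max(0, i - half), min(len(net_load), i + half + 1)
        smooth.append(sum(net_load[lo:hi]) / (hi - lo))
    residual = [x - s for x, s in zip(net_load, smooth)]
    ranked = sorted(abs(r) for r in residual)
    idx = min(len(ranked) - 1, int(quantile * len(ranked)))
    return ranked[idx]

# Hypothetical net load (MW) sampled at a fixed interval
req = regulation_requirement([100, 103, 98, 110, 95, 101, 99, 104], window=3)
```

In a full study, ramp rate and ramp duration would be extracted from the same residual series, and the smoothing window would correspond to the boundary between the load-following and regulation time scales.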

  20. A new approach to subjectively assess quality of plenoptic content

    NASA Astrophysics Data System (ADS)

    Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj

    2016-09-01

    Plenoptic content is becoming increasingly popular owing to the availability of acquisition and display devices. Thanks to image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of consuming content enables new experiences and therefore introduces several challenges in terms of plenoptic data processing, transmission and, consequently, visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.

  1. An evaluation of the directed flow graph methodology

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Rajala, S. A.

    1984-01-01

    The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special-purpose image and signal processing hardware was evaluated. A special-purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region-labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real-time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special-purpose systems; the nature of the DGM formalism lends itself more readily to modeling networks of general-purpose processors.

  2. An Energy Storage Assessment: Using Optimal Control Strategies to Capture Multiple Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Jin, Chunlian; Balducci, Patrick J.

    2015-09-01

    This paper presents a methodology for evaluating the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. In the proposed method, at each hour a look-ahead optimization is first formulated and solved to determine the battery's base operating point. A minute-by-minute simulation is then performed to simulate actual battery operation. This methodology is used to assess energy storage alternatives in the Puget Sound Energy system. Different battery storage candidates are simulated for a period of one year to assess different value streams and overall benefits, as part of a financial feasibility evaluation of battery storage projects.
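The energy-arbitrage value stream, one of the services listed above, can be sketched with a deliberately simple heuristic. This is not the paper's look-ahead optimization: it merely charges during the cheapest hours and discharges during the most expensive ones, subject to energy and power limits; the prices and battery parameters are hypothetical.

```python
def arbitrage_schedule(prices, energy_mwh, power_mw, efficiency=0.85):
    """One-day price-arbitrage sketch: charge in the cheapest hours and
    discharge in the most expensive ones, limited by energy capacity.
    Returns (schedule, revenue); schedule[h] is MW (+discharge / -charge)."""
    k = int(energy_mwh // power_mw)  # hours of full-power charge/discharge
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = set(order[:k])
    discharge_hours = set(order[-k:]) if k else set()
    schedule = [0.0] * len(prices)
    revenue = 0.0
    for h, p in enumerate(prices):
        if h in charge_hours:
            schedule[h] = -power_mw
            revenue -= power_mw * p          # pay to charge
        elif h in discharge_hours:
            schedule[h] = power_mw * efficiency
            revenue += power_mw * efficiency * p  # earn on discharge
    return schedule, revenue

# Hypothetical 6-hour price strip ($/MWh), 2 MWh / 1 MW battery
schedule, revenue = arbitrage_schedule([20, 15, 30, 50, 45, 25],
                                       energy_mwh=2, power_mw=1)
```

A real look-ahead optimization would co-optimize this against the other services (balancing, deferral, outage mitigation), which is precisely why the paper's multi-service formulation is needed.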

  3. Methodology for worker neutron exposure evaluation in the PDCF facility design.

    PubMed

    Scherpelz, R I; Traub, R J; Pryor, K H

    2004-01-01

    A project headed by Washington Group International aims to design the Pit Disassembly and Conversion Facility (PDCF), which will convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine the shielding needed to keep facility workers from exceeding target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron-photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions, and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.

  4. Evaluation of an ontological resource for pharmacovigilance.

    PubMed

    Jaulent, Marie-Christine; Alecu, Iulian

    2009-01-01

    In this work, we present a methodology for evaluating an ontology designed in a previous study to describe adverse drug reactions. We evaluate it in terms of its fitness for grouping cases in pharmacovigilance. As the gold standard we use the Standardized MedDRA Queries (SMQs), developed manually to group terms representing similar medical conditions. We perform an automatic search in the ontology to retrieve concepts related to the medical conditions, and an optimal query is built for each medical condition. The evaluation relies on the comparison between the terms in the SMQ and the terms subsumed by the query, quantified by sensitivity and specificity. We applied this methodology to 24 SMQs and obtained a mean sensitivity of 0.82. This work validates the semantic resource and, in perspective, provides tools to maintain the ontology as the knowledge evolves.
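The sensitivity/specificity comparison between an SMQ term set and the terms subsumed by a query reduces to set arithmetic over the terminology. The function and term names below are illustrative, not taken from the paper:

```python
def sensitivity_specificity(smq_terms, query_terms, all_terms):
    """Compare the terms subsumed by an ontology query against a
    gold-standard SMQ term set within a fixed terminology."""
    smq, query = set(smq_terms), set(query_terms)
    tp = len(smq & query)                       # terms correctly retrieved
    fn = len(smq - query)                       # SMQ terms the query missed
    fp = len(query - smq)                       # extra terms the query pulled in
    tn = len(set(all_terms) - smq - query)      # terms correctly excluded
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```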

  5. Piloted Evaluation of an Integrated Methodology for Propulsion and Airframe Control Design

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.; Garg, Sanjay; Mattern, Duane L.; Ranaudo, Richard J.; Odonoghue, Dennis P.

    1994-01-01

    An integrated methodology for propulsion and airframe control has been developed and evaluated for a Short Take-Off Vertical Landing (STOVL) aircraft using a fixed base flight simulator at NASA Lewis Research Center. For this evaluation the flight simulator is configured for transition flight using a STOVL aircraft model, a full nonlinear turbofan engine model, simulated cockpit and displays, and pilot effectors. The paper provides a brief description of the simulation models, the flight simulation environment, the displays and symbology, the integrated control design, and the piloted tasks used for control design evaluation. In the simulation, the pilots successfully completed typical transition phase tasks such as combined constant deceleration with flight path tracking, and constant acceleration wave-off maneuvers. The pilot comments of the integrated system performance and the display symbology are discussed and analyzed to identify potential areas of improvement.

  6. Inlet design for high-speed propfans

    NASA Technical Reports Server (NTRS)

    Little, B. H., Jr.; Hinson, B. L.

    1982-01-01

    A two-part study was performed to design inlets for high-speed propfan installations. The first part was a parametric study to select promising inlet concepts. A wide range of inlet geometries was examined and evaluated, primarily on the basis of cruise thrust and fuel-burn performance. Two inlet concepts were then chosen for more detailed design studies: one appropriate to offset engine/gearbox arrangements and the other to in-line arrangements. In the second part of the study, inlet design points were chosen to optimize the net installed thrust, and detailed design of the two inlet configurations was performed. An analytical methodology was developed to account for propfan slipstream effects, transonic flow effects, and three-dimensional geometry effects. Using this methodology, low-drag cowls were designed for the two inlets.

  7. School Leadership Preparation and Development in Kenya: Evaluating Performance Impact and Return on Leadership Development Investment

    ERIC Educational Resources Information Center

    Asuga, Gladys; Eacott, Scott; Scevak, Jill

    2015-01-01

    Purpose: The purpose of this paper is to evaluate the quality of the current provision for school leadership in Kenya, the extent to which they have an impact on student outcomes and the return on school leadership preparation and development investment. Design/Methodology/Approach: The paper draws from educational leadership, management and…

  8. Comparison of Expert-Based and Empirical Evaluation Methodologies in the Case of a CBL Environment: The ''Orestis'' Experience

    ERIC Educational Resources Information Center

    Karoulis, Athanasis; Demetriadis, Stavros; Pombortsis, Andreas

    2006-01-01

    This paper compares several interface evaluation methods applied in the case of a computer based learning (CBL) environment, during a longitudinal study performed in three European countries, Greece, Germany, and Holland, and within the framework of an EC funded Leonardo da Vinci program. The paper firstly considers the particularities of the CBL…

  9. A Systematic Review of Economic Evaluation Methodologies Between Resource-Limited and Resource-Rich Countries: A Case of Rotavirus Vaccines.

    PubMed

    Thiboonboon, Kittiphong; Santatiwongchai, Benjarin; Chantarastapornchit, Varit; Rattanavipapong, Waranya; Teerawattananon, Yot

    2016-12-01

    For more than three decades, the number and influence of economic evaluations of healthcare interventions have been increasing and gaining attention at the policy level. However, concerns about the credibility of these studies exist, particularly for studies from low- and middle-income countries (LMICs). This analysis was performed to explore economic evaluations conducted in LMICs in terms of methodological variation, quality of reporting and evidence used for the analyses, and to compare the results with studies conducted in high-income countries (HICs). Rotavirus vaccine was selected as a case study because it is an intervention that many studies in both settings have explored. The search to identify individual studies on rotavirus vaccines was performed in March 2014 using MEDLINE and the National Health Service Economic Evaluation Database. Only full economic evaluations, comparing costs and outcomes of at least two alternatives, were included for review. Selected criteria were applied to assess methodological variation, quality of reporting and quality of evidence used. Eighty-five studies were included, consisting of 45 studies in HICs and 40 studies in LMICs. Seventy-five percent of the studies in LMICs were published by researchers from HICs. Compared with studies in HICs, the LMIC studies showed less methodological variety. In terms of the quality of reporting, LMICs had high adherence to technical criteria, but HICs ultimately proved to be better. The same trend applied to the quality of evidence used. Although the quality of economic evaluations in LMICs was not as high as those from HICs, it is of an acceptable level given the limitations that exist in these settings. However, the results of this study may not indicate that LMICs have developed their own research capacity in health economics, given that most of the LMIC studies were led by researchers from HICs. Putting more effort into fostering the development of both research infrastructure and capacity building, as well as encouraging local engagement in LMICs, is thus necessary.

  10. A stochastic approach for automatic generation of urban drainage systems.

    PubMed

    Möderl, M; Butler, D; Rauch, W

    2009-01-01

    Typically, the performance evaluation of newly developed methodologies is based on one or more case studies. The investigation of multiple real-world case studies is tedious and time consuming; moreover, extrapolating conclusions from individual investigations to a general basis is arguable and sometimes even wrong. In this article a stochastic approach is presented to evaluate newly developed methodologies on a broader basis. For this approach the Matlab tool "Case Study Generator" was developed, which automatically generates a variety of different virtual urban drainage systems using boundary conditions (e.g., length of the urban drainage system, slope of the catchment surface) as input. The layout of the sewer system is based on an adapted Galton-Watson branching process. The sub-catchments are allocated using a digital terrain model, and sewer system components are designed according to standard values. In total, 10,000 different virtual case studies of urban drainage systems are generated and simulated, and the simulation results are evaluated using a performance indicator for surface flooding. Comparison between results of the virtual case studies and two real-world case studies indicates the promise of the method. The novelty of the approach is that more general conclusions can be drawn, in contrast to traditional evaluations based on a few case studies.
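A Galton-Watson branching process for generating tree-shaped virtual sewer layouts can be sketched as follows; the offspring probabilities, depth limit, and function names are illustrative assumptions, not the paper's calibrated values:

```python
import random

def galton_watson_layout(p_children=(0.2, 0.5, 0.3), max_depth=6, seed=1):
    """Generate a tree-shaped virtual sewer layout: each node spawns
    0, 1, or 2 child pipes with the given probabilities."""
    rng = random.Random(seed)
    nodes, edges = [0], []          # node 0 is the outfall/root
    frontier, next_id, depth = [0], 1, 0
    while frontier and depth < max_depth:
        new_frontier = []
        for node in frontier:
            k = rng.choices([0, 1, 2], weights=p_children)[0]
            for _ in range(k):
                edges.append((node, next_id))   # pipe from parent to child
                nodes.append(next_id)
                new_frontier.append(next_id)
                next_id += 1
        frontier, depth = new_frontier, depth + 1
    return nodes, edges
```

Repeating this with different seeds and boundary conditions yields the kind of large ensemble of virtual case studies the paper describes; each generated tree then needs sub-catchments and pipe dimensions assigned before simulation.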

  11. Integrated design of the CSI evolutionary structure: A verification of the design methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Joshi, S. M.; Elliott, Kenny B.; Walz, J. E.

    1993-01-01

    One of the main objectives of the Controls-Structures Interaction (CSI) program is to develop and evaluate integrated controls-structures design methodology for flexible space structures. Thus far, integrated design methodologies for a class of flexible spacecraft, which require fine attitude pointing and vibration suppression with no payload articulation, have been extensively investigated. Various integrated design optimization approaches, such as single-objective and multi-objective optimization, have been implemented with an array of different objectives and constraints involving performance and cost measures such as total mass, actuator mass, steady-state pointing performance, transient performance, and control power. These studies have been performed using an integrated design software tool (the CSI-DESIGN CODE), which is under development by the CSI-ADM team at the NASA Langley Research Center. To date, all of these studies, irrespective of the type of integrated optimization posed or the objectives and constraints used, have indicated that integrated controls-structures design results in an overall spacecraft design that is considerably superior to designs obtained through a conventional sequential approach. Consequently, it is believed that validation of some of these results through fabrication and testing of a structure designed through an integrated design approach is warranted. The objective of this paper is to present and discuss the efforts taken thus far to validate the integrated design methodology.

  12. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    PubMed

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests used to evaluate the integrity of the ACL has been missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th, 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. The search yielded 110 hits, of which seven articles met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical test with the highest intrarater reliability (Cohen's k = 1.00); the Lachman test performed in the prone position is the test with the highest interrater reliability (Cohen's k = 0.81). The included studies were partly of low methodological quality. A meta-analysis could not be performed due to heterogeneity in study populations, reliability measures and methodological quality of the included studies. Systematic investigations of the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
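Cohen's kappa, the agreement statistic reported throughout this review, is observed agreement corrected for chance agreement. A generic sketch (not code from the review), taking two raters' categorical judgements, e.g. positive/negative Lachman test results:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed agreement: fraction of cases where the raters match
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected chance agreement from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (po - pe) / (1 - pe)
```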

  13. Tool for Analyzing Station Characteristics (TASC) : evaluating the performance of intermodal connectivity.

    DOT National Transportation Integrated Search

    2012-08-01

    In previous phases of this research, we developed a methodology for surveying transit riders about their levels of satisfaction and how : important they find various attributes at transit stops and stations. We applied an Importance-Satisfaction Anal...

  14. Assessment of the effectiveness of wrong way driving countermeasures and mitigation methods.

    DOT National Transportation Integrated Search

    2014-12-01

    This report describes the methodology and results of tasks performed to evaluate the effectiveness of : wrong way driving countermeasures and mitigation methods. Researchers reviewed the state of the practice : regarding wrong way driving in the Unit...

  15. Capital Planning and Investment Control (CPIC) for the Management of Information Technology Investments

    EPA Pesticide Factsheets

    Capital Planning and Investment Control (CPIC) is the Information Technology (IT) governance and management methodology in use at EPA for selecting, controlling and evaluating the performance of EPA IT investments throughout the full lifecycle.

  16. Finite difference methods for reducing numerical diffusion in TEACH-type calculations. [Teaching Elliptic Axisymmetric Characteristics Heuristically

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.

    1985-01-01

    A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Finite Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Elliptic Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It was found that the BSUDS scheme performed better with respect to these criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.

  17. Single point aerosol sampling: evaluation of mixing and probe performance in a nuclear stack.

    PubMed

    Rodgers, J C; Fairchild, C I; Wood, G O; Ortiz, C A; Muyshondt, A; McFarland, A R

    1996-01-01

    Alternative reference methodologies have been developed for sampling radionuclides from stacks and ducts, which differ from the methods previously required by the United States Environmental Protection Agency. These alternative reference methodologies have recently been approved by the U.S. EPA for use in lieu of the current standard techniques. The standard EPA methods are prescriptive in the selection of sampling locations and in the design of sampling probes, whereas the alternative reference methodologies are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of some aspects of the alternative reference methodologies. Coefficients of variation of velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that the numerical criteria placed upon the coefficients of variation by the alternative reference methodologies were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10-micron aerodynamic diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (the ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L min(-1) (4 cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the existing EPA standard requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the alternative reference methodology criteria; the isokinetic probes would not.

  18. Evaluating supplier quality performance using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah

    2013-09-01

    This paper elaborates the importance of evaluating supplier quality performance for an organization. Supplier quality performance evaluation reflects the actual performance of the supplier as exhibited at the customer's end. It is critical in enabling the organization to determine areas for improvement and thereafter work with the supplier to close the gaps. The customer's success partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support and customer service are categorized as the main factors contributing to a supplier's quality performance. Eighteen suppliers manufacturing automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received identical ratings, which led to tied rankings. The Analytical Hierarchy Process (AHP), a user-friendly decision-making tool for complex, multi-criteria problems, was then used to evaluate the suppliers' quality performance as an alternative to the weighted-point system, and the consistency ratio was checked for the criteria and sub-criteria. The final AHP results contained no tied ratings and therefore yielded a better decision-making methodology than the weighted-point rating system.
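The AHP steps named above, deriving priority weights from a pairwise comparison matrix and then checking Saaty's consistency ratio, can be sketched with power iteration; the example matrix in the test is illustrative, not the supplier data from the study:

```python
def ahp_priorities(M, iters=100):
    """Priority weights from a pairwise comparison matrix via power
    iteration, plus Saaty's consistency ratio (CR = CI / RI)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]           # normalized principal eigenvector
    # estimate the principal eigenvalue lambda_max
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    # Saaty's random index values for small matrices (n <= 5 here)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    cr = ci / RI if RI else 0.0
    return w, cr
```

A CR below about 0.10 is conventionally taken as acceptably consistent; a perfectly consistent matrix (every entry M[i][j] = w[i]/w[j]) gives CR = 0.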

  19. Integrated Job Skills and Reading Skills Training System. Final Report.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.; And Others

    An exploratory study was conducted to evaluate the feasibility of determining the reading demands of navy jobs, using a methodology that identifies both the type of reading tasks performed on the job and the level of general reading skill required to perform that set of reading tasks. Next, a survey was made of the navy's job skills training…

  20. Degradation of ticarcillin by subcritical water oxidation method: Application of response surface methodology and artificial neural network modeling.

    PubMed

    Yabalak, Erdal

    2018-05-18

    This study was performed to investigate the mineralization of ticarcillin in an artificially prepared aqueous solution representing ticarcillin-contaminated waters, which constitute a serious problem for human health. Removal of 81.99% of total organic carbon, 79.65% of chemical oxygen demand, and 94.35% of ticarcillin was achieved using the eco-friendly, time-saving, powerful and easy-to-apply subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to response surface methodology, was applied to design the degradation experiments, optimize the method, and evaluate the effects of the system variables (temperature, hydrogen peroxide concentration, and treatment time) on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were utilized to evaluate the reliability of the models: F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its prediction and optimization performance was statistically examined and compared with that of the central composite design.
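A central composite design like the one used here consists of 2^k factorial corners, 2k axial (star) points, and replicated centre points in coded units. This generic sketch, with illustrative defaults (not the study's actual design table), generates the coded points for k factors:

```python
from itertools import product

def central_composite_design(k=3, alpha=None, n_center=1):
    """Coded design points of a central composite design for k factors:
    2^k factorial corners at +/-1, 2k axial points at +/-alpha, and
    n_center replicated centre points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25        # alpha for a rotatable CCD
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers
```

For the three factors here (temperature, H2O2 concentration, treatment time), each coded point is mapped back to physical units before running the degradation experiment.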

  1. A critical investigation of post-liquefaction strength and steady-state flow behavior of saturated soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jong, H.L.

    1988-01-01

    The first objective was to perform a critical evaluation of the recently proposed steady-state analysis methodology for evaluating the post-liquefaction stability of potentially liquefiable soils. This analysis procedure is based on a direct comparison between the in-situ undrained residual (steady-state) strength of soils in an embankment or foundation and the driving shear stresses in these soils. A laboratory investigation was performed to investigate factors affecting steady-state strengths, and also to evaluate the validity of assumptions involved in correcting the results of laboratory steady-state strength tests on undisturbed samples for the effects of sampling disturbance in order to estimate in-situ strengths. Next, a field case study was performed using the steady-state analysis and testing methodologies to analyze Lower San Fernando Dam, which suffered a liquefaction-induced slope failure as a result of the 1971 earthquake. This leads to the second objective, which was to extend the Lower San Fernando Dam case study to consider the analysis methods used to evaluate the likelihood of triggering liquefaction during an earthquake. Finally, a number of high-quality undisturbed samples were subjected to undrained cyclic testing in order to repeat an earlier (1973) study of the use of cyclic test data to predict liquefaction behavior at Lower San Fernando Dam.

  2. Development of a test protocol for evaluating EVA glove performance

    NASA Technical Reports Server (NTRS)

    Hinman, Elaine M.

    1992-01-01

    Testing gloved hand performance involves work from several disciplines. Evaluations performed in the course of re-enabling a disabled hand, designing a robotic end effector or master controller, or designing a hard suit have all yielded relevant information and, in most cases, produced performance test methods. Typically, these test methods have been oriented primarily toward their parent discipline. For space operations, a comparative test that could quantify both pressure glove and end effector performance would be useful in dividing tasks between humans and robots. Such a test would have to rely heavily on sensored measurement, as opposed to questionnaires, to produce relevant data; at some point, however, human preference would have to be taken into account. This paper presents a methodology for evaluating gloved hand performance that attempts to respond to these issues. Glove testing of a prototype glove design using this method is described.

  3. Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges.

    PubMed

    Chatterji, Madhabi

    2016-12-01

    This paper explores avenues for navigating the evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences about an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven major sources of complexity that typify CSPs and threaten the assumptions of textbook-recommended experimental designs for impact evaluations. Theoretically supported, alternative methodological strategies are discussed for navigating these assumptions and countering the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include: sequential refinement of the evaluation design through systems thinking and systems-informed logic modeling; and use of extended-term, mixed-methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question-framing, and the selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Performance evaluation of automated segmentation software on optical coherence tomography volume data

    PubMed Central

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.

    2016-01-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for, and/or evaluated using, specific datasets that do not reflect the complexities of the retinal features widely observed in clinical settings. In addition, no appropriate OCT dataset with ground truth exists that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison with manual labelings specific to each study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue in OCT images. It also evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849

  5. A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations

    PubMed Central

    Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary

    2016-01-01

    There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
    The cells reside on 10 different types of biomaterial scaffolds and are stained for actin and nucleus, yielding 128,460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate a good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699
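The Dice similarity index used above to score spatial overlap is twice the intersection divided by the sum of the two mask sizes. A minimal sketch on flattened binary masks (a generic definition, not the paper's implementation):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity index between two binary segmentation masks
    (flattened sequences of 0/1); values > 0.7 are commonly read as
    good spatial overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree
```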

  6. Fast and Cost-Effective Biochemical Spectrophotometric Analysis of Solution of Insect "Blood" and Body Surface Elution.

    PubMed

    Łoś, Aleksandra; Strachecka, Aneta

    2018-05-09

    Using insect hemolymph ("blood") and elutions of the insect body surface, researchers can perform rapid and cheap biochemical analyses to determine an insect's immunological status. The authors describe a detailed methodology for quickly determining the concentration of total proteins and evaluating proteolytic system activity (acid, neutral, and alkaline proteases and protease inhibitors), as well as a methodology for quick "liver" tests in insects: alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), and urea and glucose concentration analyses. The meaning and interpretation of the results of the presented biochemical parameter determinations are illustrated using the example of honey bees.

  7. On sustainable and efficient design of ground-source heat pump systems

    NASA Astrophysics Data System (ADS)

    Grassi, W.; Conti, P.; Schito, E.; Testi, D.

    2015-11-01

    This paper stresses some fundamental features of GSHP design and is based on broad research we are performing at the University of Pisa. In particular, we focus the discussion on an environmentally sustainable approach based on performance optimization over the entire operational life. The proposed methodology investigates design and management strategies to find the optimal level of exploitation of the ground source, referring to other technical means to cover the remaining energy requirements and modulate the power peaks. The method is holistic, considering the system as a whole rather than focusing only on the components usually considered the most important. Each subsystem is modeled and coupled to the others in a full set of equations, which is used within an optimization routine to reproduce the operating performance of the overall GSHP system. In effect, the recommended methodology is a 4-in-1 activity, comprising sizing of components, lifecycle performance evaluation, optimization, and feasibility analysis. The paper also reviews some previous work concerning possible applications of the proposed methodology. In conclusion, we describe ongoing research activities and the objectives of future work.

  8. Application Of The Iberdrola Licensing Methodology To The Cofrentes BWR-6 110% Extended Power Up-rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mata, Pedro; Fuente, Rafael de la; Iglesias, Javier

    Iberdrola (a Spanish utility) and Iberdrola Ingenieria (its engineering branch) have been developing the 110% Extended Power Up-rate project (EPU 110%) for Cofrentes BWR-6 during the last two years. Iberdrola has an in-house design and licensing reload methodology that has been approved by the Spanish Nuclear Regulatory Authority. This methodology has already been used to perform the nuclear design and the reload licensing analysis for Cofrentes cycles 12 to 14. The methodology has also been applied to a significant number of safety analyses for the Cofrentes Extended Power Up-rate, including: reactor heat balance, core and fuel performance, thermal-hydraulic stability, ECCS LOCA evaluation, transient analysis, Anticipated Transient Without Scram (ATWS) and Station Blackout (SBO). Since the scope of the licensing process of the Cofrentes Extended Power Up-rate exceeds the range of analyses included in the Cofrentes generic reload licensing process, it was necessary to extend the applicability of the Cofrentes licensing methodology to the analysis of new transients. This is the case for the TLFW transient. This paper shows the benefits of having an in-house design and licensing methodology and describes the process of extending its applicability to the analysis of new transients. The analysis of a Total Loss of Feedwater with the Cofrentes RETRAN model is included as an example of this process. (authors)

  9. Evaluation of the methodological quality of the Health Protection Agency's 2009 guidance on neuraminidase inhibitors.

    PubMed

    Hopayian, Kevork; Jackson, Lucy

    2012-01-01

    The Health Protection Agency (HPA) issued guidance advocating the prescription of neuraminidase inhibitors in July 2009 in response to a predicted pandemic of influenza. Although the contents of the guidance have been debated, the methodology has not. The guidance was evaluated by two reviewers using a validated and internationally recognised tool for assessing guidelines, the Appraisal of Guidelines Research & Evaluation (AGREE) instrument. This tool scores six domains independently of each other. The guidance scored 61% for the domain scope and purpose and 54% for the domain clarity and presentation. By contrast, it scored only 31% for rigour of development, due to poor linkage of its recommendations to evidence. The HPA should improve its performance in this domain in order to improve the credibility of its future guidance to general practitioners.

  10. Comparison of calibration strategies for optical 3D scanners based on structured light projection using a new evaluation methodology

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Ölsner, Sandy; Kühmstedt, Peter; Notni, Gunther

    2017-06-01

    In this paper, a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used to characterize the expected measurement accuracy. Compared to the procedure proposed in the VDI/VDE guidelines for optical 3D measurement systems based on area scanning, it requires less effort and is more impartial. The methodology is suitable for evaluating sets of calibration parameters, which largely determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo-camera-based optical 3D scanner. The calibrations followed different strategies regarding calibration bodies and the arrangement of the observed scene. The results obtained with the different calibration strategies are discussed and suggestions concerning future work in this area are given.

  11. Structured syncope care pathways based on lean six sigma methodology optimises resource use with shorter time to diagnosis and increased diagnostic yield.

    PubMed

    Martens, Leon; Goode, Grahame; Wold, Johan F H; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas

    2014-01-01

    To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on the number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with a focus on geriatric populations was analysed separately from the other four. With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield.

  12. Structured Syncope Care Pathways Based on Lean Six Sigma Methodology Optimises Resource Use with Shorter Time to Diagnosis and Increased Diagnostic Yield

    PubMed Central

    Martens, Leon; Goode, Grahame; Wold, Johan F. H.; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas

    2014-01-01

    Aims To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Methods Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on the number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with a focus on geriatric populations was analysed separately from the other four. Results With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Conclusions Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield. PMID:24927475

  13. Gallium-arsenide process evaluation based on a RISC microprocessor example

    NASA Astrophysics Data System (ADS)

    Brown, Richard B.; Upton, Michael; Chandna, Ajay; Huff, Thomas R.; Mudge, Trevor N.; Oettel, Richard E.

    1993-10-01

    This work evaluates the features of a gallium-arsenide E/D MESFET process in which a 32-b RISC microprocessor was implemented. The design methodology and architecture of this prototype CPU are described. The performance sensitivity of the microprocessor and other large circuit blocks to different process parameters is analyzed, and recommendations for future process features, circuit approaches, and layout styles are made. These recommendations are reflected in the design of a second microprocessor using a more advanced process that achieves much higher density and performance.

  14. MAEPOPP Center 2015 Best Education Practices Directory

    ERIC Educational Resources Information Center

    Arendale, David R., Ed.

    2015-01-01

    Purpose: This directory identifies, describes, and contains evaluative data on evidence-based practices that improve academic performance, close the achievement gap, and improve persistence towards graduation for low-income, first-generation, and historically-underrepresented 6th grade through college students. Methodology: The directory was a…

  15. RAPID ASSESSMENT OF POTENTIAL GROUND-WATER CONTAMINATION UNDER EMERGENCY RESPONSE CONDITIONS

    EPA Science Inventory

    Emergency response actions at chemical spills and abandoned hazardous waste sites often require rapid assessment of the potential for groundwater contamination by the chemical or waste compound. This manual provides a rapid assessment methodology for performing such an evaluation...

  16. Product environmental footprint in policy and market decisions: Applicability and impact assessment.

    PubMed

    Lehmann, Annekatrin; Bach, Vanessa; Finkbeiner, Matthias

    2015-07-01

    In April 2013, the European Commission published the Product and Organisation Environmental Footprint (PEF/OEF) methodology--a life cycle-based multicriteria measure of the environmental performance of products, services, and organizations. With its approach of "comparability over flexibility," the PEF/OEF methodology aims at harmonizing existing methods, while decreasing the flexibility provided by the International Organization for Standardization (ISO) standards regarding methodological choices. Currently, a 3-y pilot phase is running, aiming at testing the methodology and developing product category and organization sector rules (PEFCR/OEFSR). Although a harmonized method is in theory a good idea, the PEF/OEF methodology presents challenges, including a risk of confusion and limitations in applicability to practice. The paper discusses the main differences between the PEF and ISO methodologies and highlights challenges regarding PEF applicability, with a focus on impact assessment. Some methodological aspects of the PEF and PEFCR Guides are found to contradict the ISO 14044 (2006) and ISO 14025 (2006). Others, such as prohibition of inventory cutoffs, are impractical. The evaluation of the impact assessment methods proposed in the PEF/OEF Guide showed that the predefined methods for water consumption, land use, and abiotic resources are not adequate because of modeling artefacts, missing inventory data, or incomplete characterization factors. However, the methods for global warming and ozone depletion perform very well. The results of this study are relevant for the PEF (and OEF) pilot phase, which aims at testing the PEF (OEF) methodology (and potentially adapting it) as well as addressing challenges and coping with them. © 2015 SETAC.

  17. Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions.

    PubMed

    Kramers, Matthew; Armstrong, Ryan; Bakhshmand, Saeed M; Fenster, Aaron; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-01-01

    Image guidance can provide surgeons with valuable contextual information during a medical intervention. However, image guidance systems often require considerable infrastructure, setup time, and operator experience. Certain procedures performed at the bedside are susceptible to navigational errors that can lead to complications. We present an application for mobile devices that can provide image guidance using augmented reality to assist in performing neurosurgical tasks. A methodology is outlined that evaluates this mode of visualization in terms of perceptual localization, depth estimation, and pointing performance, in scenarios derived from a neurosurgical targeting task. By measuring user variability and speed, we can report objective metrics of performance for our augmented reality guidance system.

  18. [Quality of clinical studies published in the RBGO over one decade (1999-2009): methodological and ethical aspects and statistical procedures].

    PubMed

    de Sá, Joceline Cássia Ferezini; Marini, Gabriela; Gelaleti, Rafael Bottaro; da Silva, João Batista; de Azevedo, George Gantas; Rudge, Marilza Vieira Cunha

    2013-11-01

    To evaluate the methodological and statistical design evolution of the publications in the Brazilian Journal of Gynecology and Obstetrics (RBGO) since resolution 196/96. A review of 133 articles published in 1999 (65) and 2009 (68) was performed by two independent reviewers with training in clinical epidemiology and methodology of scientific research. We included all original clinical articles, case and series reports, and excluded editorials, letters to the editor, systematic reviews, experimental studies, opinion articles, and abstracts of theses and dissertations. Characteristics related to the methodological quality of the studies were analyzed in each article using a checklist that evaluated two criteria: methodological aspects and statistical procedures. We used descriptive statistics and the χ2 test for comparison of the two years. There was a difference between 1999 and 2009 regarding study design and statistical procedures, with more accurate procedures and more robust tests in 2009. In RBGO, we observed an evolution in the methods of published articles and a more in-depth use of statistical analyses, with more sophisticated tests such as regression and multilevel analyses, which are essential techniques for the understanding and planning of health interventions, leading to fewer interpretation errors.

  19. ARAMIS project: a more explicit demonstration of risk control through the use of bow-tie diagrams and the evaluation of safety barrier performance.

    PubMed

    de Dianous, Valérie; Fiévez, Cécile

    2006-03-31

    Over the last two decades, a growing interest in risk analysis has been noted in industry. The ARAMIS project has defined a methodology for risk assessment, built to help industrialists demonstrate that they have sufficient risk control on their sites. Risk analysis consists first of the identification of all the major accidents, assuming that the safety functions in place are inefficient. This identification step uses bow-tie diagrams. Secondly, the safety barriers actually implemented on the site are taken into account. The barriers are identified on the bow-ties, and an evaluation of their performance (response time, efficiency, and level of confidence) is performed to validate that they are relevant for the expected safety function. Finally, the evaluation of their probability of failure enables assessment of the frequency of occurrence of the accident. The demonstration of risk control, based on a severity/frequency-of-occurrence pair, is thus possible for all accident scenarios. During the risk analysis, a practical tool called a risk graph is used to assess whether the number and reliability of the safety functions for a given cause are sufficient to reach a good level of risk control.

  20. A new method to assess the sustainability performance of events: Application to the 2014 World Orienteering Championship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrucca, Flavio; Severi, Claudio; Galvan, Nicola

    Nowadays, increasing attention from public and private agencies to the sustainability performance of events is observed, since it is recognized as a key issue in the context of sustainable development. Assessing the sustainability performance of events involves environmental, social and economic aspects; their impacts are complex and a quantitative assessment is often difficult. This paper presents a new quali-quantitative method developed to measure the sustainability of events, taking into account all their potential impacts. The 2014 World Orienteering Championship, held in Italy, was selected to test the proposed evaluation methodology. The total carbon footprint of the event was 165.34 tCO2eq and the avoided emissions were estimated at 46 tCO2eq. The adopted quali-quantitative method proved effective in assessing the sustainability impacts and can be applied to the evaluation of similar events. - Highlights: • A quali-quantitative method to assess events' sustainability is presented. • All the methodological issues related to the method are explained. • The method is used to evaluate the sustainability of an international sports event. • The method proved valid for assessing the event's sustainability level. • The carbon footprint of the event has been calculated.
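    Carbon footprint totals like the one reported here are typically built by multiplying activity data by emission factors and summing. A minimal sketch with entirely hypothetical activity data and factors (not the study's inventory):

```python
# Hypothetical event inventory: (quantity, emission factor in kgCO2eq per unit).
activities = {
    "participant travel (passenger-km)": (250_000, 0.45),
    "accommodation (guest-nights)": (12_000, 8.0),
    "catering (meals)": (30_000, 1.2),
    "venue energy (kWh)": (40_000, 0.35),
}

def carbon_footprint_t(items):
    """Sum of quantity * emission factor, converted from kg to tonnes CO2eq."""
    return sum(qty * ef for qty, ef in items.values()) / 1000.0

print(f"total footprint ~ {carbon_footprint_t(activities):.1f} tCO2eq")
```

    With these made-up inputs the total is 258.5 tCO2eq; a quali-quantitative method of the kind described adds qualitative scoring on top of exactly this kind of quantitative core.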

  1. Direct metal laser sintering titanium dental implants: a review of the current literature.

    PubMed

    Mangano, F; Chambrone, L; van Noort, R; Miller, C; Hatton, P; Mangano, C

    2014-01-01

    Statement of Problem. Direct metal laser sintering (DMLS) is a technology that allows fabrication of complex-shaped objects from powder-based materials, according to a three-dimensional (3D) computer model. With DMLS, it is possible to fabricate titanium dental implants with an inherently porous surface, a key property required of implantation devices. Objective. The aim of this review was to evaluate the evidence for the reliability of DMLS titanium dental implants and their clinical and histologic/histomorphometric outcomes, as well as their mechanical properties. Materials and Methods. Electronic database searches were performed. Inclusion criteria were clinical and radiographic studies, histologic/histomorphometric studies in humans and animals, mechanical evaluations, and in vitro cell culture studies on DMLS titanium implants. Meta-analysis could be performed only for randomized controlled trials (RCTs); to evaluate the methodological quality of observational human studies, the Newcastle-Ottawa scale (NOS) was used. Results. Twenty-seven studies were included in this review. No RCTs were found, and meta-analysis could not be performed. The outcomes of observational human studies were assessed using the NOS: these studies showed medium methodological quality. Conclusions. Several studies have demonstrated the potential for the use of DMLS titanium implants. However, further studies that demonstrate the benefits of DMLS implants over conventional implants are needed.

  2. Direct Metal Laser Sintering Titanium Dental Implants: A Review of the Current Literature

    PubMed Central

    Mangano, F.; Chambrone, L.; van Noort, R.; Miller, C.; Hatton, P.; Mangano, C.

    2014-01-01

    Statement of Problem. Direct metal laser sintering (DMLS) is a technology that allows fabrication of complex-shaped objects from powder-based materials, according to a three-dimensional (3D) computer model. With DMLS, it is possible to fabricate titanium dental implants with an inherently porous surface, a key property required of implantation devices. Objective. The aim of this review was to evaluate the evidence for the reliability of DMLS titanium dental implants and their clinical and histologic/histomorphometric outcomes, as well as their mechanical properties. Materials and Methods. Electronic database searches were performed. Inclusion criteria were clinical and radiographic studies, histologic/histomorphometric studies in humans and animals, mechanical evaluations, and in vitro cell culture studies on DMLS titanium implants. Meta-analysis could be performed only for randomized controlled trials (RCTs); to evaluate the methodological quality of observational human studies, the Newcastle-Ottawa scale (NOS) was used. Results. Twenty-seven studies were included in this review. No RCTs were found, and meta-analysis could not be performed. The outcomes of observational human studies were assessed using the NOS: these studies showed medium methodological quality. Conclusions. Several studies have demonstrated the potential for the use of DMLS titanium implants. However, further studies that demonstrate the benefits of DMLS implants over conventional implants are needed. PMID:25525434

  3. Methodological and ethical aspects of the sexual maturation assessment in adolescents

    PubMed Central

    de Faria, Eliane Rodrigues; Franceschini, Sylvia do Carmo C.; Peluzio, Maria do Carmo G.; Sant'Ana, Luciana Ferreira da R.; Priore, Silvia Eloiza

    2013-01-01

    OBJECTIVE To analyze methodological and ethical aspects in the sexual maturation assessment of adolescents. DATA SOURCES Books and theses, articles and legislation in the Medline, SciELO, and Science Direct databases, as well as institutional documents of the World Health Organization and the Pediatric Societies of Brazil and São Paulo, covering the period from 1962 to 2012. The following keywords were used in Portuguese and English: "sexual maturation", "self-assessment", "ethics", "objective assessment of sexual maturation", "puberty", "adolescent", and "adolescent development". DATA SYNTHESIS The sexual maturation assessment is used in population studies and in daily clinical care. The direct evaluation is performed by a specialized physician, whereas the self-assessment is carried out by the adolescent. This evaluation should be carefully performed in an appropriate place, taking into account the ethical aspects. The patient should not be constrained, and the physician must respect privacy and confidentiality. Before this evaluation, and independently of the method used, the adolescent should receive information and explanation about the procedure and the tools that will be applied. Furthermore, the patient has the right to decide whether an adult should be present. CONCLUSIONS Validation studies showed that self-assessment is inferior to clinical assessment and should, therefore, be performed only when direct examination by a physician is not possible. PMID:24142325

  4. Defining a reference set to support methodological research in drug safety.

    PubMed

    Ryan, Patrick B; Schuemie, Martijn J; Welebob, Emily; Duke, Jon; Valentine, Sarah; Hartzema, Abraham G

    2013-10-01

    Methodological research to evaluate the performance of methods requires a benchmark to serve as a referent comparison. In drug safety, the performance of analyses of spontaneous adverse event reporting databases and observational healthcare data, such as administrative claims and electronic health records, has been limited by the lack of such standards. The aim was to establish a reference set of test cases that contains both positive and negative controls, which can serve as the basis for methodological research in evaluating methods' performance in identifying drug safety issues. Systematic literature review and natural language processing of structured product labeling were performed to identify evidence to support the classification of drugs as either positive or negative controls for four outcomes: acute liver injury, acute kidney injury, acute myocardial infarction, and upper gastrointestinal bleeding. Three hundred ninety-nine test cases comprising 165 positive controls and 234 negative controls were identified across the four outcomes. The majority of positive controls for acute kidney injury and upper gastrointestinal bleeding were supported by randomized clinical trial evidence, while the majority of positive controls for acute liver injury and acute myocardial infarction were supported only by published case reports. Literature estimates for the positive controls show substantial variability that limits the ability to establish a reference set with known effect sizes. A reference set of test cases can be established to facilitate methodological research in drug safety. Creating a sufficient sample of drug-outcome pairs with binary classification of having no effect (negative controls) or an increased effect (positive controls) is possible and can enable estimation of predictive accuracy through discrimination. Since the magnitude of the positive effects cannot be reliably obtained and the quality of evidence may vary across outcomes, assumptions are required to use the test cases in real data for purposes of measuring bias, mean squared error, or coverage probability.
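    The "predictive accuracy through discrimination" mentioned here is commonly measured as the area under the ROC curve: the probability that a method scores a positive control above a negative control. A minimal sketch with made-up method scores (the drug names and scores are hypothetical, not from the actual reference set):

```python
# Hypothetical reference set: (drug, label, method score); label 1 marks a
# positive control (known effect), 0 a negative control (no known effect).
reference_set = [
    ("drugA", 1, 2.8), ("drugB", 1, 1.9), ("drugC", 1, 1.4),
    ("drugD", 0, 1.1), ("drugE", 0, 0.9), ("drugF", 0, 1.6),
]

def auc(cases):
    """Probability that a randomly chosen positive control receives a higher
    score than a randomly chosen negative control (ties count half)."""
    pos = [score for _, label, score in cases if label == 1]
    neg = [score for _, label, score in cases if label == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(f"discrimination (AUC) = {auc(reference_set):.3f}")
```

    With these made-up scores the AUC is 8/9 ≈ 0.889: the method ranks 8 of the 9 positive/negative pairs correctly. Note that, as the abstract cautions, discrimination can be estimated from binary labels alone, whereas bias or mean squared error would require known effect sizes.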

  5. Consistency of performance of robot-assisted surgical tasks in virtual reality.

    PubMed

    Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N

    2009-01-01

    The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.
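    The regularity of a movement trajectory in this kind of nonlinear analysis is often quantified with measures such as sample entropy, where lower values indicate a more consistent, repeatable pattern. A minimal sketch (the signals and parameters are illustrative; the paper does not specify this exact estimator):

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy of a 1-D series: lower values indicate a more regular,
    repeatable pattern; higher values indicate irregularity."""
    n = len(series)
    mean = sum(series) / n
    tol = r * (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    def matches(length):
        # Count pairs of length-`length` templates that agree within tol.
        hits = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if all(abs(series[i + k] - series[j + k]) <= tol
                       for k in range(length)):
                    hits += 1
        return hits
    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a > 0 else float("inf")

random.seed(0)
smooth = [math.sin(0.4 * t) for t in range(200)]            # regular trajectory
erratic = [random.uniform(-1.0, 1.0) for _ in range(200)]   # irregular trajectory
print(sample_entropy(smooth), sample_entropy(erratic))
```

    The regular sinusoidal trajectory yields a much lower value than the noisy one, which is the kind of contrast such measures exploit when comparing instrument-tip trajectories between the real and virtual environments.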

  6. Probabilistic design of fibre concrete structures

    NASA Astrophysics Data System (ADS)

    Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.

    2017-09-01

    Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing and shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, with or without prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness and ductility are described in the paper. Since the variability of fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and for evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty of the material properties, or their randomness obtained from material tests, is accounted for in the random distributions. Furthermore, degradation of reinforced concrete materials, such as carbonation of concrete and corrosion of reinforcement, can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety over time. The results can serve as a rational basis for the design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated with results from two probabilistic studies of different types of concrete structures related to practical applications and made from various materials (with parameters obtained from real material tests).
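    The failure probability and reliability index mentioned above can be estimated by randomizing a limit state function, which is the essence of the randomized analysis described. A crude Monte Carlo sketch with a linear limit state and hypothetical distribution parameters (a real application would sample inputs to the nonlinear FE model instead):

```python
import random
from statistics import NormalDist

random.seed(42)  # reproducible sampling

def reliability_index(n=100_000):
    """Monte Carlo estimate of failure probability pf for a hypothetical limit
    state g = R - S, and the corresponding reliability index beta = -Phi^-1(pf)."""
    failures = 0
    for _ in range(n):
        r = random.gauss(40.0, 5.0)   # resistance, e.g. kNm (assumed distribution)
        s = random.gauss(25.0, 4.0)   # load effect, e.g. kNm (assumed distribution)
        if r - s < 0.0:
            failures += 1
    pf = failures / n
    beta = -NormalDist().inv_cdf(pf) if pf > 0 else float("inf")
    return pf, beta

pf, beta = reliability_index()
print(f"failure probability ~ {pf:.4f}, reliability index beta ~ {beta:.2f}")
```

    For this Gaussian case the result can be checked analytically: beta = (40 - 25) / sqrt(5^2 + 4^2) ≈ 2.34, i.e. pf ≈ 0.0096, so the sampling estimate has a closed-form sanity check that a nonlinear FE limit state would not.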

  7. Ergonomics and design: traffic sign and street name sign.

    PubMed

    Moroni, Janaina Luisa da Silva; Aymone, José Luís Farinatti

    2012-01-01

    This work proposes a design methodology using ergonomics and anthropometry concepts applied to traffic sign and street name sign projects. Initially, a literature review on cognitive ergonomics and anthropometry is performed. Several authors and their design methodologies are analyzed; the aspects to be considered in projects of traffic and street name signs are selected, and other specific aspects are proposed for the design methodology. A case study of the signs of the "Street of Antiques" in the city of Porto Alegre is presented. To this end, interviews with the population were conducted to evaluate the current state of the signs. After that, a new sign proposal with virtual prototyping was developed using the proposed methodology. The results obtained from new interviews about the proposal show user satisfaction and the importance of cognitive ergonomics to the development of this type of urban furniture.

  8. Supervisor/Peer Involvement in Evaluation Transfer of Training Process and Results Reliability: A Research in an Italian Public Body

    ERIC Educational Resources Information Center

    Capaldo, Guido; Depolo, Marco; Rippa, Pierluigi; Schiattone, Domenico

    2017-01-01

    Purpose: The aim of this paper is to present a study performed in conjunction with a branch of the Italian Public Italian Administration, the ISSP (Istituto Superiore di Studi Penitenziari--the Higher Institute of Penitentiary Studies). The study aimed to develop a Transfer of Training (ToT) evaluation methodology that would be both scientifically…

  9. [Non-randomized evaluation studies (TREND)].

    PubMed

    Vallvé, Carles; Artés, Maite; Cobo, Erik

    2005-12-01

    Nonrandomized intervention studies are needed when randomized clinical trials cannot be performed. To report the results from nonrandomized intervention studies transparently, the TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) checklist should be used. This implies that nonrandomized studies should follow the other methodological safeguards usually employed in randomized trials, and that the uncertainty introduced by the allocation mechanism should be explicitly reported and, if possible, quantified.

  10. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE (PM 10-2.5) CONCENTRATIONS: AUGUST 2005 UPDATED REPORT REGARDING SECOND-GENERATION AND NEW PM 10-2.5 SAMPLERS

    EPA Science Inventory

    Multi-site field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 (PM10-2.5) in ambient air. The field studies involved the use of both time-integrated filter-based and direct continuous methods. Despite operationa...

  11. SOAP Methodology in General Practice/Family Medicine Teaching in Practical Context.

    PubMed

    Santiago, Luiz Miguel; Neto, Isabel

    2016-12-30

    Medical records in General Practice/Family Medicine are an essential source of information on the health status of the patient and a communication document between health professionals. The development of competencies in General Practice/Family Medicine during pre-graduation must include the ability to make adequate medical records in a practical context. Since 2012, medicine students at the University of Beira Interior have been performing visits using the Subjective, Objective, Assessment and Plan (SOAP) methodology, with a performance evaluation of the visit, with the aim of checking in which SOAP aspects students reveal the most difficulties, in order to define improvement techniques and to correlate patient grade with tutor evaluation. We analysed the evaluation data for the 2015-2016 school year for the General Practice/Family Medicine visit carried out by fourth-year medicine students, comparing the averages of each item on the SOAP checklist and the patient evaluation. On the SOAP checklist, 29.7% of students were in the best grade quartile, 37.1% in the best competencies quartile and 27.2% in the best patient grade quartile. 'Evolution was verified/noted' received the worst grades in Subjective, 'Record of physical examination focused on the problem of the visit' received the worst grades in Objective, 'Notes of diagnostic reasoning / differential diagnosis' received the worst grades in Assessment, and 'Negotiation of aims to achieve' received the worst grades in Plan. The best tutor evaluation was found in 'communication'. Only one previous study evaluated students' performance under examination during a visit, with results similar to the present one, and none addressed the patient's evaluation. Students revealed a good performance in using SOAP. The findings represent the beginning of the introduction of SOAP to the students. This evaluation breaks ground towards better ways to teach the most difficult aspects.

  12. Instruments for Assessing Risk of Bias and Other Methodological Criteria of Published Animal Studies: A Systematic Review

    PubMed Central

    Krauth, David; Woodruff, Tracey J.

    2013-01-01

    Background: Results from animal toxicology studies are critical to evaluating the potential harm from exposure to environmental chemicals or the safety of drugs prior to human testing. However, there is significant debate about how to evaluate the methodology and potential biases of the animal studies. There is no agreed-upon approach, and a systematic evaluation of current best practices is lacking. Objective: We performed a systematic review to identify and evaluate instruments for assessing the risk of bias and/or other methodological criteria of animal studies. Method: We searched Medline (January 1966–November 2011) to identify all relevant articles. We extracted data on risk of bias criteria (e.g., randomization, blinding, allocation concealment) and other study design features included in each assessment instrument. Discussion: Thirty distinct instruments were identified, with the total number of assessed risk of bias, methodological, and/or reporting criteria ranging from 2 to 25. The most common criteria assessed were randomization (25/30, 83%), investigator blinding (23/30, 77%), and sample size calculation (18/30, 60%). In general, authors failed to empirically justify why these or other criteria were included. Nearly all (28/30, 93%) of the instruments have not been rigorously tested for validity or reliability. Conclusion: Our review highlights a number of risk of bias assessment criteria that have been empirically tested for animal research, including randomization, concealment of allocation, blinding, and accounting for all animals. In addition, there is a need for empirically testing additional methodological criteria and assessing the validity and reliability of a standard risk of bias assessment instrument. Citation: Krauth D, Woodruff TJ, Bero L. 2013. Instruments for assessing risk of bias and other methodological criteria of published animal studies: a systematic review. 
Environ Health Perspect 121:985–992 (2013); http://dx.doi.org/10.1289/ehp.1206389 PMID:23771496

  13. Experimental Evaluation Methodology for Spacecraft Proximity Maneuvers in a Dynamic Environment

    DTIC Science & Technology

    2017-06-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Dissertation: Experimental Evaluation Methodology for Spacecraft Proximity Maneuvers in a Dynamic Environment. Report period: 2014 to June 16, 2017. Approved for public release; distribution is unlimited.

  14. Uncertainty evaluation of EnPIs in industrial applications as a key factor in setting improvement actions

    NASA Astrophysics Data System (ADS)

    D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.

    2015-11-01

    A methodology is proposed that assumes high-level Energy Performance Indicator (EnPI) uncertainty as a quantitative indicator of the evolution of an Energy Management System (EMS). Motivations leading to the selection of the EnPIs, uncertainty evaluation techniques and criteria supporting decision-making are discussed, in order to plan and pursue reliable measures for energy performance improvement. In this paper, problems, priorities, operative possibilities and reachable improvement limits are examined, starting from the measurement uncertainty assessment. Two different industrial cases are analysed with reference to the following aspects: absence/presence of an energy management policy and action plans; responsibility level for energy issues; employees' training and motivation with respect to energy problems; absence/presence of adequate infrastructures for monitoring and sharing of energy information; level of standardization and integration of methods and procedures linked to energy activities; and economic and financial resources for the improvement of energy efficiency. A critical and comparative analysis of the obtained results is carried out. The methodology, experimentally validated, allows the development of useful considerations for effective, realistic and economically feasible improvement plans, depending on the specific situation. Recursive application of the methodology yields a reliable and well-resolved assessment of the EMS status, also in dynamic industrial contexts.
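
The role of EnPI uncertainty can be illustrated with a minimal sketch. Assuming (not from the paper) a ratio-type EnPI such as specific energy consumption = energy / production, with uncorrelated input uncertainties combined GUM-style by root-sum-square, an energy-saving action is credible only when the observed EnPI reduction exceeds the expanded uncertainty:

```python
import math

def enpi_relative_uncertainty(energy, u_energy, production, u_production):
    """Relative standard uncertainty of EnPI = energy / production for
    uncorrelated inputs (first-order GUM propagation)."""
    return math.sqrt((u_energy / energy) ** 2 + (u_production / production) ** 2)

def improvement_is_significant(enpi_before, enpi_after, u_rel, k=2.0):
    """Accept an EnPI reduction as reliable only if it exceeds the
    expanded uncertainty (coverage factor k, roughly 95% for k=2)."""
    return (enpi_before - enpi_after) > k * u_rel * enpi_before

# Illustrative numbers: 1200 MWh +/- 24 MWh used to make 10000 t +/- 150 t
u_rel = enpi_relative_uncertainty(1200.0, 24.0, 10000.0, 150.0)
print(u_rel)  # ~0.025, i.e. 2.5% on the 0.12 MWh/t indicator
print(improvement_is_significant(0.120, 0.110, u_rel))  # True
print(improvement_is_significant(0.120, 0.118, u_rel))  # False
```

With 2% and 1.5% relative input uncertainties the indicator carries 2.5% uncertainty, so a drop from 0.120 to 0.110 MWh/t counts as a real improvement while a 2% drop does not, which is exactly the kind of decision the paper argues should be grounded in uncertainty assessment.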

  15. Incorporating Probability Models of Complex Test Structures to Perform Technology Independent FPGA Single Event Upset Analysis

    NASA Technical Reports Server (NTRS)

    Berg, M. D.; Kim, H. S.; Friendlich, M. A.; Perez, C. E.; Seidlick, C. M.; LaBel, K. A.

    2011-01-01

    We present SEU testing and analysis of the Microsemi ProASIC3 FPGA. SEU probability models are incorporated for device evaluation. A comparison to the RTAXS FPGA is included, illustrating the effectiveness of the overall testing methodology.

  16. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  17. 48 CFR 1552.216-70 - Award Fee.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 1552.216-70 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY CLAUSES AND... the award fee to be paid is determined by the Government's judgmental evaluation of the contractor's performance in terms of the criteria stated in the contract. This determination and the methodology for...

  18. Evaluating the Effects of Clothing and Individual Equipment on Marksmanship Performance Using a Novel Five Target Methodology

    DTIC Science & Technology

    2016-11-01

    ...operationally relevant and address the key factors for Warfighter performance. Not subject to U.S. copyright restrictions. ... (11B) from the 75th Ranger Regiment. Two TPs were Aberdeen Test Center (ATC) Contractors as Representative Soldiers (CARS). One of the CARS is...

  19. The effects of massage therapy in hospitalized preterm neonates: A systematic review.

    PubMed

    Álvarez, María José; Fernández, Daniel; Gómez-Salgado, Juan; Rodríguez-González, Dolores; Rosón, María; Lapeña, Santiago

    2017-04-01

    The aim of this study was to perform a systematic review to identify, evaluate and summarise studies on the administration of therapeutic massage to preterm neonates during their stay in the NICU, and to assess their methodological quality. A systematic review was conducted following PRISMA statement guidelines. A comprehensive search was performed for relevant articles published between January 2004 and December 2013, using the following electronic databases: Medline, PEDro, Web of Science and Scopus. Two reviewers examined the selected articles: one evaluated the methodological quality of the studies and performed data extraction, and the other performed a cross-check. Divergences of opinion were resolved by discussion with a third reviewer. The studies reviewed implemented a wide variety of interventions and evaluation methods, and it was therefore not possible to perform a meta-analysis. The following data were extracted from each article: year of publication, study design, participants and main measurements of outcomes obtained through the intervention. A non-quantitative synthesis of the extracted data was performed. Level of evidence was graded using the Jadad Scale. A total of 23 articles met the inclusion criteria and were thus included in the review; these presented a methodological quality ranging from 1 to 5 points (with a mean of 3 points). Most studies reported that the administration of various forms of therapeutic massage exerted a beneficial effect on factors related to the growth of preterm infants. The causes suggested by the researchers for these anthropometric benefits included increased vagal activity, increased gastric activity and increased serum insulin levels. Other demonstrated benefits of massage therapy administered to hospitalised preterm infants included better neurodevelopment, a positive effect on brain development, a reduced risk of neonatal sepsis, a reduction in length of hospital stay and reduced neonatal stress. 
Although based on a qualitative analysis of heterogeneous data, the present review suggests that a clear benefit is obtained from the administration of massage therapy in hospitalised preterm infants, a finding which should encourage the more generalised use of massotherapy in NICU clinical practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Mortar radiocarbon dating: preliminary accuracy evaluation of a novel methodology.

    PubMed

    Marzaioli, Fabio; Lubritto, Carmine; Nonni, Sara; Passariello, Isabella; Capano, Manuela; Terrasi, Filippo

    2011-03-15

    Mortars represent a class of building and art materials that has been widespread at archeological sites since the Neolithic period. After about 50 years of experimentation, the possibility of evaluating their absolute chronology by means of radiocarbon ((14)C) remains uncertain. Using a simplified mortar production process in the laboratory environment, this study shows the overall feasibility of a novel physical pretreatment for the isolation of the atmospheric (14)CO(2) (i.e., binder) signal absorbed by mortars during their setting. This methodology is based on the assumption that an ultrasonic attack in the liquid phase isolates a suspension of binder carbonates from bulk mortars. Isotopic ((13)C and (14)C), %C, X-ray diffractometry (XRD), and scanning electron microscopy (SEM) analyses were performed to characterize the proposed methodology. The applied protocol allows suppression of the fossil carbon (C) contamination originating from the incomplete burning of the limestone during quicklime production, providing unbiased dating for "laboratory" mortars produced at historically adopted burning temperatures.
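
Once the binder signal is isolated, it is dated with the standard conventional-age relation. The sketch below is illustrative (not the paper's code): it uses the Libby mean life of 8033 yr and also shows how a small fraction of 14C-dead carbon from incompletely burnt limestone biases the apparent age, which is precisely the contamination the pretreatment is designed to suppress:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years: conventional (Libby) 14C mean life

def conventional_14c_age(f14c):
    """Conventional radiocarbon age (years BP) from the measured
    fraction of modern carbon, F14C."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def apparent_age_shift(f14c_binder, fossil_fraction):
    """Age bias introduced when a fraction of the dated carbon is
    14C-dead, e.g. unburnt limestone mixed into the mortar binder."""
    diluted = f14c_binder * (1.0 - fossil_fraction)
    return conventional_14c_age(diluted) - conventional_14c_age(f14c_binder)

print(round(conventional_14c_age(0.5)))      # 5568 BP: one Libby half-life
print(round(apparent_age_shift(1.0, 0.05)))  # ~412 yr too old from 5% dead C
```

Even a 5% fossil-carbon admixture makes a mortar look centuries too old, which is why removing it matters more than improving counting precision.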

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Darren M.

    Sandia National Laboratories has tested and evaluated the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication. The test results included in this report were in response to static and tonal-dynamic input signals. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and 1241 for Analog to Digital Converters; others were designed by Sandia specifically for infrasound application evaluation and for supplementary criteria not addressed in the IEEE standards. The objective of this work was to evaluate the overall technical performance of the Geotech Smart24 digitizer with a Fortezza PCMCIA crypto card actively implementing the signing of data packets. The results of this evaluation were compared to relevant specifications provided within the manufacturer's documentation notes. The tests performed were chosen to demonstrate different performance aspects of the digitizer under test. The performance aspects tested include determining noise floor, least significant bit (LSB), dynamic range, cross-talk, relative channel-to-channel timing, time-tag accuracy, analog bandwidth and calibrator performance.
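
Two of the listed figures of merit, LSB size and quantization-limited dynamic range, follow from the digitizer's bit depth alone. A brief sketch of the textbook relations (the 40 V peak-to-peak input range is illustrative, not taken from the Smart24 specification):

```python
def lsb_volts(full_scale_range_v, n_bits):
    """Voltage represented by one least significant bit of an ideal ADC."""
    return full_scale_range_v / (2 ** n_bits)

def ideal_snr_db(n_bits):
    """Quantization-limited SNR of an ideal N-bit ADC driven by a
    full-scale sine wave: 6.02*N + 1.76 dB."""
    return 6.02 * n_bits + 1.76

# Illustrative: a 24-bit digitizer with a 40 V peak-to-peak input range
print(lsb_volts(40.0, 24))  # ~2.38e-06 V per count
print(ideal_snr_db(24))     # ~146.24 dB ideal dynamic range
```

Measured noise floor and dynamic range always fall short of these ideal numbers, which is why standards such as IEEE 1057/1241 prescribe measuring them rather than quoting bit depth.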

  2. Identification of Dynamic Simulation Models for Variable Speed Pumped Storage Power Plants

    NASA Astrophysics Data System (ADS)

    Moreira, C.; Fulgêncio, N.; Silva, B.; Nicolet, C.; Béguin, A.

    2017-04-01

    This paper addresses the identification of reduced order models for variable speed pump-turbine plants, including the representation of the dynamic behaviour of the main components: hydraulic system, turbine governors, electromechanical equipment and power converters. A methodology for the identification of appropriate reduced order models for both turbine and pump operating modes is presented and discussed. The methodological approach consists of three main steps: 1) detailed pumped-storage power plant modelling in SIMSEN; 2) reduced order model identification; and 3) specification of test conditions for performance evaluation.
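
Step 2, reduced order model identification, can be sketched in its simplest form: fitting a low-order discrete-time model to input/state records generated by the detailed simulation. The example below is illustrative only (the actual SIMSEN models are far richer); it recovers known first-order coefficients by least squares:

```python
import random

def identify_first_order(u, x):
    """Least-squares fit of x[k+1] = a*x[k] + b*u[k] from input/state
    records, via the 2x2 normal equations (pure stdlib)."""
    sxx = sxu = suu = sxy = suy = 0.0
    for k in range(len(u) - 1):
        sxx += x[k] * x[k]
        sxu += x[k] * u[k]
        suu += u[k] * u[k]
        sxy += x[k] * x[k + 1]
        suy += u[k] * x[k + 1]
    det = sxx * suu - sxu * sxu
    a = (suu * sxy - sxu * suy) / det
    b = (sxx * suy - sxu * sxy) / det
    return a, b

# Response of a known "detailed model" x[k+1] = 0.9*x[k] + 0.5*u[k]:
random.seed(1)
u = [random.uniform(-1.0, 1.0) for _ in range(200)]
x = [0.0]
for k in range(len(u) - 1):
    x.append(0.9 * x[k] + 0.5 * u[k])
a_hat, b_hat = identify_first_order(u, x)
print(round(a_hat, 6), round(b_hat, 6))  # recovers 0.9 and 0.5
```

In practice the reduced model order and the excitation signals (step 3's test conditions) are chosen so that the fit remains valid over the operating range of interest.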

  3. Bayesian design of decision rules for failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
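
A minimal sketch of such a suboptimal rule, under assumptions not taken from the paper (Gaussian residuals whose mean shifts on failure, a posterior floored at the prior, and a fixed decision threshold), looks like this:

```python
import math, random

def gauss_pdf(r, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_update(p_fail, residual, mu0=0.0, mu1=2.0, sigma=1.0):
    """One Bayes update of the failure probability from a residual that
    is N(mu0, sigma) with no failure and N(mu1, sigma) under failure."""
    num = p_fail * gauss_pdf(residual, mu1, sigma)
    den = num + (1.0 - p_fail) * gauss_pdf(residual, mu0, sigma)
    return num / den

def detect(residuals, prior=0.01, threshold=0.95):
    """Suboptimal threshold rule: flag a failure once the posterior
    exceeds `threshold`. The posterior is floored at the prior so a
    long nominal stretch cannot delay detection indefinitely."""
    p = prior
    for k, r in enumerate(residuals):
        p = max(posterior_update(p, r), prior)
        if p > threshold:
            return k, p
    return None, p

random.seed(7)
# Nominal residuals for 30 steps, then a failure shifts their mean to 2.0:
rs = [random.gauss(0.0, 1.0) for _ in range(30)] + \
     [random.gauss(2.0, 1.0) for _ in range(30)]
k, p = detect(rs)
print("alarm at step", k, "with posterior", round(p, 3))
```

Flooring the posterior at the prior is one of the practical simplifications that makes the rule suboptimal in the Bayes sense but keeps the detection delay bounded, the kind of trade-off the design methodology is meant to evaluate numerically.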

  4. Assessing the role of mini-applications in predicting key performance characteristics of scientific and engineering applications

    DOE PAGES

    Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...

    2014-09-28

    Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full scale applications. Each miniapp is designed to represent a key performance characteristic that does or is expected to significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, ''Under what conditions does a miniapp represent a key performance characteristic in a full app?''

  5. Performance Management in Healthcare Organizations: Concept and Practicum.

    PubMed

    Dimitropoulos, Panagiotis E

    2017-01-01

    Organizational performance can create and sustain competitive advantages for corporations and even improve their sustainability and future prospects. Healthcare organizations represent a sector where performance management is structured along multiple dimensions. The scope of this study is to analyze the issue of performance management in healthcare organizations, and specifically the implementation of the Balanced Scorecard (BSC) methodology in organizations providing health services. The study provides a discussion of the BSC development process, the steps that management has to take in order to prepare for the implementation of the BSC, and finally a practical example of a scorecard with specific strategic goals and performance indicators. Managers of healthcare organizations, specifically those providing services to the elderly and the general population, could use the propositions of the study as a roadmap for processing, analyzing, evaluating and implementing the balanced scorecard approach in their organizations' daily operations. The BSC methodology can give an advantage in terms of enhanced stakeholder management and preservation within a highly volatile and competitive economic environment.

  6. MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology

    PubMed Central

    Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota

    2015-01-01

    We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn an accurate predictive model, termed MiRduplexSVM, from miRBase 19.0. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes and MiRdup, as well as a Simple Geometric Locator, when applied on the same training datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and can generalize very well on plant hairpins without any special optimization; (c) the tool has a number of important applications, such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation to recent confidence evaluation methods used in miRBase, MiRduplexSVM was successful in identifying high confidence potential miRNAs. PMID:25961860

  7. Acceptance testing for PACS: from methodology to design to implementation

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Huang, H. K.

    2004-04-01

    Acceptance Testing (AT) is a crucial step in the implementation process of a PACS within a clinical environment. AT determines whether the PACS is ready for clinical use and marks the official sign-off of the PACS product. Most PACS vendors have AT plans; however, these plans do not provide a complete and robust evaluation of the full system. In addition, different sites will have special requirements that vendor AT plans do not cover. The purpose of this paper is to introduce a protocol for AT design and to present case studies of AT performed on clinical PACS. A methodology is presented that includes identifying testing components within PACS, quality assurance for both functionality and performance, and technical testing focusing on key single points of failure within the PACS product. Tools and resources that assist in performing AT are discussed. In addition, implementation of the AT within the clinical environment and the overall implementation timeline of the PACS process are presented. Finally, case studies of actual AT of clinical PACS performed in the healthcare environment are reviewed. The methodology for designing and implementing a robust AT plan for PACS was documented, has been used in PACS acceptance tests at several sites, and can be applied to any future PACS installation as a validation of the product being acquired by radiology departments and hospitals. A robust AT plan for a PACS installation can increase both the utilization and satisfaction of a successful implementation of a PACS product that benefits both vendor and customer.

  8. Epidemiological characteristics and methodological quality of meta-analyses on diabetes mellitus treatment: a systematic review.

    PubMed

    Wu, Xin Yin; Lam, Victor C K; Yu, Yue Feng; Ho, Robin S T; Feng, Ye; Wong, Charlene H L; Yip, Benjamin H K; Tsoi, Kelvin K F; Wong, Samuel Y S; Chung, Vincent C H

    2016-11-01

    Well-conducted meta-analyses (MAs) are considered as one of the best sources of clinical evidence for treatment decision. MA with methodological flaws may introduce bias and mislead evidence users. The aim of this study is to investigate the characteristics and methodological quality of MAs on diabetes mellitus (DM) treatments. Systematic review. Cochrane Database of Systematic Review and Database of Abstract of Reviews of Effects were searched for relevant MAs. Assessing methodological quality of systematic reviews (AMSTAR) tool was used to evaluate the methodological quality of included MAs. Logistic regression analysis was used to identify association between characteristics of MA and AMSTAR results. A total of 252 MAs including 4999 primary studies and 13,577,025 patients were included. Over half of the MAs (65.1%) only included type 2 DM patients and 160 MAs (63.5%) focused on pharmacological treatments. About 89.7% MAs performed comprehensive literature search and 89.3% provided characteristics of included studies. Included MAs generally had poor performance on the remaining AMSTAR items, especially in assessing publication bias (39.3%), providing lists of studies (19.0%) and declaring source of support comprehensively (7.5%). Only 62.7% MAs mentioned about harm of interventions. MAs with corresponding author from Asia performed less well in providing MA protocol than those from Europe. Methodological quality of MA on DM treatments was unsatisfactory. There is considerable room for improvement, especially in assessing publication bias, providing lists of studies and declaring source of support comprehensively. Also, there is an urgent need for MA authors to report treatment harm comprehensively. © 2016 European Society of Endocrinology.
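
The association analysis described, a binary MA characteristic against compliance with an AMSTAR item, can be sketched with a from-scratch logistic regression on synthetic data (the groups and rates below are invented for illustration, not taken from the review):

```python
import math, random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Logistic regression y ~ sigmoid(b0 + b1*x), fitted by batch
    gradient descent on the log-likelihood (pure stdlib)."""
    b0 = b1 = 0.0
    n = float(len(xs))
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic illustration: x = 1 for one MA subgroup, 0 for the other;
# y = 1 if the MA complied with an AMSTAR item. Generating rates: 70% vs 30%.
random.seed(3)
xs = [random.randint(0, 1) for _ in range(300)]
ys = [1 if random.random() < (0.7 if x else 0.3) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)
print(round(odds_ratio, 2))  # data-generating odds ratio is ~5.4
```

The exponentiated slope is the odds ratio reported in such analyses; real studies would of course use established statistical software rather than this sketch.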

  9. Methodological adequacy of articles published in two open-access Brazilian cardiology periodicals.

    PubMed

    Macedo, Cristiane Rufino; Silva, Davi Leite da; Puga, Maria Eduarda

    2010-01-01

    The use of rigorous scientific methods has contributed towards developing scientific articles of excellent methodological quality. This has made it possible to promote their citation and increase the impact factor. Brazilian periodicals have had to adapt to certain quality standards demanded by these indexing organizations, such as the content and the number of original articles published in each issue. This study aimed to evaluate the methodological adequacy of two Brazilian periodicals within the field of cardiology that are indexed in several databases and freely accessible through the Scientific Electronic Library Online (SciELO), and which are now indexed by the Web of Science (Institute for Scientific Information, ISI). Descriptive study at Brazilian Cochrane Center. All the published articles were evaluated according to merit assessment (content) and form assessment (performance). Ninety-six percent of the articles analyzed presented study designs that were adequate for answering the objectives. These two Brazilian periodicals within the field of cardiology published methodologically adequate articles, since they followed the quality standards. Thus, these periodicals can be considered both for consultation and as vehicles for publishing future articles. For further analyses, it is essential to apply other indicators of scientific activity such as bibliometrics, which evaluates quantitative aspects of the production, dissemination and use of information, and scientometrics, which is also concerned with the development of science policies, within which it is often superimposed on bibliometrics.

  10. Research Project Evaluation-Learnings from the PATHWAYS Project Experience.

    PubMed

    Galas, Aleksander; Pilat, Aleksandra; Leonardi, Matilde; Tobiasz-Adamczyk, Beata

    2018-05-25

    Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present the project evaluation methodology developed during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (EU PATHWAYS) Project. The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes the project's key evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. A project evaluation tool was used to assess structure and resources, process, management and communication, achievements, and outcomes. The project used a mixed evaluation approach that included Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis. A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses, and revealed good coordination and communication between project partners as well as some key issues, such as the need for a shared glossary covering the areas investigated by the project, problematic issues related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

  11. Image processing in digital pathology: an opportunity to solve inter-batch variability of immunohistochemical staining

    NASA Astrophysics Data System (ADS)

    van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2017-02-01

    Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need of image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types and targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and to validate inter-batch normalization transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns.
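
Colour deconvolution itself rests on the Beer-Lambert law: pixel intensities are converted to optical densities, which mix linearly in stain concentrations and can be unmixed by inverting the stain matrix. The sketch below is a reduced two-channel, two-stain illustration with made-up stain vectors (real implementations, including the blind variant adapted in this work, operate on three channels and estimate the stain vectors from the data):

```python
import math

def optical_density(intensity, i0=255.0):
    """Beer-Lambert: transmitted intensity -> optical density."""
    return -math.log10(max(intensity, 1.0) / i0)

def unmix_two_stains(od_r, od_g, stain1, stain2):
    """Solve [od_r, od_g] = c1*stain1 + c2*stain2 for the stain
    'concentrations' c1, c2 (two channels, two stains, 2x2 inverse)."""
    a, b = stain1
    c, d = stain2
    det = a * d - b * c
    c1 = (od_r * d - od_g * c) / det
    c2 = (a * od_g - b * od_r) / det
    return c1, c2

# Made-up red/green OD components of two stain vectors:
hematoxylin = (0.65, 0.70)
dab = (0.27, 0.57)
# A pixel whose OD is a known mixture, 1.0*H + 0.5*DAB:
od = (1.0 * 0.65 + 0.5 * 0.27, 1.0 * 0.70 + 0.5 * 0.57)
c1, c2 = unmix_two_stains(od[0], od[1], hematoxylin, dab)
print(round(c1, 6), round(c2, 6))  # recovers 1.0 and 0.5
print(round(optical_density(100.0), 3))  # OD of a mid-grey pixel
```

Inter-batch normalisation then amounts to mapping each batch's estimated stain vectors and concentration distributions onto a common reference before recombining the channels.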

  12. Image processing in digital pathology: an opportunity to solve inter-batch variability of immunohistochemical staining

    PubMed Central

    Van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2017-01-01

    Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need of image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types and targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and to validate inter-batch normalization transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns. PMID:28220842

  13. Teaching clinical research methodology to the academic medical community: a fifteen-year retrospective of a comprehensive curriculum.

    PubMed

    Supino, Phyllis G; Borer, Jeffrey S

    2007-05-01

    Due to inadequate preparation, many medical professionals are unable to critically evaluate published research articles or properly design, execute and present their own research. To increase exposure among physicians, medical students, and allied health professionals to the diverse methodological issues involved in performing research, a comprehensive course on research methodology was designed for physicians and other members of an academic medical community and has been successfully implemented since 1991. The role of the study hypothesis is highlighted, and interactive pedagogical techniques are employed to promote audience engagement. Participants complete an annual evaluation to assess course quality and perceived outcomes. Outcomes are also assessed qualitatively by faculty. More than 500 physicians and other professionals have participated. Ratings have been consistently high. The topics deemed most valuable are investigational planning, hypothesis construction and study designs. An enhanced capacity to define hypotheses and apply methodological concepts in the criticism of scientific papers and the development of protocols and manuscripts has been observed. Participants and faculty believe the course improves critical appraisal skills and the ability to conduct research. Our experience shows it is feasible to accomplish these objectives, with a high level of satisfaction, through a didactic program targeted to the general academic community.

  14. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
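
The fatigue crack growth application can be caricatured in a few lines: integrate the Paris law in closed form and propagate an assumed (lognormal) uncertainty on the Paris coefficient through Monte Carlo sampling to a failure probability. All parameter values below are invented for illustration and are not taken from the report:

```python
import math, random

def cycles_to_failure(c, m, a0, ac, delta_sigma, geometry=1.12):
    """Closed-form integration of the Paris law da/dN = C*(dK)^m with
    dK = geometry * delta_sigma * sqrt(pi*a), valid for m != 2."""
    y = geometry * delta_sigma * math.sqrt(math.pi)
    e = 1.0 - m / 2.0
    return (ac ** e - a0 ** e) / (c * (y ** m) * e)

def failure_probability(n_service, trials=20000, seed=42):
    """Monte Carlo estimate of P(fatigue life < n_service) when the
    Paris coefficient C is lognormally distributed -- the PFA idea of
    propagating parameter uncertainty into a failure probability."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        c = math.exp(rng.gauss(math.log(1e-12), 0.3))  # uncertain C
        life = cycles_to_failure(c, m=3.0, a0=0.001, ac=0.02,
                                 delta_sigma=120.0)
        if life < n_service:
            fails += 1
    return fails / trials

# Illustrative: 1 mm initial / 20 mm critical crack, 120 MPa stress range
print(failure_probability(n_service=2.0e6))
```

In the full PFA structure this sampled failure-probability distribution would then be updated with whatever test and flight experience is available, rather than used directly.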

  15. De-individualized psychophysiological strain assessment during a flight simulation test—Validation of a space methodology

    NASA Astrophysics Data System (ADS)

    Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen

    For the evaluation of an operator's skill, reliability indicators of work quality as well as psychophysiological states during work have to be considered. The methodology and measurement equipment presented here were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. In this study, however, the method was applied to a comparable terrestrial task: the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV), which is sensitive to the various individual reaction patterns of the autonomic nervous system to mental load. Changes and increases in this vector are interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT); on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method may open up a wide range of future applications in aviation and space psychology.

  16. H-infinity based integrated flight-propulsion control design for a STOVL aircraft in transition flight

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane L.; Bright, Michelle M.; Ouzts, Peter J.

    1990-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight/propulsion control (IFPC) system design for a supersonic Short Take-Off and Vertical Landing (STOVL) fighter aircraft in transition flight. The overall design methodology consists of a centralized IFPC controller design with controller partitioning. Only the feedback controller design portion of the methodology is addressed. Design and evaluation vehicle models are summarized, and insight is provided into formulating the H-infinity control problem such that it reflects the IFPC design objectives. The H-infinity controller is shown to provide decoupled command tracking for the design model. The controller order could be significantly reduced by modal residualization of the fast controller modes without any deterioration in performance. A discussion is presented of the areas in which the controller performance needs to be improved, and ways in which these improvements can be achieved within the framework of an H-infinity based linear control design.

  17. Statistical optimization of process parameters for lipase-catalyzed synthesis of triethanolamine-based esterquats using response surface methodology in 2-liter bioreactor.

    PubMed

    Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat

    2013-01-01

    Lipase-catalyzed production of a triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, namely enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the conditions designed by the Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate in predicting the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole proved adequate for the design and optimization of the enzymatic process.
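    The response-surface step can be sketched in miniature: a second-order polynomial fitted by least squares to data at two-factor central composite design points, then solved for its stationary point. The factors, the synthetic response, and all values below are assumptions for illustration, not the study's actual five-variable design or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-factor central composite design in coded units: factorial points,
# axial points at +/- sqrt(2), and replicated center points.
ax = np.sqrt(2)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-ax, 0], [ax, 0], [0, -ax], [0, ax],
                   [0, 0], [0, 0], [0, 0]])
x1, x2 = design[:, 0], design[:, 1]

# Synthetic "measured conversion" with a known optimum near (0.3, -0.2).
y = 80 - 5 * (x1 - 0.3) ** 2 - 3 * (x2 + 0.2) ** 2 + rng.normal(0, 0.5, len(design))

# Fit the full second-order model by ordinary least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface: solve grad(y_hat) = b + B x = 0.
b = beta[1:3]
B = np.array([[2 * beta[3], beta[5]],
              [beta[5], 2 * beta[4]]])
x_opt = np.linalg.solve(B, -b)
print("fitted optimum (coded units):", np.round(x_opt, 2))
```

A dedicated package such as Design Expert additionally handles randomization, ANOVA, and diagnostics; the sketch shows only the model-fitting and optimization core.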

  18. An eco-balance of a recycling plant for spent lead-acid batteries.

    PubMed

    Salomone, Roberta; Mondello, Fabio; Lanuzza, Francesco; Micali, Giuseppe

    2005-02-01

    This study applies Life Cycle Assessment (LCA) methodology to present an eco-balance of a recycling plant that treats spent lead-acid batteries. The recycling plant uses pyrometallurgical treatment to obtain lead from spent batteries. The application of LCA methodology (ISO 14040 series) enabled us to assess the potential environmental impacts arising from the recycling plant's operations. Net emissions of greenhouse gases as well as other major environmental consequences were examined, and hot spots inside the recycling plant were identified. A sensitivity analysis was also performed on certain variables to evaluate their effect on the LCA study. The LCA presented here shows that the methodology allows all of the major environmental consequences associated with lead recycling by the pyrometallurgical process to be examined. The study highlights areas in which environmental improvements are easily achievable by a business, providing a basis for suggestions to minimize the environmental impact of its production phases and to improve process and company performance in environmental terms.

  19. Evaluation Methodology. The Evaluation Exchange. Volume 11, Number 2, Summer 2005

    ERIC Educational Resources Information Center

    Coffman, Julia, Ed.

    2005-01-01

    This is the third issue of "The Evaluation Exchange" devoted entirely to the theme of methodology, though every issue tries to identify new methodological choices, the instructive ways in which people have applied or combined different methods, and emerging methodological trends. For example, lately "theories of change" have gained almost…

  20. Counter unmanned aerial system testing and evaluation methodology

    NASA Astrophysics Data System (ADS)

    Kouhestani, C.; Woo, B.; Birch, G.

    2017-05-01

    Unmanned aerial systems (UAS) are increasing in flight times, ease of use, and payload sizes. Detection, classification, tracking, and neutralization of UAS are necessary capabilities for infrastructure and facility protection. We discuss test and evaluation methodology developed at Sandia National Laboratories to establish a consistent, defendable, and unbiased means for evaluating counter unmanned aerial system (CUAS) technologies. The test approach described identifies test strategies, performance metrics, UAS types tested, key variables, and the data analysis necessary to accurately quantify the capabilities of CUAS technologies. Tests conducted under this approach allow the determination of quantifiable limitations, strengths, and weaknesses in terms of detection, tracking, classification, and neutralization. Communicating the results of such testing informs decisions by government sponsors and stakeholders that can be used to guide future investments and to inform the procurement, deployment, and advancement of such systems into their specific venues.
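    In the simplest case, the detection-related performance metrics mentioned here reduce to a detection probability over scripted sorties and a false-alarm rate over clutter-only periods. The trial log below is entirely invented (it is not Sandia data) and only illustrates the arithmetic:

```python
# Hypothetical trial log for a counter-UAS sensor: whether each scripted
# UAS sortie was detected, plus spurious alerts during clutter-only time.
sorties_detected = [True, True, False, True, True, True, False, True]
false_alerts, clutter_hours = 3, 40.0

detection_prob = sum(sorties_detected) / len(sorties_detected)
false_alarm_rate = false_alerts / clutter_hours   # alerts per hour
print(f"Pd = {detection_prob:.2f}, FAR = {false_alarm_rate:.3f}/h")
# prints: Pd = 0.75, FAR = 0.075/h
```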

  1. Advances in Degradable Embolic Microspheres: A State of the Art Review

    PubMed Central

    Doucet, Jensen; Kiri, Lauren; O’Connell, Kathleen; Kehoe, Sharon; Lewandowski, Robert J.; Liu, David M.; Abraham, Robert J.; Boyd, Daniel

    2018-01-01

    Considerable efforts have been placed on the development of degradable microspheres for use in transarterial embolization indications. Using the guidance of the U.S. Food and Drug Administration (FDA) special controls document for the preclinical evaluation of vascular embolization devices, this review consolidates all relevant data pertaining to novel degradable microsphere technologies for bland embolization into a single reference. This review emphasizes intended use, chemical composition, degradative mechanisms, and pre-clinical safety, efficacy, and performance, while summarizing the key advantages and disadvantages for each degradable technology that is currently under development for transarterial embolization. This review is intended to provide an inclusive reference for clinicians that may facilitate an understanding of clinical and technical concepts related to this field of interventional radiology. For materials scientists, this review highlights innovative devices and current evaluation methodologies (i.e., preclinical models), and is designed to be instructive in the development of innovative/new technologies and evaluation methodologies. PMID:29373510

  2. Distal biceps brachii tendon repair: a systematic review of patient outcome determination using modified Coleman methodology score criteria.

    PubMed

    Nyland, John; Causey, Brandon; Wera, Jeff; Krupp, Ryan; Tate, David; Gupta, Amit

    2017-07-01

    This systematic literature review evaluated the methodological research design quality of studies that evaluated patient outcomes following distal biceps brachii tendon repair, and developed evidence-based recommendations for future patient clinical outcomes research. Following the preferred reporting items for systematic reviews and meta-analyses criteria, and using the search terms "biceps brachii", "tendon", "repair" and "outcome assessment", the CINAHL, Academic Search Premier and MEDLINE databases were searched from January 1960 to October 2015. The modified Coleman methodology score (MCMS) served as the primary outcome measure. Descriptive statistical analysis was performed for composite and component MCMS and for the frequency of patient outcome assessment methodology use. A total of 93 studies were evaluated. The overall MCMS was low (57.1 ± 14). Only 12 studies (12.9%) had prospective cohort or randomized controlled trial designs. There was a moderate relationship between publication year and MCMS (r = 0.53, P < 0.0001). Although 61 studies (65.6%) had adequate surgical descriptions, only 3 (3.2%) had well-described rehabilitation. Of 2253 subjects, only 39 (1.7%) were women. Studies published after 2008 had higher MCMS scores than studies published earlier (61.3 ± 10 versus 52.9 ± 16, P = 0.003). Although overall methodological scores have improved on average since 2008, generally low MCMS scores, retrospective designs, lack of eccentric elbow flexor or supinator strength testing, and poorly described surgical and rehabilitation protocols remain commonplace. These findings decrease clinical study validity and generalizability. Level of evidence: III.

  3. Guidelines for the Design and Conduct of Clinical Studies in Knee Articular Cartilage Repair

    PubMed Central

    Mithoefer, Kai; Saris, Daniel B.F.; Farr, Jack; Kon, Elizaveta; Zaslav, Kenneth; Cole, Brian J.; Ranstam, Jonas; Yao, Jian; Shive, Matthew; Levine, David; Dalemans, Wilfried; Brittberg, Mats

    2011-01-01

    Objective: To summarize current clinical research practice and develop methodological standards for objective scientific evaluation of knee cartilage repair procedures and products. Design: A comprehensive literature review was performed of high-level original studies providing information relevant for the design of clinical studies on articular cartilage repair in the knee. Analysis of cartilage repair publications and synopses of ongoing trials were used to identify important criteria for the design, reporting, and interpretation of studies in this field. Results: Current literature reflects the methodological limitations of the scientific evidence available for articular cartilage repair. However, clinical trial databases of ongoing trials document a trend suggesting improved study designs and clinical evaluation methodology. Based on the current scientific information and standards of clinical care, detailed methodological recommendations were developed for the statistical study design, patient recruitment, control group considerations, study endpoint definition, documentation of results, use of validated patient-reported outcome instruments, and inclusion and exclusion criteria for the design and conduct of scientifically sound cartilage repair study protocols. A consensus statement among the International Cartilage Repair Society (ICRS) and contributing authors experienced in clinical trial design and implementation was achieved. Conclusions: High-quality clinical research methodology is critical for the optimal evaluation of current and new cartilage repair technologies. In addition to generally applicable principles for orthopedic study design, specific criteria and considerations apply to cartilage repair studies. Systematic application of these criteria and considerations can facilitate study designs that are scientifically rigorous, ethical, practical, and appropriate for the question(s) being addressed in any given cartilage repair research project. 
PMID:26069574

  4. Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence

    ERIC Educational Resources Information Center

    Bianchini, Stefano

    2014-01-01

    This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructors and university performance depends on its past evaluation. I propose a new quantitative-based methodology, combining statistical distributions and…

  5. Transformation of Roles and Responsibilities of Principals in Times of Change

    ERIC Educational Resources Information Center

    Stringer, Patricia; Hourani, Rida Blaik

    2016-01-01

    Schools in Abu Dhabi are going through transformation and reform. The New School Model (2010) introduced changes to the curriculum and teaching and learning methodologies. In line with these changes, recently introduced "Principal Professional Standards" and "Performance Evaluation" documents have conceptualized new roles and…

  6. Policy options evaluation tool for managed lanes (POET-ML) users guide and methodology description : Federal Highway Administration HOV lane performance

    DOT National Transportation Integrated Search

    2008-12-01

    Users guide for a sketch planning tool for exploring policy alternatives. It is intended for an audience of transportation professionals responsible for planning, designing, funding, operating, enforcing, monitoring, and managing HOV and HOT lanes...

  7. Development and evaluation of a simplified superpave IDT testing system for implementation in mix design and control : final report, March 2008.

    DOT National Transportation Integrated Search

    2009-08-01

    Asphalt mixtures designed using modern conventional methods, whether Marshall or Superpave methodologies, fail to address the cracking performance of these mixtures. Research previously conducted at the University of Florida for the Florida Departmen...

  8. The development of a streamlined, coordinated and sustainable evaluation methodology for a diverse chronic disease management program.

    PubMed

    Berlowitz, David J; Graco, Marnie

    2010-05-01

    The Northern Alliance Hospital Admission Risk Program-Chronic Disease Management comprises 13 services delivering care to people with chronic disease and older people with complex care needs who are frequent hospital users. The aim was to develop and implement a system-wide approach to the evaluation of this existing program. The Northern Clinical Research Centre audited all existing, routinely collected administrative data within the program and then met with each service to develop service-specific outcome measures. The evaluators then developed and implemented a system-wide evaluation approach to measure performance in terms of client profile, access and entry, service efficiency, client outcomes, and hospital demand. Data are collected electronically, and more than 80% are derived from existing administrative datasets, minimising staff and client burden. Additional data include client outcomes and a health-related quality of life measure. The preliminary twelve-month data suggest that clients have the equivalent of 'fair' or 'poor' self-reported health status (n = 862), and the average health utility scores are significantly (P < 0.05) worse than population control data. These analyses reveal, for the first time, that the program is targeting appropriate clients. The methodology will enable many prospective assessments to be performed, including client outcome evaluation, service model comparisons, and cost-utility analyses. This evaluation approach demonstrates the feasibility of a highly coordinated 'whole of system' evaluation, and may ultimately contribute to the development of evidence-based policy.

  9. A novel integrated assessment methodology of urban water reuse.

    PubMed

    Listowski, A; Ngo, H H; Guo, W S; Vigneswaran, S

    2011-01-01

    Wastewater is no longer considered a waste product, and water reuse needs to play a stronger part in securing urban water supply. Although treatment technologies for water reclamation have significantly improved, the question that deserves further analysis is how the selection of a particular wastewater treatment technology relates to performance and sustainability. The proposed assessment model integrates: (i) technology, characterised by selected quantity and quality performance parameters; (ii) productivity, efficiency and reliability criteria; (iii) quantitative performance indicators; and (iv) an evaluation model. The challenges related to the hierarchy and selection of performance indicators have been resolved through a case study analysis. The goal of this study is to validate a new assessment methodology in relation to the performance of microfiltration (MF) technology, a key element of the treatment process. Specific performance data and measurements were obtained at Control and Data Acquisition Points (CP) to satisfy the input-output inventory in relation to water resources, products, material flows, energy requirements, chemicals use, etc. The performance assessment process contains analysis and the necessary linking across important parametric functions, leading to reliable outcomes and results.

  10. Global trends in the incidence and prevalence of type 2 diabetes in children and adolescents: a systematic review and evaluation of methodological approaches.

    PubMed

    Fazeli Farsani, S; van der Aa, M P; van der Vorst, M M J; Knibbe, C A J; de Boer, A

    2013-07-01

    This study aimed to systematically review what has been reported on the incidence and prevalence of type 2 diabetes in children and adolescents, to scrutinise the methodological issues observed in the included studies, and to prepare recommendations for future research and surveillance. PubMed, the Cochrane Database of Systematic Reviews, Scopus, EMBASE and Web of Science were searched from inception to February 2013. Population-based studies on the incidence and prevalence of type 2 diabetes in children and adolescents were summarised and methodologically evaluated. Owing to substantial methodological heterogeneity and considerable differences in study populations, a quantitative meta-analysis was not performed. Among 145 potentially relevant studies, 37 population-based studies met the inclusion criteria. Variations in the incidence and prevalence rates of type 2 diabetes in children and adolescents were mainly related to the age of the study population, calendar time, geographical region and ethnicity, resulting in a range of 0-330 per 100,000 person-years for incidence rates and 0-5,300 per 100,000 population for prevalence rates. Furthermore, substantial variation in methodological characteristics was observed for response rates (60-96%), ascertainment rates (53-99%), and the diagnostic tests and criteria used to diagnose type 2 diabetes. The worldwide incidence and prevalence of type 2 diabetes in children and adolescents vary substantially among countries, age categories and ethnic groups, and this can be explained by variations in population characteristics and methodological dissimilarities between studies.
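    For readers unfamiliar with the units quoted above, a worked example of the two rate calculations (all counts invented):

```python
# Incidence per 100,000 person-years and prevalence per 100,000
# population, with illustrative (not study) numbers.
new_cases, person_years = 12, 150_000
existing_cases, population = 45, 90_000

incidence = new_cases / person_years * 100_000      # per 100,000 person-years
prevalence = existing_cases / population * 100_000  # per 100,000 population
print(f"incidence = {incidence:.1f}, prevalence = {prevalence:.1f}")
# prints: incidence = 8.0, prevalence = 50.0
```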

  11. Aero-Mechanical Design Methodology for Subsonic Civil Transport High-Lift Systems

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Shaw, S. G.; VanderKam, J. C.; Brodeur, R. R.; Rudolph, P. K. C.; Kinney, D.

    2000-01-01

    In today's highly competitive and economically driven commercial aviation market, the trend is to make aircraft systems simpler and to shorten their design cycle which reduces recurring, non-recurring and operating costs. One such system is the high-lift system. A methodology has been developed which merges aerodynamic data with kinematic analysis of the trailing-edge flap mechanism with minimum mechanism definition required. This methodology provides quick and accurate aerodynamic performance prediction for a given flap deployment mechanism early on in the high-lift system preliminary design stage. Sample analysis results for four different deployment mechanisms are presented as well as descriptions of the aerodynamic and mechanism data required for evaluation. Extensions to interactive design capabilities are also discussed.

  12. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects]

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle-era unmanned spacecraft projects and the development of a commonality evaluation model are documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  13. The Cylindrical Component Methodology Evaluation Module for MUVES-S2

    DTIC Science & Technology

    2017-04-01

    ARL-TR-7990, April 2017, US Army Research Laboratory. The Cylindrical Component Methodology Evaluation Module for MUVES-S2, by David S Butler, Marianne Kunkel, and Brian G Smith.

  14. Detection of explosives on the surface of banknotes by Raman hyperspectral imaging and independent component analysis.

    PubMed

    Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J

    2015-02-20

    The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After acquisition of the hyperspectral image, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with the reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve-resolution method, achieving performance equivalent to multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻².
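    The ICA unmixing step can be sketched as follows, using synthetic Gaussian bands as stand-ins for pure Raman spectra and scikit-learn's FastICA (the study's actual spectra, instrument, and software are not reproduced; evaluation by correlation against references follows the abstract's description):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Synthetic stand-ins for pure spectra: two bands on a 500-channel axis.
wn = np.linspace(0, 1, 500)
s1 = np.exp(-((wn - 0.3) / 0.02) ** 2)   # "explosive residue" band
s2 = np.exp(-((wn - 0.7) / 0.05) ** 2)   # "banknote substrate" band
S = np.vstack([s1, s2])                  # (2, 500) reference spectra

# Each hyperspectral pixel is a noisy mixture of the pure spectra.
A = rng.uniform(0.1, 1.0, size=(200, 2))            # abundance matrix
X = A @ S + rng.normal(0, 0.01, size=(200, 500))    # 200 pixels x 500 channels

# Treat wavelength channels as samples so the recovered independent
# components are spectra: fit on X.T of shape (channels, pixels).
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
S_est = ica.fit_transform(X.T).T                    # (2, 500) recovered spectra

# Evaluate recovery by absolute correlation with the references
# (ICA recovers sources only up to sign and scale).
corr = np.abs(np.corrcoef(np.vstack([S, S_est]))[:2, 2:])
print("best |correlation| per reference:", np.round(corr.max(axis=1), 3))
```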

  15. Evaluation of the measurement properties of self-reported health-related work-functioning instruments among workers with common mental disorders.

    PubMed

    Abma, Femke I; van der Klink, Jac J L; Terwee, Caroline B; Amick, Benjamin C; Bültmann, Ute

    2012-01-01

    During the past decade, common mental disorders (CMD) have emerged as a major public and occupational health problem in many countries. Several instruments have been developed to measure the influence of health on functioning at work. To select appropriate instruments for use in occupational health practice and research, their measurement properties (e.g., reliability, validity, responsiveness) must be evaluated. The objective of this study is to critically appraise and compare the measurement properties of self-reported health-related work-functioning instruments among workers with CMD. A systematic review was performed searching three electronic databases. Papers were included that: (i) mainly focused on the development and/or evaluation of the measurement properties of a self-reported health-related work-functioning instrument; (ii) were conducted in a CMD population; and (iii) were full-text original papers. Quality appraisal was performed using the consensus-based standards for the selection of health status measurement instruments (COSMIN) checklist. Five papers evaluating the measurement properties of five self-reported health-related work-functioning instruments in CMD populations were included. Little evidence is available for the measurement properties of the identified instruments in this population, mainly due to the low methodological quality of the included studies. The available evidence on measurement properties is based on studies of poor-to-fair methodological quality, and information on a number of measurement properties, such as measurement error, content validity, and cross-cultural validity, is still lacking. Therefore, no evidence-based decisions and recommendations can be made for the use of health-related work-functioning instruments. Studies of high methodological quality are needed to properly assess the existing instruments' measurement properties.

  16. Evaluation of Structural Robustness against Column Loss: Methodology and Application to RC Frame Buildings.

    PubMed

    Bao, Yihai; Main, Joseph A; Noh, Sam-Young

    2017-08-01

    A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness.
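    The proposed robustness metric (the minimum, over all column-removal scenarios, of ultimate capacity under sudden column loss normalized by the applicable service-level gravity load) can be computed directly once per-scenario results are in hand. All capacities and loads below are invented for illustration:

```python
# Hypothetical per-scenario results: ultimate capacity of the structure
# under sudden column loss, and the service-level gravity load applicable
# to that scenario (values illustrative only, arbitrary consistent units).
scenarios = {
    "corner column":   {"ultimate_capacity": 8.2, "service_load": 4.1},
    "edge column":     {"ultimate_capacity": 9.6, "service_load": 4.8},
    "interior column": {"ultimate_capacity": 7.0, "service_load": 5.0},
}

# Normalize each ultimate capacity by the service-level load, then take
# the minimum over all column-removal scenarios.
normalized = {name: s["ultimate_capacity"] / s["service_load"]
              for name, s in scenarios.items()}
robustness = min(normalized.values())
critical = min(normalized, key=normalized.get)   # governing scenario
print(f"robustness index = {robustness:.2f} (governed by {critical})")
# prints: robustness index = 1.40 (governed by interior column)
```

A value above 1.0 indicates that every damaged configuration retains capacity beyond its service loading; the governing scenario identifies where that margin is thinnest.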

  17. Application of the Hardman methodology to the Army Remotely Piloted Vehicle (RPV)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The application of the HARDMAN Methodology to the Remotely Piloted Vehicle (RPV) is described. The methodology was used to analyze the manpower, personnel, and training (MPT) requirements of the proposed RPV system design for a number of operating scenarios. The RPV system is defined as consisting of the equipment, personnel, and operational procedures needed to perform five basic artillery missions: reconnaissance, target acquisition, artillery adjustment, target designation and damage assessment. The RPV design evaluated includes an air vehicle (AV), a modular integrated communications and navigation system (MICNS), a ground control station (GCS), a launch subsystem (LS), a recovery subsystem (RS), and a number of ground support requirements. The HARDMAN Methodology is an integrated set of data base management techniques and analytic tools, designed to provide timely and fully documented assessments of the human resource requirements associated with an emerging system's design.

  18. Development and application of a safety assessment methodology for waste disposals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, R.H.; Torres, C.; Schaller, K.H.

    1996-12-31

    As part of a European Commission funded research programme, QuantiSci (formerly the Environmental Division of Intera Information Technologies) and the Instituto de Medio Ambiente of the Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (IMA/CIEMAT) have developed and applied a comprehensive, yet practicable, assessment methodology for post-disposal safety assessment of land-based disposal facilities. This Safety Assessment Comparison (SACO) Methodology employs a systematic approach to the collection, evaluation and use of waste and disposal system data. It can be used to assess engineered barrier performance, the attenuating properties of host geological formations, and the long term impacts of a facility on the environment and human health, as well as allowing the comparison of different disposal options for radioactive, mixed and non-radioactive wastes. This paper describes the development of the methodology and illustrates its use.

  19. Structural Loads Analysis for Wave Energy Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    2017-06-03

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced-order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.

  20. VASSAR: Value assessment of system architectures using rules

    NASA Astrophysics Data System (ADS)

    Selva, D.; Crawley, E. F.

    A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best suited to evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of quantitative and semi-qualitative data, and of objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment of System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern matching process where capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
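The capability-to-requirement matching at the heart of this idea can be illustrated with a toy scoring function; this is a simplified sketch, not the paper's rule-based expert system, and the requirement names and weights are hypothetical:

```python
# Merit as the weighted fraction of stakeholder requirements
# matched by the architecture's capabilities (illustrative only).
def architecture_merit(capabilities, requirements):
    """requirements: {name: weight}; capabilities: set of satisfied names."""
    total = sum(requirements.values())
    met = sum(w for name, w in requirements.items() if name in capabilities)
    return met / total if total else 0.0

# Hypothetical Earth-observation stakeholder requirements.
reqs = {"global-coverage": 3.0, "daily-revisit": 2.0, "low-cost-launch": 1.0}
arch_a = {"global-coverage", "daily-revisit"}   # misses low-cost-launch
merit = architecture_merit(arch_a, reqs)        # 5/6 of weighted needs met
```

A rule-based system generalizes this by letting each "match" be a rule firing on facts about the architecture, including partial and qualitative satisfaction, rather than simple set membership.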

  1. A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals.

    PubMed

    Tsiouris, Κostas Μ; Pezoulas, Vasileios C; Zervakis, Michalis; Konitsiotis, Spiros; Koutsouris, Dimitrios D; Fotiadis, Dimitrios I

    2018-05-17

    The electroencephalogram (EEG) is the most prominent means to study epilepsy and capture changes in electrical brain activity that could declare an imminent seizure. In this work, Long Short-Term Memory (LSTM) networks are introduced for epileptic seizure prediction using EEG signals, expanding the use of deep learning algorithms beyond convolutional neural networks (CNN). A pre-analysis is initially performed to find the optimal architecture of the LSTM network by testing several modules and layers of memory units. Based on these results, a two-layer LSTM network is selected to evaluate seizure prediction performance using four different lengths of preictal windows, ranging from 15 min to 2 h. The LSTM model exploits a wide range of features extracted prior to classification, including time and frequency domain features, cross-correlation between EEG channels, and graph-theoretic features. The evaluation, performed using long-term EEG recordings from the open CHB-MIT Scalp EEG database, suggests that the proposed methodology is able to predict all 185 seizures, providing high seizure prediction sensitivity and low false prediction rates (FPR) of 0.11-0.02 false alarms per hour, depending on the duration of the preictal window. The proposed LSTM-based methodology delivers a significant increase in seizure prediction performance compared to both traditional machine learning techniques and convolutional neural networks previously evaluated in the literature.
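One of the feature families mentioned, cross-correlation between EEG channels, can be sketched as a peak normalized cross-correlation; this is a minimal illustration on synthetic signals, not the study's feature pipeline:

```python
import numpy as np

# Peak normalized cross-correlation between two channels: values near 1
# indicate strong coupling at some lag, a common seizure-prediction feature.
def peak_xcorr(x, y):
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full"))

rng = np.random.default_rng(0)
a = rng.standard_normal(512)                    # synthetic "channel 1"
b = np.roll(a, 5) + 0.1 * rng.standard_normal(512)  # delayed, noisy copy
score = peak_xcorr(a, b)                        # near 1: strongly coupled
```

In a preictal-window setting, features like `score` would be computed per channel pair and window, then stacked with spectral and graph-theoretic features before classification.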

  2. Aircraft flight test trajectory control

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Walker, R. A.

    1988-01-01

    Two control law design techniques are compared and the performance of the resulting controllers evaluated. The design requirement is for a flight test trajectory controller (FTTC) capable of closed-loop, outer-loop control of an F-15 aircraft performing high-quality research flight test maneuvers. The maneuver modeling, linearization, and design methodologies utilized in this research are detailed. The results of applying these FTTCs to a nonlinear F-15 simulation are presented.

  3. Proposing a Research Methodology to Evaluate the Relation Between Training Needs Assessment and Employee Performance

    DTIC Science & Technology

    2016-06-01

    intrinsic drive is related to accomplishments such as personal interest or achievements and the satisfaction of the employer’s goals. Extrinsic...determining. Achievement of intrinsic values depends on the person’s extrinsic satisfaction , however. Thus, in order to maintain the motivational...explained that specific relationships may exist in facets of job satisfaction and job performance scopes. Satisfaction in work, promotion, and pay may lead

  4. New methodology to baseline and match AME polysilicon etcher using advanced diagnostic tools

    NASA Astrophysics Data System (ADS)

    Poppe, James; Shipman, John; Reinhardt, Barbara E.; Roussel, Myriam; Hedgecock, Raymond; Fonda, Arturo

    1999-09-01

    As process controls tighten in the semiconductor industry, the need to understand the variables that determine system performance becomes more important. For plasma etch systems, process success depends on the control of key parameters such as: vacuum integrity, pressure, gas flows, and RF power. It is imperative to baseline, monitor, and control these variables. This paper presents an overview of the methods and tools used by the Motorola BMC fabrication facility to characterize an Applied Materials polysilicon etcher. Tool performance data obtained from our traditional measurement techniques are limited in their scope and do not provide a complete picture of the ultimate tool performance. Presently the BMC traditional characterization tools provide a snapshot of the static operation of the equipment under test (EUT); however, complete evaluation of the dynamic performance cannot be monitored without the aid of specialized diagnostic equipment. To provide us with a complete system baseline evaluation of the polysilicon etcher, three diagnostic tools were utilized: Lucas Labs Vacuum Diagnostic System, Residual Gas Analyzer, and the ENI Voltage/Impedance Probe. The diagnostic methodology used to baseline and match key parameters of qualified production equipment has had an immense impact on other equipment characterization in the facility. It has resulted in reduced cycle time for new equipment introduction as well.
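Baselining a monitored etch parameter and checking later readings against it can be sketched with a simple 3-sigma control-limit rule; the data below are illustrative, not the facility's measurements:

```python
import statistics

# Derive control limits for a key parameter (e.g., chamber pressure)
# from a qualification/baseline run: mean +/- k * standard deviation.
def control_limits(samples, k=3.0):
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu - k * sigma, mu + k * sigma

baseline = [100.1, 99.8, 100.0, 100.3, 99.9, 100.2, 100.0, 99.7]
lcl, ucl = control_limits(baseline)
in_control = lcl <= 100.4 <= ucl   # evaluate a new reading against baseline
```

Matching a second chamber to the first then amounts to checking that its readings for each key parameter fall inside the qualified tool's baseline limits.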

  5. Thermal and optical performance of encapsulation systems for flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Minning, C. P.; Coakley, J. F.; Perrygo, C. M.; Garcia, A., III; Cuddihy, E. F.

    1981-01-01

    The electrical power output from a photovoltaic module is strongly influenced by the thermal and optical characteristics of the module encapsulation system. Described are the methodology and computer model for performing fast and accurate thermal and optical evaluations of different encapsulation systems. The computer model is used to evaluate cell temperature, solar energy transmittance through the encapsulation system, and electric power output for operation in a terrestrial environment. Extensive results are presented for both superstrate-module and substrate-module design schemes which include different types of silicon cell materials, pottants, and antireflection coatings.
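The cell-temperature and power evaluation described above can be approximated with the standard NOCT (nominal operating cell temperature) model; this is a hedged sketch using common textbook formulas, not the paper's detailed encapsulation-system model, and all parameter values are illustrative:

```python
# NOCT approximation: Tcell = Tamb + (NOCT - 20)/800 * G, with G in W/m^2.
def cell_temperature(t_ambient, irradiance, noct=45.0):
    return t_ambient + (noct - 20.0) / 800.0 * irradiance

# Derate rated (STC) power by a temperature coefficient gamma (per deg C).
def power_output(p_stc, t_cell, gamma=-0.004):
    return p_stc * (1.0 + gamma * (t_cell - 25.0))

tc = cell_temperature(25.0, 800.0)   # 50.0 deg C at 800 W/m^2, 25 C ambient
p = power_output(100.0, tc)          # 90.0 W from a 100 W (STC) module
```

The paper's model refines this picture by resolving the optical transmittance and thermal resistance of each encapsulation layer, which is what shifts both the effective irradiance and the cell temperature between candidate designs.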

  6. Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.

    2005-09-01

    Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km2.
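The underlying SCS-CN relation can be sketched as follows; this is the standard form only, and the modified Mishra-Singh model adds antecedent-moisture terms not shown in this minimal version:

```python
# Standard SCS-CN direct runoff:
#   Q = (P - Ia)^2 / (P - Ia + S),  Ia = lam * S,  S = 25400/CN - 254 (mm)
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct surface runoff depth (mm) for rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0       # potential maximum retention, mm
    ia = lam * s                   # initial abstraction
    if p_mm <= ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 75.0)     # runoff depth in mm for a 100 mm storm
```

Antecedent-moisture-dependent variants adjust CN (and hence S) according to the rainfall of the preceding days, which is the aspect evaluated across the 234 watersheds in this record.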

  7. First Comprehensive Evaluation of the M.I.C. Evaluator Device Compared to Etest and CLSI Broth Microdilution for MIC Testing of Aerobic Gram-Positive and Gram-Negative Bacterial Species

    PubMed Central

    Turnbull, L.; Brosnikoff, C.; Cloke, J.

    2012-01-01

    The M.I.C. Evaluator strip (Thermo Fisher Scientific, Basingstoke, United Kingdom) uses a methodology similar to that of Etest. In this first assessment of the M.I.C. Evaluator device, 409 strains of aerobic Gram-positive bacteria (staphylococci, streptococci, and enterococci) and 325 strains of Enterobacteriaceae, Pseudomonas species, and Acinetobacter species were tested by M.I.C. Evaluator strip, Etest, and broth microdilution as a reference standard. The Gram-positive bacteria included staphylococci (methicillin-resistant Staphylococcus aureus, methicillin-susceptible S. aureus, and coagulase-negative staphylococci), Streptococcus pneumoniae, beta-hemolytic streptococci and viridans group strains, vancomycin-resistant enterococci, and other enterococci. The Gram-negative bacteria included 250 strains of 60 Enterobacteriaceae species plus 50 Pseudomonas and 25 Acinetobacter species. A total of 14 antimicrobial agents (depending on the species) were included. The same methodology and reading format were used for M.I.C. Evaluator strips and Etest. Broth microdilution methodology was performed according to CLSI document M07-A8. For the clinical strains, >95% of results were within plus or minus one doubling dilution for all species. There were fewer than 5% minor errors, fewer than 3% major errors, and fewer than 1% very major errors. M.I.C. Evaluator strips and Etest often reported higher MICs than the reference broth microdilution method. The M.I.C. Evaluator strips provided results comparable to those of the predicate Etest device and are of value for the accurate testing of MICs for these important pathogens. PMID:22238441
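The within-one-doubling-dilution criterion used to compare methods ("essential agreement") can be computed directly on the log2 scale; the MIC values below are invented for illustration, not the study's data:

```python
import math

# Essential agreement: fraction of test-method MICs within +/- one
# doubling (log2) dilution of the reference-method MIC.
def essential_agreement(test_mics, ref_mics):
    hits = sum(abs(math.log2(t) - math.log2(r)) <= 1.0
               for t, r in zip(test_mics, ref_mics))
    return hits / len(ref_mics)

ref  = [0.5, 1.0, 2.0, 4.0, 8.0]    # broth microdilution MICs (mg/l)
test = [0.5, 2.0, 2.0, 16.0, 8.0]   # strip MICs; one is two dilutions high
rate = essential_agreement(test, ref)   # 4 of 5 within one dilution
```

Categorical (minor/major/very major) error rates are computed separately by mapping each MIC to susceptible/intermediate/resistant breakpoints before comparison.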

  8. Evidence and practice in spine registries

    PubMed Central

    van Hooff, Miranda L; Jacobs, Wilco C H; Willems, Paul C; Wouters, Michel W J M; de Kleuver, Marinus; Peul, Wilco C; Ostelo, Raymond W J G; Fritzell, Peter

    2015-01-01

    Background and purpose We performed a systematic review and a survey in order to (1) evaluate the evidence for the impact of spine registries on the quality of spine care, and with that, on patient-related outcomes, and (2) evaluate the methodology used to organize, analyze, and report the “quality of spine care” from spine registries. Methods To study the impact, the literature on all spinal disorders was searched. To study methodology, the search was restricted to degenerative spinal disorders. The risk of bias in the studies included was assessed with the Newcastle-Ottawa scale. Additionally, a survey among registry representatives was performed to acquire information about the methodology and practice of existing registries. Results 4,273 unique references up to May 2014 were identified, and 1,210 were eligible for screening and assessment. No studies on impact were identified, but 34 studies were identified to study the methodology. Half of these studies (17 of the 34) were judged to have a high risk of bias. The survey identified 25 spine registries, representing 14 countries. The organization of these registries, methods used, analytical approaches, and dissemination of results are presented. Interpretation We found a lack of evidence that registries have had an impact on the quality of spine care, regardless of whether intervention was non-surgical and/or surgical. To improve the quality of evidence published with registry data, we present several recommendations. Application of these recommendations could lead to registries showing trends, monitoring the quality of spine care given, and ultimately improving the value of the care given to patients with degenerative spinal disorders. PMID:25909475

  9. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    PubMed

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in six general and internal medicine journals and the top 10 EM journals. The Cochrane Collaboration risk-of-bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by two independent researchers. From 1394 RCTs screened, 68 trials assessed a SBME intervention. Cardiopulmonary resuscitation (CPR) was the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66 and 49% of trials. Blinding of participants and assessors was performed correctly in 19 and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18. Only 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  10. Diversity in livestock resources in pastoral systems in Africa.

    PubMed

    Kaufmann, B A; Lelea, M A; Hulsebusch, C G

    2016-11-01

    Pastoral systems are important producers and repositories of livestock diversity. Pastoralists use variability in their livestock resources to manage high levels of environmental variability in economically advantageous ways. In pastoral systems, human-animal-environment interactions are the basis of production and the key to higher productivity and efficiency. In other words, pastoralists manage a production system that exploits variability and keeps production costs low. When differentiating, characterising and evaluating pastoral breeds, this context-specific, functional dimension of diversity in livestock resources needs to be considered. The interaction of animals with their environment is determined not only by morphological and physiological traits but also by experience and socially learned behaviour. This high proportion of non-genetic components determining the performance of livestock means that current models for analysing livestock diversity and performance, which are based on genetic inheritance, have limited ability to describe pastoral performance. There is a need for methodological innovations to evaluate pastoral breeds and animals, since comparisons based on performance 'under optimal conditions' are irrelevant within this production system. Such innovations must acknowledge that livestock or breed performance is governed by complex human-animal-environment interactions, and varies through time and space due to the mobile and seasonal nature of the pastoral system. Pastoralists' breeding concepts and selection strategies seem to be geared towards improving their animals' capability to exploit variability, by - among other things - enhancing within-breed diversity. In-depth studies of these concepts and strategies could contribute considerably towards developing methodological innovations for the characterisation and evaluation of pastoral livestock resources.

  11. A framework for assessing the adequacy and effectiveness of software development methodologies

    NASA Technical Reports Server (NTRS)

    Arthur, James D.; Nance, Richard E.

    1990-01-01

    Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.
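The linkage structure the procedure relies on (objectives linked to principles, with attribute evidence) can be illustrated with a toy propagation of support scores; all names and numbers here are hypothetical, not taken from the paper:

```python
# Hypothetical linkage: each objective is supported by principles, and
# each principle's presence is scored from observed product attributes.
links = {
    "maintainability": ["information hiding", "stepwise refinement"],
    "correctness": ["formal specification"],
}
principle_support = {          # 0.0 (absent) .. 1.0 (fully evidenced)
    "information hiding": 1.0,
    "stepwise refinement": 0.5,
    "formal specification": 0.0,
}

def objective_score(objective):
    """Average support over the principles linked to an objective."""
    ps = links[objective]
    return sum(principle_support[p] for p in ps) / len(ps)

score = objective_score("maintainability")   # partial support: 0.75
```

The paper's contribution is precisely in making such scores less subjective, by tying principle support to measurable product-quality indicators rather than reviewer judgment.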

  12. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
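The source/channel tradeoff under a fixed transmission budget can be illustrated with a toy operational search: spend more bits on the source and less on protection, or vice versa, and keep the split with the lowest end-to-end distortion. The distortion and channel-error models below are simplified stand-ins, not the paper's wavelet/RCPC system:

```python
# Toy end-to-end distortion for a source-rate / protection split.
def end_to_end_distortion(source_rate, p_loss_fn, total_rate):
    code_rate = source_rate / total_rate     # higher => weaker protection
    p_err = p_loss_fn(code_rate)             # residual decoding-error prob.
    d_quant = 2.0 ** (-2.0 * source_rate)    # idealized quantizer D(R) curve
    d_channel = 1.0                          # distortion if block corrupted
    return (1 - p_err) * d_quant + p_err * d_channel

def best_split(total_rate, p_loss_fn, candidates):
    return min(candidates,
               key=lambda r: end_to_end_distortion(r, p_loss_fn, total_rate))

p_loss = lambda cr: 0.5 * cr ** 4            # toy channel error model
best = best_split(8.0, p_loss, [2.0, 4.0, 6.0, 8.0])   # optimal source rate
```

The operational methodology in the paper does the same search over real (source rate, RCPC code rate) pairs, with distortion measured on actual images and error rates measured on the AWGN channel.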

  13. Failure detection system design methodology. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.

    1980-01-01

    The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
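The parity-space idea behind residual generation can be sketched for the simplest case of direct sensor redundancy; this is a minimal illustration, not the thesis's generalized parity space:

```python
import numpy as np

# Three sensors measure the same scalar quantity, so the measurement map is
# C = [1, 1, 1]^T. Any matrix V with V @ C = 0 spans the parity space:
# residuals V @ y are insensitive to the true state but flag sensor faults.
C = np.ones((3, 1))
V = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])    # rows orthogonal to C

def residuals(y):
    return V @ y                    # near zero when all sensors agree

y_ok = np.array([5.0, 5.0, 5.0])
y_fault = np.array([5.0, 5.0, 7.0])  # bias fault on sensor 3
r_ok = residuals(y_ok)               # ~ [0, 0]
r_fault = residuals(y_fault)         # nonzero entry implicates sensor 3
```

The generalized parity space extends this to temporal redundancy over a window of inputs and outputs of a dynamic system, and robust design then picks parity vectors least sensitive to model error and noise before the residuals feed the Bayesian decision rule.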

  14. The effect of music on cognitive performance: insight from neurobiological and animal studies.

    PubMed

    Rickard, Nikki S; Toukhsati, Samia R; Field, Simone E

    2005-12-01

    The past 50 years have seen numerous claims that music exposure enhances human cognitive performance. Critical evaluation of studies across a variety of contexts, however, reveals important methodological weaknesses. The current article argues that an interdisciplinary approach is required to advance this research. A case is made for the use of appropriate animal models to avoid many confounds associated with human music research. Although such research has validity limitations for humans, reductionist methodology enables a more controlled exploration of music's elementary effects. This article also explores candidate mechanisms for this putative effect. A review of neurobiological evidence from human and comparative animal studies confirms that musical stimuli modify autonomic and neurochemical arousal indices, and may also modify synaptic plasticity. It is proposed that understanding how music affects animals provides a valuable conjunct to human research and may be vital in uncovering how music might be used to enhance cognitive performance.

  15. Sandia National Laboratories performance assessment methodology for long-term environmental programs : the history of nuclear waste management.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marietta, Melvin Gary; Anderson, D. Richard; Bonano, Evaristo J.

    2011-11-01

    Sandia National Laboratories (SNL) is the world leader in the development of the detailed science underpinning the application of a probabilistic risk assessment methodology, referred to in this report as performance assessment (PA), for (1) understanding and forecasting the long-term behavior of a radioactive waste disposal system, (2) estimating the ability of the disposal system and its various components to isolate the waste, (3) developing regulations, (4) implementing programs to estimate the safety that the system can afford to individuals and to the environment, and (5) demonstrating compliance with the attendant regulatory requirements. This report documents the evolution of the SNL PA methodology from inception in the mid-1970s, summarizing major SNL PA applications including: the Subseabed Disposal Project PAs for high-level radioactive waste; the Waste Isolation Pilot Plant PAs for disposal of defense transuranic waste; the Yucca Mountain Project total system PAs for deep geologic disposal of spent nuclear fuel and high-level radioactive waste; PAs for the Greater Confinement Borehole Disposal boreholes at the Nevada National Security Site; and PA evaluations for disposal of high-level wastes and Department of Energy spent nuclear fuels stored at Idaho National Laboratory. In addition, the report summarizes smaller PA programs for long-term cover systems implemented for the Monticello, Utah, mill-tailings repository; a PA for the SNL Mixed Waste Landfill in support of environmental restoration; PA support for radioactive waste management efforts in Egypt, Iraq, and Taiwan; and, most recently, PAs for analysis of alternative high-level radioactive waste disposal strategies, including deep borehole disposal and geologic repositories in shale and granite. Finally, this report summarizes the extension of the PA methodology for radioactive waste disposal toward development of an enhanced PA system for carbon sequestration and storage systems. These efforts have produced a generic PA methodology for the evaluation of waste management systems that has gained wide acceptance within the international community. This report documents how this methodology has been used as an effective management tool to evaluate different disposal designs and sites; inform development of regulatory requirements; identify, prioritize, and guide research aimed at reducing uncertainties for objective estimations of risk; and support safety assessments.

  16. Life support technology investment strategies for flight programs: An application of decision analysis

    NASA Technical Reports Server (NTRS)

    Schlater, Nelson J.; Simonds, Charles H.; Ballin, Mark G.

    1993-01-01

    Applied research and technology development (R&TD) is often characterized by uncertainty, risk, and significant delays before tangible returns are obtained. Given the increased awareness of limitations in resources, effective R&TD today needs a method for up-front assessment of competing technologies to help guide technology investment decisions. Such an assessment approach must account for uncertainties in system performance parameters, mission requirements and architectures, and internal and external events influencing a development program. The methodology known as decision analysis has the potential to address these issues. It was evaluated by performing a case study assessment of alternative carbon dioxide removal technologies for NASA's proposed First Lunar Outpost program. An approach was developed that accounts for the uncertainties in each technology's cost and performance parameters as well as programmatic uncertainties such as mission architecture. Life cycle cost savings relative to a baseline, adjusted for the cost of money, was used as a figure of merit to evaluate each of the alternative carbon dioxide removal technology candidates. The methodology was found to provide a consistent decision-making strategy for the development of new life support technology. The case study results provided insight that was not possible from more traditional analysis approaches.
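The decision-analysis approach of propagating cost and performance uncertainty into a single figure of merit can be sketched with a small Monte Carlo estimate of expected life-cycle cost savings; all distributions and figures below are invented for illustration:

```python
import random

# Expected life-cycle cost savings of a candidate technology under
# uncertain development cost, operational savings, and programmatic risk.
def expected_savings(n=100_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        dev_cost = rng.uniform(20.0, 40.0)    # $M, uncertain R&TD cost
        ops_saving = rng.gauss(60.0, 10.0)    # $M saved over mission life
        success = rng.random() < 0.8          # development succeeds?
        total += (ops_saving - dev_cost) if success else -dev_cost
    return total / n

ev = expected_savings()   # positive expected value => candidate worth funding
```

A fuller decision analysis would also discount cash flows for the cost of money and branch on mission-architecture scenarios, as the case study does, rather than using a single success probability.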

  18. Methodological evaluation and comparison of five urinary albumin measurements.

    PubMed

    Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie

    2011-01-01

    Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and other conditions. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures or reference materials for urinary albumin. Nephelometry, turbidimetry, the colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were evaluated methodologically on the basis of imprecision, recovery rate, linearity, haemoglobin interference, and a verified reference interval. We then tested 40 urine samples from diabetic patients by each method and compared the results between assays. The results indicate that nephelometry has the best analytical performance among the five methods, with an average intra-assay coefficient of variation (CV) of 2.6%, an average inter-assay CV of 1.7%, a mean recovery of 99.6%, linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays ranged from 0.701 to 0.982, and the Bland-Altman plots indicated that each assay provided significantly different results from the others. Nephelometry is the clinical urinary albumin method with the best analytical performance in our study.
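Two of the evaluation metrics used, intra-assay CV and recovery, can be computed as follows; the replicate and spike values are invented for illustration, not the study's measurements:

```python
import statistics

# Coefficient of variation of replicate measurements of one sample (%).
def cv_percent(replicates):
    return 100.0 * statistics.stdev(replicates) / statistics.fmean(replicates)

# Recovery of a known spike added to a base sample (%).
def recovery_percent(measured_spiked, measured_base, spike_added):
    return 100.0 * (measured_spiked - measured_base) / spike_added

reps = [20.1, 19.8, 20.3, 20.0, 19.9]      # mg/l, same sample re-assayed
cv = cv_percent(reps)                      # intra-assay imprecision, ~1%
rec = recovery_percent(45.2, 20.0, 25.0)   # 25 mg/l spike recovered at 100.8%
```

Inter-assay CV is computed the same way but across runs on different days, and Bland-Altman analysis then plots pairwise differences against means to compare whole methods rather than single samples.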

  19. Flexible thermal protection materials for entry systems

    NASA Astrophysics Data System (ADS)

    Kourtides, Demetrius A.

    1993-02-01

    Current programs addressed in the aeroassist flight experiment are: (1) evaluation of thermal performance of advanced rigid and flexible insulations and reflective coating; (2) investigation of lighter than baseline materials; (3) investigation of rigid insulations which perform well; (4) study of flexible insulations which require ceramic coating; and (5) study of reflective coating effective at greater than 15 percent. In the National Aerospace Plane (NASP) program, the areas addressed are: (1) high and low temperature insulations; and (2) attachment/standoff methodology, which critically affects thermal performance.

  1. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring signal-to-noise ratio (SNR), a slit image for measuring MTF, and a uniform (white) image for measuring NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results showed that the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by post-processing had higher DQE than the MUSICA=0 image, suggesting that MUSICA post-processing parameters affect the image quality evaluation. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality, so that evaluations can be compared consistently. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
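Once MTF and a normalized NPS have been measured, DQE follows from the standard relation DQE(f) = MTF(f)^2 / (q · NNPS(f)), with q the incident photon fluence for the beam quality. A minimal sketch with toy curves is below; all values are illustrative, not measured data:

```python
import numpy as np

# DQE from measured MTF and normalized NPS (NNPS), IEC 62220-1 style.
def dqe(mtf, nnps, q):
    """q: incident photon fluence (photons/mm^2) for the beam quality."""
    return mtf ** 2 / (q * nnps)

f = np.linspace(0.0, 2.5, 6)      # spatial frequency, cycles/mm
mtf = np.exp(-0.8 * f)            # toy MTF falling with frequency
nnps = np.full_like(f, 5e-6)      # toy flat normalized NPS (mm^2)
q = 2.5e5                         # toy fluence for an RQA5-like beam
d = dqe(mtf, nnps, q)             # DQE(0) = 0.8, falling with frequency
```

Because post-processing such as MUSICA reshapes both MTF and NPS, a processing-dependent gain can cancel in d only if both are measured on identically processed images, which is why the study argues the processing parameters must be reported alongside the DQE.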

  2. Research methodology workshops evaluation using the Kirkpatrick's model: translating theory into practice.

    PubMed

    Abdulghani, Hamza Mohammad; Shaik, Shaffi Ahamed; Khamis, Nehal; Al-Drees, Abdulmajeed Abdulrahman; Irshad, Mohammad; Khalil, Mahmoud Salah; Alhaqwi, Ali Ibrahim; Isnani, Arthur

    2014-04-01

    Qualitative and quantitative evaluation of academic programs can enhance the development, effectiveness, and dissemination of comparative quality reports as well as quality improvement efforts. The aim was to evaluate five research methodology workshops by assessing participants' satisfaction, knowledge and skills gain, and impact on practice using the Kirkpatrick evaluation model. The four-level Kirkpatrick model was applied for the evaluation. Training feedback questionnaires, pre- and post-tests, learner development plan reports, and behavioral surveys were used to evaluate the effectiveness of the workshop programs. Of the 116 participants, 28 (24.1%) liked the programs with appreciation, 62 (53.4%) liked them with suggestions, and 26 (22.4%) disliked them. Mean scores on pre- and post-workshop MCQ tests showed a significant improvement in relevant basic knowledge and cognitive skills of 17.67% (p ≤ 0.005). Pre- and post-test scores on workshop sub-topics also improved significantly for manuscript writing (p ≤ 0.031), but not for proposal writing (p = 0.834). As for impact, 56.9% of participants started research, and 6.9% published their studies. Participants' performance revealed overall positive feedback, and 79% of participants reported transfer of training skills at their workplace. The achievement of course outcomes and the suggestions given for improvement offer insight into the program and were encouraging and very useful. Encouraging a "research culture" and work-based learning are probably the most powerful determinants of research promotion. These findings therefore encourage the faculty development unit to continue its training and development in research methodology.

  3. Loads and performance data from a wind-tunnel test of model articulated helicopter rotors with 2 different blade torsional stiffnesses

    NASA Technical Reports Server (NTRS)

    Yeager, W. T., Jr.; Mantay, W. R.

    1983-01-01

    A passive means of tailoring helicopter rotor blades to improve performance and reduce loads was evaluated. The parameters investigated were blade torsional stiffness, blade section camber, and distance between blade structural elastic axis and blade tip aerodynamic center. This offset was accomplished by sweeping the tip. The investigation was conducted at advance ratios of 0.20, 0.30, and 0.40. Data are presented without analysis; however, cross referencing of performance data and harmonic loads data may be useful to the analyst for validating aeroelastic theories and design methodologies as well as for evaluating passive aeroelastic tailoring or rotor blade parameters.

  4. Evaluation of Formal Training Programmes in Greek Organisations

    ERIC Educational Resources Information Center

    Diamantidis, Anastasios D.; Chatzoglou, Prodromos D.

    2012-01-01

    Purpose: The purpose of the paper is to highlight the training factors that mostly affect trainees' perception of learning and training usefulness. Design/methodology/approach: A new research model is proposed exploring the relationships between a trainer's performance, training programme components, outcomes of the learning process and training…

  5. Systematically Evaluating the Effectiveness of Quality Assurance Programmes in Leading to Improvements in Institutional Performance

    ERIC Educational Resources Information Center

    Lillis, Deirdre

    2012-01-01

    Higher education institutions worldwide invest significant resources in their quality assurance systems. Little empirical evidence exists that demonstrates the effectiveness (or otherwise) of these systems. Methodological approaches for determining effectiveness are also underdeveloped. Self-study-with-peer-review is a widely used model for…

  6. Blended and Online Learning: Student Perceptions and Performance

    ERIC Educational Resources Information Center

    Adam, Stewart; Nel, Deon

    2009-01-01

    Purpose: The purpose of this paper is to improve educator knowledge of the antecedents and consequences of blended learning in higher education. Design/methodology/approach: A longitudinal case study approach is adopted. Three case studies each involve tracking a student evaluations of teaching (SET) measure (willingness to recommend) and grade…

  7. Bangalore Revisited: A Reluctant Complaint.

    ERIC Educational Resources Information Center

    Greenwood, John

    1985-01-01

    Discusses the Bangalore Project in South India and responds to three articles on it, particularly the one by C. J. Brumfit (ELT, 1984). Argues that more information on teacher and learner performance and more explicit and illustrative evidence of materials and methodology are needed in order to evaluate the project accurately. (SED)

  8. Studies to determine the operational effects of shoulder and centerline rumble strips on two-lane undivided roadways.

    DOT National Transportation Integrated Search

    2009-08-01

    This report describes the methodology and results of analyses performed to (1) evaluate the impact of shoulder rumble strips (SRS) and centerline rumble strips (CRS) on the placement of vehicles in the travel lane of two-lane, undivided roadways ...

  9. Developments in REDES: The rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) is being developed at the NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  11. Transportation Systems Evaluation

    NASA Technical Reports Server (NTRS)

    Fanning, M. L.; Michelson, R. A.

    1972-01-01

    A methodology for the analysis of transportation systems consisting of five major interacting elements is reported. The analysis begins with the causes of travel demand: geographic, economic, and demographic characteristics as well as attitudes toward travel. Through the analysis, the interaction of these factors with the physical and economic characteristics of the transportation system is determined. The result is an evaluation of the system from the point of view of both passenger and operator. The methodology is applicable to intraurban transit systems as well as to major airlines. Applications of the technique to the analysis of a PRT system and to a study of intraurban air travel are given. Several unique models and techniques are discussed, including passenger preference modeling, an integrated intraurban transit model, and a series of models for airline analysis.

  12. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
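
    The numerical-integration step the abstract describes can be illustrated with the classic stress-strength interference model: failure occurs when the applied load exceeds the material strength, and the failure probability is the integral of the load density times the probability that strength falls below that load. This is a sketch under assumed normal distributions, with hypothetical parameter values, not the paper's actual model.

    ```python
    import math

    def norm_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def norm_cdf(x, mu, sigma):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def failure_probability(mu_load, sd_load, mu_strength, sd_strength, n=20000):
        """P(failure) = integral of f_load(x) * P(strength < x) dx (midpoint rule)."""
        lo, hi = mu_load - 8 * sd_load, mu_load + 8 * sd_load
        dx = (hi - lo) / n
        return sum(norm_pdf(lo + (i + 0.5) * dx, mu_load, sd_load)
                   * norm_cdf(lo + (i + 0.5) * dx, mu_strength, sd_strength)
                   for i in range(n)) * dx

    # Hypothetical scatter: load N(300, 30), strength N(450, 40) in consistent units.
    pf = failure_probability(300.0, 30.0, 450.0, 40.0)
    ```

    For normal load and strength the closed form is Φ(-(μR - μL) / sqrt(σL² + σR²)), which provides a check on the numerical result; real applications replace the normal assumptions with the fitted distributions mentioned in the abstract.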

  13. System Evaluation and Life-Cycle Cost Analysis of a Commercial-Scale High-Temperature Electrolysis Hydrogen Production Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwin A. Harvego; James E. O'Brien; Michael G. McKellar

    2012-11-01

    Results of a system evaluation and lifecycle cost analysis are presented for a commercial-scale high-temperature electrolysis (HTE) central hydrogen production plant. The plant design relies on grid electricity to power the electrolysis process and system components, and on industrial natural gas to provide process heat. The HYSYS process analysis software was used to evaluate the reference central plant design, capable of producing 50,000 kg/day of hydrogen. The HYSYS software performs mass and energy balances across all components to allow optimization of the design using a detailed process flow sheet and realistic operating conditions specified by the analyst. The lifecycle cost analysis was performed using the H2A analysis methodology developed by the Department of Energy (DOE) Hydrogen Program. This methodology utilizes Microsoft Excel spreadsheet analysis tools that require detailed plant performance information (obtained from HYSYS), along with financial and cost information, to calculate lifecycle costs. The results of the lifecycle analyses indicate that for a 10% internal rate of return, a large central commercial-scale hydrogen production plant can produce 50,000 kg/day of hydrogen at an average cost of $2.68/kg. When the cost of carbon sequestration is taken into account, the average cost of hydrogen production increases by $0.40/kg to $3.08/kg.
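
    The shape of a lifecycle cost calculation like this can be sketched with a capital recovery factor that annualizes capital expenditure at the stated rate of return. This is a deliberate simplification of discounted-cash-flow tools such as H2A, not a reproduction of them; all parameter names and values are illustrative.

    ```python
    def levelized_cost(capex, annual_opex, annual_kg, rate, years):
        """Levelized $/kg: annualized capital plus operating cost per kg produced.

        crf is the standard capital recovery factor for the given discount
        rate and plant lifetime.
        """
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        return (capex * crf + annual_opex) / annual_kg
    ```

    In the actual H2A methodology the plant performance inputs (efficiency, feedstock use) come from the HYSYS flow-sheet results, and adders such as carbon sequestration enter as additional per-kg costs, as in the $0.40/kg increment reported above.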

  14. A Negative Selection Immune System Inspired Methodology for Fault Diagnosis of Wind Turbines.

    PubMed

    Alizadeh, Esmaeil; Meskin, Nader; Khorasani, Khashayar

    2017-11-01

    High operational and maintenance costs represent as major economic constraints in the wind turbine (WT) industry. These concerns have made investigation into fault diagnosis of WT systems an extremely important and active area of research. In this paper, an immune system (IS) inspired methodology for performing fault detection and isolation (FDI) of a WT system is proposed and developed. The proposed scheme is based on a self nonself discrimination paradigm of a biological IS. Specifically, the negative selection mechanism [negative selection algorithm (NSA)] of the human body is utilized. In this paper, a hierarchical bank of NSAs are designed to detect and isolate both individual as well as simultaneously occurring faults common to the WTs. A smoothing moving window filter is then utilized to further improve the reliability and performance of the FDI scheme. Moreover, the performance of our proposed scheme is compared with another state-of-the-art data-driven technique, namely the support vector machines (SVMs) to demonstrate and illustrate the superiority and advantages of our proposed NSA-based FDI scheme. Finally, a nonparametric statistical comparison test is implemented to evaluate our proposed methodology with that of the SVM under various fault severities.
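
    The self/nonself discrimination idea behind the negative selection algorithm (NSA) can be shown in a toy form: generate random detectors, discard any that match healthy ("self") training data, and flag a new sample as faulty if it activates a surviving detector. This 1-D sketch over a normalized residual in [0, 1] is illustrative only; the paper's WT scheme uses multidimensional data and a hierarchical bank of NSAs.

    ```python
    import random

    def train_detectors(self_samples, n_detectors, radius, seed=0):
        """Negative selection: keep random candidates that match no self sample."""
        rng = random.Random(seed)
        detectors = []
        while len(detectors) < n_detectors:
            candidate = rng.uniform(0.0, 1.0)
            # Reject candidates within `radius` of any healthy training sample.
            if all(abs(candidate - s) > radius for s in self_samples):
                detectors.append(candidate)
        return detectors

    def is_faulty(sample, detectors, radius):
        """A sample activating any detector is classified as nonself (fault)."""
        return any(abs(sample - d) <= radius for d in detectors)

    # Toy self region: healthy residuals clustered around 0.5.
    self_samples = [0.40 + 0.01 * i for i in range(21)]
    detectors = train_detectors(self_samples, 50, 0.05)
    ```

    By construction every detector lies outside the self region, so healthy residuals are never flagged; fault coverage then depends on how densely the detectors fill the nonself space, which is what the smoothing filter and detector-bank design in the paper address.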

  15. Application of tolerance limits to the characterization of image registration performance.

    PubMed

    Fedorov, Andriy; Wells, William M; Kikinis, Ron; Tempany, Clare M; Vangel, Mark G

    2014-07-01

    Deformable image registration is used increasingly in image-guided interventions and other applications. However, validation and characterization of registration performance remain areas that require further study. We propose an analysis methodology for deriving tolerance limits on the initial conditions for deformable registration that reliably lead to a successful registration. This approach results in a concise summary of the probability of registration failure, while accounting for the variability in the test data. The (β, γ) tolerance limit can be interpreted as a value of the input parameter that leads to a successful registration outcome in at least 100β% of cases with 100γ% confidence. The utility of the methodology is illustrated by summarizing the performance of a deformable registration algorithm evaluated in three different experimental setups of increasing complexity. Our examples are based on clinical data collected during MRI-guided prostate biopsy, registered using a publicly available deformable registration tool. The results indicate that the proposed methodology can be used to generate concise graphical summaries of the experiments, as well as a probabilistic estimate of the registration outcome for a future sample. Its use may facilitate improved objective assessment, comparison, and retrospective stress-testing of deformable registration.
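
    The (β, γ) framing maps onto standard distribution-free tolerance-limit results. For example, when the sample minimum of an input parameter is used as a one-sided limit, the confidence of covering at least a fraction β of the population is 1 − βⁿ, which fixes the number of test cases needed. This is the textbook nonparametric result, not the paper's specific derivation.

    ```python
    def min_cases(beta, gamma):
        """Smallest n such that the sample minimum is a (beta, gamma)
        one-sided tolerance limit: 1 - beta**n >= gamma."""
        n = 1
        while 1.0 - beta ** n < gamma:
            n += 1
        return n

    # 29 registration trials suffice for a (0.90, 0.95) limit from the minimum.
    ```

    Sharper limits based on higher order statistics or parametric assumptions need fewer or more cases accordingly; the point of the sketch is only that (β, γ) directly constrains experiment size.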

  16. Economic evaluation of HIV pre-exposure prophylaxis strategies: protocol for a methodological systematic review and quantitative synthesis.

    PubMed

    Thavorn, Kednapa; Kugathasan, Howsikan; Tan, Darrell H S; Moqueet, Nasheed; Baral, Stefan D; Skidmore, Becky; MacFadden, Derek; Simkin, Anna; Mishra, Sharmistha

    2018-03-15

    Pre-exposure prophylaxis (PrEP) with antiretrovirals is an efficacious and effective intervention to decrease the risk of HIV (human immunodeficiency virus) acquisition. Yet drug and delivery costs prohibit access in many jurisdictions. In the absence of guidelines for the synthesis of economic evaluations, we developed a protocol for a systematic review of economic evaluation studies for PrEP by drawing on best practices in systematic reviews and the conduct and reporting of economic evaluations. We aim to estimate the incremental cost per health outcome of PrEP compared with placebo, no PrEP, or other HIV prevention strategies; assess the methodological variability in, and quality of, economic evaluations of PrEP; estimate the incremental cost per health outcome of different PrEP implementation strategies; and quantify the potential sources of heterogeneity in outcomes. We will systematically search electronic databases (MEDLINE, Embase) and the gray literature. We will include economic evaluation studies that assess both costs and health outcomes of PrEP in HIV-uninfected individuals, without restricting language or year of publication. Two reviewers will independently screen studies using predefined inclusion criteria, extract data, and assess methodological quality using the Philips checklist, the Second Panel on Cost-Effectiveness in Health and Medicine, and the International Society for Pharmacoeconomics and Outcomes Research recommendations. Outcomes of interest include incremental costs and outcomes in natural units or utilities, cost-effectiveness ratios, and net monetary benefit. We will perform descriptive and quantitative syntheses using sensitivity analyses of outcomes by population subgroups, HIV epidemic settings, study designs, baseline intervention contexts, key parameter inputs and assumptions, type of outcomes, economic perspectives, and willingness-to-pay values. Findings will guide future economic evaluation of PrEP strategies in terms of methodological and knowledge gaps, and will inform decisions on the efficient integration of PrEP into public health programs across epidemiologic and health system contexts. PROSPERO CRD42016038440.

  17. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  18. Anaerobic treatment of complex chemical wastewater in a sequencing batch biofilm reactor: process optimization and evaluation of factor interactions using the Taguchi dynamic DOE methodology.

    PubMed

    Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N

    2005-06-20

    The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process as well as to evaluate the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and process optimization. The designed experimental methodology consisted of four phases (planning, conducting, analysis, and validation) connected in sequence to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, at two levels were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effect of individual factors, the interaction between individual factors, and the signal-to-noise (S/N) ratio. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability with other factors, such as temperature, pH, and sulfate, showed maximum influence over the process performance.
The optimum conditions obtained for the efficient performance of the anaerobic system in treating complex wastewater, considering the dynamic (noise) factors, are a higher organic loading rate of 3.5 kg COD/m³·day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature range (40 °C), and a low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7%), from a substrate degradation rate (SDR) of 1.99 to 3.13 kg COD/m³·day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 kg COD/m³·day from 1.99 kg COD/m³·day), accounting for a 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understand the complex multi-species anaerobic process treating complex chemical wastewater by considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
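
    The S/N ratio analysis mentioned above has a standard form for a larger-the-better response such as the substrate degradation rate. The following is a sketch of that textbook Taguchi formula, not the Qualitek-4 implementation; the replicate values shown are illustrative.

    ```python
    import math

    def sn_larger_the_better(responses):
        """Taguchi S/N ratio for a larger-the-better response:
        S/N = -10 * log10(mean(1 / y^2)) over replicate observations y."""
        return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / len(responses))
    ```

    Factor levels are then ranked by their average S/N ratio across the orthogonal-array runs, so a higher SDR (e.g. 3.13 vs. 1.99 kg COD/m³·day) yields a higher S/N value and is preferred.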

  19. Performance Optimization Control of ECH using Fuzzy Inference Application

    NASA Astrophysics Data System (ADS)

    Dubey, Abhay Kumar

    Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing, provides controlled functional surface generation and fast material removal in a single operation. Multi-performance optimization has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements placed on the surface quality, size, tolerances, and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance, and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using a genetic algorithm to tune the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated in a multi-performance optimization case study of ECH. The actual experimental results confirm their potential for a wide range of machining conditions employed in ECH.

  20. "Assessing the methodological quality of systematic reviews in radiation oncology: A systematic review".

    PubMed

    Hasan, Haroon; Muhammed, Taaha; Yu, Jennifer; Taguchi, Kelsi; Samargandi, Osama A; Howard, A Fuchsia; Lo, Andrea C; Olson, Robert; Goddard, Karen

    2017-10-01

    The objective of our study was to evaluate the methodological quality of systematic reviews and meta-analyses in Radiation Oncology. A systematic literature search was conducted for all eligible systematic reviews and meta-analyses in Radiation Oncology from 1966 to 2015. Methodological characteristics were abstracted from all works that satisfied the inclusion criteria and quality was assessed using the critical appraisal tool, AMSTAR. Regression analyses were performed to determine factors associated with a higher score of quality. Following exclusion based on a priori criteria, 410 studies (157 systematic reviews and 253 meta-analyses) satisfied the inclusion criteria. Meta-analyses were found to be of fair to good quality while systematic reviews were found to be of less than fair quality. Factors associated with higher scores of quality in the multivariable analysis were including primary studies consisting of randomized control trials, performing a meta-analysis, and applying a recommended guideline related to establishing a systematic review protocol and/or reporting. Systematic reviews and meta-analyses may introduce a high risk of bias if applied to inform decision-making based on AMSTAR. We recommend that decision-makers in Radiation Oncology scrutinize the methodological quality of systematic reviews and meta-analyses prior to assessing their utility to inform evidence-based medicine and researchers adhere to methodological standards outlined in validated guidelines when embarking on a systematic review. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Solar energy program evaluation: an introduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    deLeon, P.

    The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs: the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report first introduces the role and issues of evaluation, providing a set of issues to organize the subsequent sections detailing the national solar energy programs. Then, these two themes are integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)

  2. Efficient generation of receiver operating characteristics for the evaluation of damage detection in practical structural health monitoring applications.

    PubMed

    Liu, Chang; Dobson, Jacob; Cawley, Peter

    2017-03-01

    Permanently installed guided wave monitoring systems are attractive for monitoring large structures. By frequently interrogating the test structure over a long period of time, such systems have the potential to detect defects much earlier than with conventional one-off inspection, and reduce the time and labour cost involved. However, for the systems to be accepted under real operational conditions, their damage detection performance needs to be evaluated in these practical settings. The receiver operating characteristic (ROC) is an established performance metric for one-off inspections, but the generation of the ROC requires many test structures with realistic damage growth at different locations and different environmental conditions, and this is often impractical. In this paper, we propose an evaluation framework using experimental data collected over multiple environmental cycles on an undamaged structure with synthetic damage signatures added by superposition. Recent advances in computation power enable examples covering a wide range of practical scenarios to be generated, and for multiple cases of each scenario to be tested so that the statistics of the performance can be evaluated. The proposed methodology has been demonstrated using data collected from a laboratory pipe specimen over many temperature cycles, superposed with damage signatures predicted for a flat-bottom hole growing at different rates at various locations. Three damage detection schemes, conventional baseline subtraction, singular value decomposition (SVD) and independent component analysis (ICA), have been evaluated. It has been shown that in all cases, the component methods perform significantly better than the residual method, with ICA generally the better of the two. The results have been validated using experimental data monitoring a pipe in which a flat-bottom hole was drilled and enlarged over successive temperature cycles. 
The methodology can be used to evaluate the performance of an installed monitoring system and to show whether it is capable of detecting particular damage growth at any given location. It will enable monitoring results to be evaluated rigorously and will be valuable in the development of safety cases.
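
    The ROC construction at the heart of this evaluation framework can be sketched directly: sweep a detection threshold over the scores produced on undamaged (negative) and synthetically damaged (positive) cases, and record the false-positive and true-positive rates. A minimal sketch with illustrative scores, not the paper's pipeline:

    ```python
    def roc_points(neg_scores, pos_scores):
        """ROC curve from detector scores on undamaged (neg) and damaged (pos) cases."""
        thresholds = sorted(set(neg_scores) | set(pos_scores), reverse=True)
        points = [(0.0, 0.0)]
        for t in thresholds:
            fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
            tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
            points.append((fpr, tpr))
        return points

    def auc(points):
        """Trapezoidal area under the ROC curve."""
        return sum((x2 - x1) * (y1 + y2) / 2.0
                   for (x1, y1), (x2, y2) in zip(points, points[1:]))
    ```

    In the proposed framework the negative scores come from many environmental cycles on the undamaged structure, while the positives come from superposed synthetic damage signatures, so a full ROC can be generated without physically damaging multiple structures.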

  4. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested, and guidelines have been provided, to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, such as the prognostic horizon and alpha-lambda performance, and quantify the corresponding performance while incorporating the uncertainty information.
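
    Two of the metrics named above can be sketched in point-estimate form: an alpha-lambda style check tests whether a remaining-useful-life (RUL) prediction falls within ±α bounds of the true RUL, and the prognostic horizon is the time before end of life after which predictions stay within those bounds. The simplified definitions and values below are chosen here for illustration; the paper's enhanced versions operate on full probability distributions rather than point estimates.

    ```python
    def within_alpha(rul_pred, rul_true, alpha=0.2):
        """Alpha-lambda style check: prediction inside +/-alpha bounds of true RUL."""
        return (1 - alpha) * rul_true <= rul_pred <= (1 + alpha) * rul_true

    def prognostic_horizon(times, predictions, end_of_life, alpha=0.2):
        """Time before end of life after which all predictions stay within the
        alpha bounds (0 if they never settle)."""
        for i, t in enumerate(times):
            if all(within_alpha(p, end_of_life - tj, alpha)
                   for tj, p in zip(times[i:], predictions[i:])):
                return end_of_life - t
        return 0
    ```

    A longer prognostic horizon means the algorithm converges to acceptably accurate RUL estimates earlier, which is the behavior these metrics are designed to reward.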

  5. Methodology for Evaluating Raw Material Changes to RSRM Elastomeric Insulation Materials

    NASA Technical Reports Server (NTRS)

    Mildenhall, Scott D.; McCool, Alex (Technical Monitor)

    2001-01-01

    The Reusable Solid Rocket Motor (RSRM) uses asbestos and silicon dioxide filled acrylonitrile butadiene rubber (AS-NBR) as the primary internal insulation to protect the case from heat. During the course of the RSRM Program, several changes have been made to the raw materials and processing of the AS-NBR elastomeric insulation material. These changes have been primarily caused by raw materials becoming obsolete. In addition, some process changes have been implemented that were deemed necessary to improve the quality and consistency of the AS-NBR insulation material. Each change has been evaluated using unique test efforts customized to determine the potential impacts of the specific raw material or process change. Following the evaluations, the various raw material and process changes were successfully implemented with no detectable effect on the performance of the AS-NBR insulation. This paper will discuss some of the raw material and process changes evaluated, the methodology used in designing the unique test plans, and the general evaluation results. A summary of the change history of RSRM AS-NBR internal insulation is also presented.

  6. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    PubMed Central

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimens and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g., HCV and HIV at the same time) might prove important for decision makers. PMID:26689908
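
Cost-effectiveness comparisons of the kind reviewed here typically reduce to an incremental cost-effectiveness ratio (ICER). As a minimal sketch, with entirely hypothetical per-person costs and QALY values (not taken from any of the reviewed models):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical screen-and-treat strategy vs. no screening
# (costs in currency units per person, effects in QALYs per person)
ratio = icer(cost_new=12_000, effect_new=8.4, cost_old=9_000, effect_old=8.1)
print(f"{ratio:,.0f} per QALY gained")  # roughly 10,000 per QALY gained
```

A decision maker would then compare the ratio against a willingness-to-pay threshold; the review's point is that differing model structures (static vs. dynamic) shift these ratios even with identical inputs.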

  7. A Probabilistic Performance Assessment Study of Potential Low-Level Radioactive Waste Disposal Sites in Taiwan

    NASA Astrophysics Data System (ADS)

    Knowlton, R. G.; Arnold, B. W.; Mattie, P. D.; Kuo, M.; Tien, N.

    2006-12-01

    For several years now, Taiwan has been engaged in a process to select a low-level radioactive waste (LLW) disposal site. Taiwan is generating LLW from operational and decommissioning wastes associated with nuclear power reactors, as well as research, industrial, and medical radioactive wastes. The preliminary selection process has narrowed the search to four potential candidate sites. These sites are to be evaluated in a performance assessment analysis to determine the likelihood of meeting the regulatory criteria for disposal. Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research have been working together to develop the necessary performance assessment methodology and associated computer models to perform these analyses. The methodology utilizes both deterministic (e.g., single run) and probabilistic (e.g., multiple statistical realizations) analyses to achieve the goals. The probabilistic approach provides a means of quantitatively evaluating uncertainty in the model predictions and a more robust basis for performing sensitivity analyses to better understand what is driving the dose predictions from the models. Two types of disposal configurations are under consideration: a shallow land burial concept and a cavern disposal concept. The shallow land burial option includes a protective cover to limit infiltration potential to the waste. Both conceptual designs call for the disposal of 55 gallon waste drums within concrete lined trenches or tunnels, and backfilled with grout. Waste emplaced in the drums may be solidified. Both types of sites are underlain or placed within saturated fractured bedrock material. These factors have influenced the conceptual model development of each site, as well as the selection of the models to employ for the performance assessment analyses. Several existing codes were integrated in order to facilitate a comprehensive performance assessment methodology to evaluate the potential disposal sites. 
First, a need existed to simulate the failure processes of the waste containers, with subsequent leaching of the waste form to the underlying host rock. The Breach, Leach, and Transport Multiple Species (BLT-MS) code was selected to meet these needs. BLT-MS also has a 2-D finite-element advective-dispersive transport module, with radionuclide in-growth and decay. BLT-MS does not solve the groundwater flow equation, but instead requires the input of Darcy flow velocity terms. These terms were abstracted from a groundwater flow model using the FEHM code. For the shallow land burial site, the HELP code was also used to evaluate the performance of the protective cover. The GoldSim code was used for two purposes: quantifying uncertainties in the predictions, and providing a platform to evaluate an alternative conceptual model involving matrix-diffusion transport. Results of the preliminary performance assessment analyses will be presented, using examples to illustrate the computational framework. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
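
The probabilistic side of the methodology (multiple statistical realizations with quantified output uncertainty) can be sketched in miniature. The surrogate dose function and parameter ranges below are invented stand-ins, not the BLT-MS/FEHM/GoldSim models:

```python
import random
import statistics

random.seed(1)

def predicted_dose(k_leach, velocity, retardation):
    # Toy surrogate for a coupled source-term / transport dose model
    # (illustrative only; coefficients are arbitrary)
    return 1.0e-3 * k_leach * velocity / retardation

# Sample uncertain inputs over assumed ranges (hypothetical values)
doses = []
for _ in range(10_000):
    k = random.uniform(0.5, 2.0)         # leach-rate multiplier
    v = random.lognormvariate(0.0, 0.5)  # Darcy velocity factor
    r = random.uniform(5.0, 50.0)        # sorption retardation factor
    doses.append(predicted_dose(k, v, r))

doses.sort()
print("median dose:", statistics.median(doses))
print("95th percentile:", doses[int(0.95 * len(doses))])
```

The resulting percentiles are the kind of quantitative uncertainty statement the probabilistic approach provides, and correlating sampled inputs against outputs yields the sensitivity ranking mentioned in the abstract.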

  8. Coupling Hydraulic Fracturing Propagation and Gas Well Performance for Simulation of Production in Unconventional Shale Gas Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, C.; Winterfeld, P. H.; Wu, Y. S.; Wang, Y.; Chen, D.; Yin, C.; Pan, Z.

    2014-12-01

    Hydraulic fracturing combined with horizontal drilling has made it possible to economically produce natural gas from unconventional shale gas reservoirs. An efficient methodology for evaluating hydraulic fracturing operation parameters, such as fluid and proppant properties, injection rates, and wellhead pressure, is essential for the evaluation and efficient design of these processes. Traditional numerical evaluation and optimization approaches are usually based on simulated fracture properties such as the fracture area. In our opinion, a methodology based on simulated production data is better, because production is the goal of hydraulic fracturing and we can calibrate this approach with production data that is already known. This numerical methodology requires a fully-coupled hydraulic fracture propagation and multi-phase flow model. In this paper, we present a general fully-coupled numerical framework to simulate hydraulic fracturing and post-fracture gas well performance. This three-dimensional, multi-phase simulator focuses on: (1) fracture width increase and fracture propagation that occurs as slurry is injected into the fracture, (2) erosion caused by fracture fluids and leakoff, (3) proppant subsidence and flowback, and (4) multi-phase fluid flow through various-scaled anisotropic natural and man-made fractures. Mathematical and numerical details on how to fully couple the fracture propagation and fluid flow parts are discussed. Hydraulic fracturing and production operation parameters, and properties of the reservoir, fluids, and proppants, are taken into account. The well may be horizontal, vertical, or deviated, as well as open-hole or cemented. The simulator is verified based on benchmarks from the literature and we show its application by simulating fracture network (hydraulic and natural fractures) propagation and production data history matching of a field in China. 
We also conduct a series of real-data modeling studies with different combinations of hydraulic fracturing parameters and present the methodology to design these operations with feedback of simulated production data. The unified model aids in the optimization of hydraulic fracturing design, operations, and production.

  9. Power processing methodology. [computerized design of spacecraft electric power systems

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hansen, I. G.; Hayden, J. H.

    1974-01-01

    Discussion of the interim results of a program to investigate the feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems. The object of the total program is to develop a flexible engineering tool which will allow the power processor designer to effectively and rapidly assess and analyze the tradeoffs available by providing, in one comprehensive program, a mathematical model, an analysis of expected performance, simulation, and a comparative evaluation with alternative designs. This requires an understanding of electrical power source characteristics and the effects of load control, protection, and total system interaction.

  10. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  11. Advanced Energy Retrofit Guide: Practical Ways to Improve Energy Performance, K-12 Schools (Book)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The U.S. Department of Energy developed the K-12 Advanced Energy Retrofit Guide to provide specific methodologies, information, and guidance to help energy managers and other stakeholders plan and execute energy efficiency improvements. We emphasize actionable information, practical methodologies, diverse case studies, and unbiased evaluation of the most promising retrofit measure for each building type. K-12 schools were selected as one of the highest priority building sectors, because schools affect the lives of most Americans. They also represent approximately 8% of the energy use and 10% of the floor area in commercial buildings.

  12. Paediatric International Nursing Study: using person-centred key performance indicators to benchmark children's services.

    PubMed

    McCance, Tanya; Wilson, Val; Kornman, Kelly

    2016-07-01

    The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was to evaluate the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and use in generating data that privileges the patient voice. This study extends this research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation used to evaluate the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process; however, this paper reports only the quantitative data, which were collected through surveys, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. 
The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.

  13. [Robotic systems for gait re-education in cases of spinal cord injury: a systematic review].

    PubMed

    Gandara-Sambade, T; Fernandez-Pereira, M; Rodriguez-Sotillo, A

    2017-03-01

    The evidence underlying robotic body weight supported treadmill training in patients with spinal cord injury remains poorly characterized. To perform a qualitative systematic review on the efficacy of this therapy. A search on PubMed, CINAHL, Cochrane Library and PEDro was performed from January 2005 to April 2016. The references in these articles were also reviewed to find papers not identified with the initial search strategy. The methodological level of the articles was evaluated with the PEDro and Downs and Black scales. A total of 129 potentially interesting articles were found, of which 10 fulfilled the inclusion criteria. Those studies included 286 patients, who were predominantly young and male. Most of them had an incomplete spinal cord injury and were classified as C or D on the ASIA scale. Robotic devices employed in these studies were Lokomat, Gait Trainer and LOPES. Improvement in the walking parameters evaluated was more evident in young patients, those with subacute spinal cord injury, and those with high ASIA or LEMS scores. Conversely, factors such as etiology, level of injury or sex were less predictive of improvement. The methodological level of these studies was fair according to the PEDro and Downs and Black scales. The evidence for gait training with robotic devices in patients with spinal cord injury is positive, although limited and with fair methodological quality.

  14. Performance of the Lester battery charger in electric vehicles

    NASA Technical Reports Server (NTRS)

    Vivian, H. C.; Bryant, J. A.

    1984-01-01

    Tests are performed on an improved battery charger. The primary purpose of the testing is to develop test methodologies for battery charger evaluation. Tests are developed to characterize the charger in terms of its charge algorithm and to assess the effects of battery initial state of charge and temperature on charger and battery efficiency. Tests show this charger to be a considerable improvement in the state of the art for electric vehicle chargers.

  15. Measurements and Predictions for a Distributed Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Kinzie, Kevin W.; Brown, Martha C.; Schein, David B.; Solomon, W. David, Jr.

    2001-01-01

    The acoustic and aerodynamic performance characteristics of a distributed exhaust nozzle (DEN) design concept were evaluated experimentally and analytically with the purpose of establishing a design methodology for future DEN technology. Aerodynamic and acoustic measurements were made to evaluate the DEN performance and the CFD design tool. While the CFD approach did provide an excellent prediction of the flowfield and aerodynamic performance characteristics of the DEN and 2D reference nozzle, the measured acoustic suppression potential of this particular DEN was low. The measurements and predictions indicated that the mini-exhaust jets comprising the distributed exhaust coalesced back into a single stream jet very shortly after leaving the nozzles. Even so, the database provided here will be useful for future distributed exhaust designs with greater noise reduction and aerodynamic performance potential.

  16. Evaluation of solar thermal power plants using economic and performance simulations

    NASA Technical Reports Server (NTRS)

    El-Gabawali, N.

    1980-01-01

    An energy cost analysis is presented for central receiver power plants with thermal storage and point focusing power plants with electrical storage. The present approach is based on optimizing the size of the plant to give the minimum energy cost (in mills/kWe-hr) of annual plant energy production. The optimization is done by considering the trade-off between the collector field size and the storage capacity for a given engine size. The energy cost is determined by the plant cost and performance. The performance is estimated by simulating the behavior of the plant under typical weather conditions. Plant capital and operational costs are estimated based on the size and performance of different components. This methodology is translated into computer programs for automatic and consistent evaluation.
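
The trade-off described here (collector field size versus storage capacity at a fixed engine size) can be sketched as a brute-force search over a toy cost model. All coefficients below are invented for illustration and are not from the paper:

```python
def energy_cost(collector_area_m2, storage_hours):
    """Toy levelized energy cost in mills/kWh (all coefficients assumed)."""
    capital = 200.0 * collector_area_m2 + 1.5e5 * storage_hours  # $ (assumed)
    # Annual energy grows with field size; storage recovers otherwise-lost output
    energy = 1.2e3 * collector_area_m2 * (1 - 0.5 / (1 + storage_hours))  # kWh
    fixed_charge = 0.08  # annualization factor (assumed)
    return 1000.0 * fixed_charge * capital / energy  # $/kWh -> mills/kWh

# Exhaustive search over candidate field sizes and storage capacities
best = min(
    ((a, s) for a in range(5_000, 50_001, 5_000) for s in range(0, 13)),
    key=lambda p: energy_cost(*p),
)
print("optimal area (m^2), storage (h):", best)
```

A real sizing study would replace the closed-form energy term with the weather-driven plant simulation the abstract describes, but the optimization structure is the same.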

  17. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.

  18. The Business Benefits of Apprenticeships: The English Employers' Perspective

    ERIC Educational Resources Information Center

    Kenyon, Rod

    2005-01-01

    Purpose - This paper seeks to present the Apprenticeships Task Force's (ATF's) evaluation of the business case for recruiting and training apprentices. The focus is on whether they provide employers in the UK with a positive return on investment in key performance areas. Design/methodology/approach - The ATF asked nine members, senior executives of…

  19. English Language Teachers' Ideology of ELT Assessment Literacy

    ERIC Educational Resources Information Center

    Hakim, Badia

    2015-01-01

    Deep understanding, clear perception and accurate use of assessment methodology play an integral role in the success of a language program. Use of various assessment techniques to evaluate and improve the performance of learners has been the focal point of interest in the field of English Language Teaching (ELT). Equally researchers are interested…

  20. Useful Interactive Teaching Tool for Learning: Clickers in Higher Education

    ERIC Educational Resources Information Center

    Camacho-Miñano, María-del-Mar; del Campo, Cristina

    2016-01-01

    Many university lecturers are encouraged to implement innovative teaching tools and methodologies such as clickers in order to create an interactive learning environment and improve student learning, but its performance must be evaluated. The aim of this paper is to test empirically the impact of the use of clickers on students' learning…

  1. An Analysis and Plan of Test Development for the Law Enforcement Basic Training Course.

    ERIC Educational Resources Information Center

    Vineberg, Robert; Taylor, John E.

    A test development plan is described to evaluate police enrolled in the law enforcement basic training course developed by California's Commission on Peace Officer Standards and Training (POST). Some general test methodologies are discussed: performance tests, knowledge tests, and situational tests, including role playing simulations and…

  2. Getting State Education Data Right: What We Can Learn from Tennessee

    ERIC Educational Resources Information Center

    Jones, Joseph; Southern, Kyle

    2011-01-01

    Federal education policy in recent years has encouraged state and local education agencies to embrace data use and analysis in decision-making, ranging from policy development and implementation to performance evaluation. The capacity of these agencies to make effective and methodologically sound use of collected data for these purposes remains an…

  3. Multivariable control of a twin lift helicopter system using the LQG/LTR design methodology

    NASA Technical Reports Server (NTRS)

    Rodriguez, A. A.; Athans, M.

    1986-01-01

    Guidelines for developing a multivariable centralized automatic flight control system (AFCS) for a twin lift helicopter system (TLHS) are presented. Singular value ideas are used to formulate performance and stability robustness specifications. A Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) design is obtained and evaluated.

  4. Digital Literacy and New Technological Perspectives

    ERIC Educational Resources Information Center

    Feola, Elvia Ilaria

    2016-01-01

    This paper aims to reflect on the implications and challenges that experts in the field have to deal with when evaluating performance in the use of digital technologies in teaching. The argument stems from a contextual and social assessment, and then proceeds to an application and methodological connotation of digital literacy…

  5. Estimating annual bole biomass production using uncertainty analysis

    Treesearch

    Travis J. Woolley; Mark E. Harmon; Kari B. O'Connell

    2007-01-01

    Two common sampling methodologies coupled with a simple statistical model were evaluated to determine the accuracy and precision of annual bole biomass production (BBP) and inter-annual variability estimates using this type of approach. We performed an uncertainty analysis using Monte Carlo methods in conjunction with radial growth core data from trees in three Douglas...

  6. Influence of organizational factors on safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, S.B.; Metlay, D.S.; Crouch, D.A.

    There is a need for a better understanding of exactly how organizational management factors at a nuclear power plant (NPP) affect plant safety performance, either directly or indirectly, and how these factors might be observed, measured, and evaluated. The purpose of this research project is to respond to that need by developing a general methodology for characterizing these organizational and management factors, systematically collecting information on their status and integrating that information into various types of evaluative activities. Research to date has included the development of the Nuclear Organization and Management Analysis Concept (NOMAC) of an NPP, the identification of key organizational and management factors, and the identification of the methods for systematically measuring and analyzing the influence of these factors on performance. Most recently, two field studies, one at a fossil fuel plant and the other at an NPP, were conducted using the developed methodology. Results are presented from both studies highlighting the acceptability, practicality, and usefulness of the methods used to assess the influence of various organizational and management factors including culture, communication, decision-making, standardization, and oversight. 6 refs., 3 figs., 1 tab.

  7. Modeling methodology for supply chain synthesis and disruption analysis

    NASA Astrophysics Data System (ADS)

    Wu, Teresa; Blackhurst, Jennifer

    2004-11-01

    The concept of an integrated or synthesized supply chain is a strategy for managing today's globalized and customer-driven supply chains in order to better meet customer demands. Synthesizing individual entities into an integrated supply chain can be a challenging task due to a variety of factors including conflicting objectives, mismatched incentives and constraints of the individual entities. Furthermore, understanding the effects of disruptions occurring at any point in the system is difficult when working toward synthesizing supply chain operations. Therefore, the goal of this research is to present a modeling methodology to manage the synthesis of a supply chain by linking hierarchical levels of the system and to model and analyze disruptions in the integrated supply chain. The contribution of this research is threefold: (1) supply chain systems can be modeled hierarchically; (2) the performance of the synthesized supply chain system can be evaluated quantitatively; and (3) reachability analysis is used to evaluate the system performance and verify whether a specific state is reachable, allowing the user to understand the extent of the effects of a disruption.
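
The reachability analysis in point (3) can be sketched as a graph search over system states. The states and transitions below are a hypothetical two-entity example, not the authors' Petri-net-style model:

```python
from collections import deque

# Toy state graph for a two-tier supply chain (hypothetical states/transitions);
# a disruption scenario would be modeled by removing recovery transitions.
transitions = {
    "normal":        ["supplier_down", "plant_down"],
    "supplier_down": ["normal", "both_down"],
    "plant_down":    ["normal", "both_down"],
    "both_down":     ["supplier_down", "plant_down"],
}

def reachable(start, target, graph):
    """Breadth-first reachability: can the system evolve from start to target?"""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("both_down", "normal", transitions))  # a recovery path exists
```

Deleting the edges back to "normal" would make recovery unreachable, which is exactly the kind of question the abstract's disruption analysis asks.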

  8. The methodological quality of three foundational law enforcement Drug Influence Evaluation validation studies.

    PubMed

    Kane, Greg

    2013-11-04

    A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect's driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice.
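
The prevalence dependence this study criticizes is easy to demonstrate: with sensitivity and specificity held fixed, overall accuracy still changes with the prevalence of impairment in the tested sample. The numbers below are purely illustrative, not values from the reviewed studies:

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    # accuracy = Se * prevalence + Sp * (1 - prevalence):
    # internally valid for the study sample, but it shifts with prevalence
    return sensitivity * prevalence + specificity * (1 - prevalence)

se, sp = 0.90, 0.60  # hypothetical DIE sensitivity and specificity
print(f"{overall_accuracy(se, sp, prevalence=0.9):.2f}")  # enriched study sample: 0.87
print(f"{overall_accuracy(se, sp, prevalence=0.2):.2f}")  # lower-prevalence field setting: 0.66
```

The same test thus looks far more "accurate" in an arrest-enriched validation sample than it would in a broader population, which is why the reported figures are internally but not externally valid.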

  9. Formulation development and optimization of sustained release matrix tablet of Itopride HCl by response surface methodology and its evaluation of release kinetics

    PubMed Central

    Bose, Anirbandeep; Wong, Tin Wui; Singh, Navjot

    2012-01-01

    The objective of the present investigation was to develop and formulate sustained release (SR) matrix tablets of Itopride HCl using different polymer combinations and fillers, to optimize them by Central Composite Design response surface methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Sustained release matrix tablets of various combinations were prepared with the cellulose-based polymer hydroxypropyl methylcellulose (HPMC), polyvinylpyrrolidone (PVP), and lactose as filler. Study of pre-compression and post-compression parameters facilitated the screening of the formulation with the best characteristics, which then underwent an optimization study by response surface methodology (Central Composite Design). The optimized tablet was further subjected to scanning electron microscopy to reveal its release pattern. The in vitro study revealed that combining HPMC K100M (24.65 mg) with PVP (20 mg) and using lactose as filler sustained the action for more than 12 h. The developed sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:23960836
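
The Central Composite Design used for the optimization study combines factorial, axial, and center points in coded units. A minimal generator, with the two-factor interpretation (e.g. HPMC and PVP levels) offered only as an illustration:

```python
from itertools import product

def central_composite(k, alpha=None, center_runs=1):
    """Generate coded design points for a k-factor central composite design."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable axial distance
    # 2^k factorial corners at +/-1
    factorial = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    # 2k axial (star) points at +/-alpha on each axis
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * k for _ in range(center_runs)]
    return factorial + axial + center

# Two factors: 4 factorial + 4 axial + 1 center = 9 runs
for run in central_composite(2):
    print(run)
```

Each coded point is then mapped to actual factor levels (mg of each excipient) and the measured release responses are fitted with a second-order model.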

  10. Results from Alloy 600 And Alloy 690 Caustic SCC Model Boiler Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Frederick D.; Thomas, Larry E.

    2009-08-03

    A versatile model boiler test methodology was developed and used to compare caustic stress corrosion cracking (SCC) of mill annealed Alloy 600 and thermally treated Alloy 690. The model boiler included simulated crevice devices that efficiently and consistently concentrated Na2CO3, resulting in volatilization of CO2 with the steam and concentration of NaOH at the tube surfaces. The test methodology also included variation in tube stress, either produced by the primary to secondary side pressure differential, or by a novel method that reproducibly yields a higher stress condition on the tube. The significant effect of residual stress on tube SCC was also considered. SCC of both Alloy 600 and Alloy 690 was evaluated as a function of temperature and stress. Analytical transmission electron microscopy (ATEM) evaluations of the cracks and the grain boundaries ahead of the cracks were performed, providing insight into the SCC mechanism. This model boiler test methodology may be applicable to a range of bulkwater secondary chemistries that concentrate to produce aggressive crevice environments.

  11. Formulation development and optimization of sustained release matrix tablet of Itopride HCl by response surface methodology and its evaluation of release kinetics.

    PubMed

    Bose, Anirbandeep; Wong, Tin Wui; Singh, Navjot

    2013-04-01

    The objective of the present investigation was to develop and formulate sustained release (SR) matrix tablets of Itopride HCl using different polymer combinations and fillers, to optimize them by Central Composite Design response surface methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Sustained release matrix tablets of various combinations were prepared with the cellulose-based polymer hydroxypropyl methylcellulose (HPMC), polyvinylpyrrolidone (PVP), and lactose as filler. Study of pre-compression and post-compression parameters facilitated the screening of the formulation with the best characteristics, which then underwent an optimization study by response surface methodology (Central Composite Design). The optimized tablet was further subjected to scanning electron microscopy to reveal its release pattern. The in vitro study revealed that combining HPMC K100M (24.65 mg) with PVP (20 mg) and using lactose as filler sustained the action for more than 12 h. The developed sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet.

  12. Developing a spectroradiometer data uncertainty methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Josh; Vignola, Frank; Habte, Aron

    The proper calibration and measurement uncertainty of spectral data obtained from spectroradiometers is essential for accurately quantifying the output of photovoltaic (PV) devices. PV cells and modules are initially characterized using solar simulators, but field performance is evaluated using natural sunlight. Spectroradiometers are used to measure the spectrum of both of these light sources in an effort to understand the spectral dependence of various PV output capabilities. These chains of characterization and measurement are traceable to National Metrology Institutes, such as the National Institute of Standards and Technology, and therefore there is a need for a comprehensive uncertainty methodology to determine the accuracy of spectroradiometer data. In this paper, the uncertainties associated with the responsivity of a spectroradiometer are examined using the Guide to the Expression of Uncertainty in Measurement (GUM) protocols. This is first done for a generic spectroradiometer; then, to illustrate the methodology, the calibration of a LI-COR 1800 spectroradiometer is performed. The reader should be aware that the implementation of this methodology will be specific to the spectroradiometer being analyzed and the experimental setup that is used. Depending on the characteristics of the spectroradiometer being evaluated, additional sources of uncertainty may need to be included, but the general GUM methodology is the same. Several sources of uncertainty are associated with the spectroradiometer responsivity. For the LI-COR spectroradiometer, the major source of uncertainty at wavelengths below 400 nm is noise in the signal. At wavelengths above 400 nm, the responsivity can vary drastically with the wavelength of light, temperature, angle of incidence, and azimuthal orientation of the sensor to the light source. As a result, the expanded uncertainties in the responsivity of the LI-COR spectroradiometer over the wavelength range of 400-1050 nm can range from 4% to 14% at the 95% confidence level.
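    The GUM combination step described above reduces to a short calculation: for uncorrelated inputs, the combined standard uncertainty is the root-sum-of-squares of the component standard uncertainties, and the expanded uncertainty multiplies it by a coverage factor (k = 2 for roughly 95% confidence). A minimal sketch; the component values are hypothetical, not taken from the LI-COR calibration:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainties, per the GUM for uncorrelated input quantities."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly a 95%
    coverage interval for a normally distributed result."""
    return k * u_c

# Hypothetical relative uncertainty components (as fractions) for a
# responsivity calibration: lamp irradiance, signal noise, alignment.
components = [0.015, 0.010, 0.008]
u_c = combined_standard_uncertainty(components)
U = expanded_uncertainty(u_c)  # expanded relative uncertainty, ~0.039
```

    With these made-up components the expanded relative uncertainty comes out just under 4%, the same order as the lower end of the 4%-14% range reported above.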

  13. Developing a spectroradiometer data uncertainty methodology

    DOE PAGES

    Peterson, Josh; Vignola, Frank; Habte, Aron; ...

    2017-04-11

    The proper calibration and measurement uncertainty of spectral data obtained from spectroradiometers is essential for accurately quantifying the output of photovoltaic (PV) devices. PV cells and modules are initially characterized using solar simulators, but field performance is evaluated using natural sunlight. Spectroradiometers are used to measure the spectrum of both of these light sources in an effort to understand the spectral dependence of various PV output capabilities. These chains of characterization and measurement are traceable to National Metrology Institutes, such as the National Institute of Standards and Technology, and therefore there is a need for a comprehensive uncertainty methodology to determine the accuracy of spectroradiometer data. In this paper, the uncertainties associated with the responsivity of a spectroradiometer are examined using the Guide to the Expression of Uncertainty in Measurement (GUM) protocols. This is first done for a generic spectroradiometer; then, to illustrate the methodology, the calibration of a LI-COR 1800 spectroradiometer is performed. The reader should be aware that the implementation of this methodology will be specific to the spectroradiometer being analyzed and the experimental setup that is used. Depending on the characteristics of the spectroradiometer being evaluated, additional sources of uncertainty may need to be included, but the general GUM methodology is the same. Several sources of uncertainty are associated with the spectroradiometer responsivity. For the LI-COR spectroradiometer, the major source of uncertainty at wavelengths below 400 nm is noise in the signal. At wavelengths above 400 nm, the responsivity can vary drastically with the wavelength of light, temperature, angle of incidence, and azimuthal orientation of the sensor to the light source. As a result, the expanded uncertainties in the responsivity of the LI-COR spectroradiometer over the wavelength range of 400-1050 nm can range from 4% to 14% at the 95% confidence level.

  14. The research gap in chronic paediatric pain: A systematic review of randomised controlled trials.

    PubMed

    Boulkedid, R; Abdou, A Y; Desselas, E; Monégat, M; de Leeuw, T G; Avez-Couturier, J; Dugue, S; Mareau, C; Charron, B; Alberti, C; Kaguelidou, F

    2018-02-01

    Chronic pain is associated with significant functional and social impairment. The objective of this review was to assess the characteristics and quality of randomized controlled trials (RCTs) evaluating pain management interventions in children and adolescents with chronic pain. We performed a systematic search of PubMed, Embase and the Cochrane Library up to July 2017. We included RCTs that involved children and adolescents (3 months to 18 years) and evaluated the use of pharmacological or non-pharmacological intervention(s) in the context of pain persisting or recurring for more than 3 months. Methodological quality was evaluated using the Cochrane Risk of Bias (ROB) tool. A total of 58 RCTs were identified, and numbers steadily increased over time. The majority were conducted in single hospital institutions, with no information on study funding. Median sample size was 47.5 participants (Q1, Q3: 32, 70). Forty-five percent of RCTs included both adults and children, and the median of the mean ages at inclusion was 12.9 years (Q1, Q3: 11, 15). Testing of non-pharmacological interventions was predominant, and only 5 RCTs evaluated analgesics or co-analgesics. Abdominal pain, headache/migraine and musculoskeletal pain were the most common types of chronic pain among participants. Methodological quality was poor, with 90% of RCTs presenting a high or unclear ROB. Evaluation of analgesics targeting chronic pain relief in children and adolescents through RCTs is marginal. Infants and children with long-lasting painful conditions are insufficiently represented in RCTs. We discuss possible research constraints and challenges as well as methodologies to circumvent them. There is a substantial research gap regarding analgesic interventions for children and adolescents with chronic pain. Most clinical trials in the field focus on the evaluation of non-pharmacological interventions and are of low methodological quality. There is also a specific lack of trials involving infants, and of children and adolescents with long-lasting diseases.

  15. Analysis of Photovoltaic System Energy Performance Evaluation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.; Newmiller, J.; Kimber, A.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.
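    One widely used summary metric in this kind of energy performance evaluation is the performance ratio (in the spirit of IEC 61724): measured energy divided by the energy a nameplate-perfect array would produce under the measured plane-of-array insolation. A minimal sketch; the function and the numbers below are illustrative assumptions, not values from the report:

```python
def performance_ratio(e_measured_kwh, p_rated_kw, insolation_kwh_m2,
                      g_ref_kw_m2=1.0):
    """Performance ratio: measured AC energy divided by the energy the
    array would produce at nameplate efficiency under the measured
    insolation (reference irradiance 1 kW/m2)."""
    e_expected = p_rated_kw * insolation_kwh_m2 / g_ref_kw_m2
    return e_measured_kwh / e_expected

# Hypothetical month: 100 kW array, 150 kWh/m2 of plane-of-array
# insolation, 12,300 kWh of metered energy.
pr = performance_ratio(12300, 100, 150)  # 12300 / 15000 = 0.82
```

    Weather normalization of this kind is exactly where the report's "subtleties" live: gaps in the irradiance record or sensor drift bias the denominator directly.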

  16. Quantitative evaluation of waste prevention on the level of small and medium sized enterprises (SMEs).

    PubMed

    Laner, David; Rechberger, Helmut

    2009-02-01

    Waste prevention is a principal means of achieving the goals of waste management and a key element for developing sustainable economies. Small and medium sized enterprises (SMEs) contribute substantially to environmental degradation, often not even being aware of their environmental effects. Therefore, several initiatives have been launched in Austria aimed at supporting waste prevention measures on the level of SMEs. To promote the most efficient projects, they have to be evaluated with respect to their contribution to the goals of waste management. It is the aim of this paper to develop a methodology for evaluating waste prevention measures in SMEs based on their goal orientation. First, conceptual problems of defining and delineating waste prevention activities are briefly discussed. Then an approach to evaluate waste prevention activities with respect to their environmental performance is presented, and benchmarks that allow for efficient use of the available funds are developed. Finally, the evaluation method is applied to a number of former projects and the calculated results are analysed with respect to shortcomings and limitations of the model. It is found that the developed methodology can provide a tool for a more objective and comprehensible evaluation of waste prevention measures.

  17. Examining the Statistical Rigor of Test and Evaluation Results in the Live, Virtual and Constructive Environment

    DTIC Science & Technology

    2011-06-01

    Committee Meeting. 23 June 2008. Bjorkman, Eileen A. and Frank B. Gray. "Testing in a Joint Environment 2004-2008: Findings, Conclusions and... the LVC joint test environment to evaluate system performance and joint mission effectiveness (Bjorkman and Gray 2009a). The LVC battlespace... attack (Bjorkman and Gray 2009b). Figure 3 - JTEM Methodology (Bjorkman 2008). A key INTEGRAL FIRE lesson learned was realizing the need for each

  18. Diabetes-related emotional distress instruments: a systematic review of measurement properties.

    PubMed

    Lee, Jiyeon; Lee, Eun-Hyun; Kim, Chun-Ja; Moon, Seung Hei

    2015-12-01

    The objectives of this study were to identify all available diabetes-related emotional distress instruments and evaluate the evidence regarding their measurement properties to help in the selection of the most appropriate instrument for use in practice and research. A systematic literature search was performed. PubMed, Embase, CINAHL, and PsycINFO were searched systematically for articles on diabetes-related emotional distress instruments. The Consensus-based Standards for the Selection of Health Measurement Instruments checklist was used to evaluate the methodological quality of the identified studies. The quality of results with respect to the measurement properties of each study was evaluated using Terwee's quality criteria. An ancillary meta-analysis was performed. Of the 2345 articles yielded by the search, 19 full-text articles evaluating 6 diabetes-related emotional distress instruments were included in this study. No instrument demonstrated evidence for all measurement properties. The Problem Areas in Diabetes scale (PAID) was the most frequently studied and the best validated of the instruments. Pooled summary estimates of the correlation coefficient between the PAID and serum glycated hemoglobin revealed a positive but weak correlation. No diabetes-related emotional distress instrument demonstrated evidence for all measurement properties. No instrument was better than another, although the PAID was the best validated and is thus recommended for use. Further psychometric studies of the diabetes-related emotional distress instruments with rigorous methodologies are required.
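    Pooled correlation estimates of the kind mentioned above are commonly computed by transforming each study's r to Fisher's z, averaging with inverse-variance weights (n - 3 per study), and back-transforming. A minimal fixed-effect sketch; the (r, n) pairs are hypothetical, not the review's actual data:

```python
import math

def pooled_correlation(pairs):
    """Fixed-effect pooling of Pearson correlations via Fisher's z
    transform, weighting each study by n - 3 (inverse variance of z)."""
    num = den = 0.0
    for r, n in pairs:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform
        w = n - 3
        num += w * z
        den += w
    z_bar = num / den
    return math.tanh(z_bar)                    # back-transform to r

# Hypothetical PAID-vs-HbA1c correlations (r, n) from three studies.
r_pooled = pooled_correlation([(0.25, 120), (0.18, 200), (0.30, 80)])
```

    The pooled value lands between the individual estimates, here in the "positive but weak" range the review describes.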

  19. [The GIPSY-RECPAM model: a versatile approach for integrated evaluation in cardiologic care].

    PubMed

    Carinci, F

    2009-01-01

    The tree-structured methodology applied in the GISSI-PSICOLOGIA project, although developed in the framework of the earliest GISSI studies, represents a powerful tool for analyzing different aspects of cardiologic care. The GISSI-PSICOLOGIA project has delivered a novel methodology based on the joint application of psychometric tools and sophisticated statistical techniques. Its prospective use could allow the building of effective epidemiological models relevant to the prognosis of the cardiologic patient. The various features of the RECPAM method allow versatile use in the framework of modern e-health projects. The study used the Cognitive Behavioral Assessment H Form (CBA-H) psychometric scales. The potential for its future application in the framework of Italian cardiology is considerable, and it is particularly indicated to assist in planning systems for integrated care and routine evaluation of the cardiologic patient.

  20. Applications of cost-effectiveness methodologies in behavioral medicine.

    PubMed

    Kaplan, Robert M; Groessl, Erik J

    2002-06-01

    In 1996, the Panel on Cost-Effectiveness in Health and Medicine developed standards for cost-effectiveness analysis. The standards include the use of a societal perspective, the evaluation of treatments in comparison with the best available alternative (rather than with no care at all), and the expression of health benefits in standardized units. Guidelines for cost accounting were also offered. Among 24,562 references on cost-effectiveness in Medline between 1995 and 2000, only a handful were relevant to behavioral medicine. Only 19 studies published between 1983 and 2000 met criteria for further evaluation. Among the analyses that were reported, only 2 studies were found consistent with the Panel's criteria for high-quality analyses, although more recent studies were more likely to meet methodological standards. There are substantial opportunities to advance behavioral medicine by performing standardized cost-effectiveness analyses.
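    The Panel's requirement that treatments be compared with the best available alternative, with benefits in standardized units, leads directly to the incremental cost-effectiveness ratio (ICER): extra cost per extra unit of benefit, typically a quality-adjusted life year (QALY). A minimal sketch with invented numbers:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra
    QALY of a treatment versus the best available alternative."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly <= 0:
        raise ValueError("new treatment does not add QALYs")
    return d_cost / d_qaly

# Hypothetical behavioral intervention vs. usual care:
# $1,200 extra cost for 0.10 extra QALYs -> $12,000 per QALY gained.
ratio = icer(cost_new=5200, qaly_new=6.10, cost_ref=4000, qaly_ref=6.00)
```

    Comparing against "no care at all" instead of the best alternative shrinks the denominator's baseline and flatters the ratio, which is exactly why the Panel singled this out.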

  1. Systematic design of membership functions for fuzzy-logic control: A case study on one-stage partial nitritation/anammox treatment systems.

    PubMed

    Boiocchi, Riccardo; Gernaey, Krist V; Sin, Gürkan

    2016-10-01

    A methodology is developed to systematically design the membership functions of fuzzy-logic controllers for multivariable systems. The methodology consists of a systematic derivation of the critical points of the membership functions as a function of predefined control objectives. Several constrained optimization problems corresponding to different qualitative operation states of the system are defined and solved to identify, in a consistent manner, the critical points of the membership functions for the input variables. The consistently identified critical points, together with the linguistic rules, determine the long-term reachability of the control objectives by the fuzzy-logic controller. The methodology is highlighted using a single-stage side-stream partial nitritation/anammox reactor as a case study. As a result, a new fuzzy-logic controller for high and stable total nitrogen (TN) removal efficiency is designed. Rigorous simulations are carried out to evaluate and benchmark the performance of the controller. The results demonstrate that the novel control strategy is capable of rejecting long-term influent disturbances and can achieve a stable and high TN removal efficiency. Additionally, the controller was tested against measurement noise levels typical for wastewater sensors and showed robustness. A feedforward-feedback configuration using the present controller would give even better performance. In comparison, a previously developed fuzzy-logic controller using merely expert and intuitive knowledge performed worse. This proved the importance of using a systematic methodology for the derivation of the membership functions for multivariable systems. These results are promising for future applications of the controller in real full-scale plants. Furthermore, the methodology can be used as a tool to help systematically design fuzzy-logic control applications for other biological processes.
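    The "critical points" derived by the constrained optimization are the breakpoints of membership functions such as the triangular one sketched below. The function shape is standard fuzzy-logic machinery; the variable name and point values here are invented placeholders, not the paper's optimized ones:

```python
def triangular(x, a, b, c):
    """Triangular membership function with critical points a <= b <= c:
    degree 0 outside [a, c], rising linearly to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical critical points for an "effluent ammonium is HIGH" set;
# in the paper's approach such points come out of the constrained
# optimization problems, not from expert guesses.
mu = triangular(3.0, a=1.0, b=4.0, c=6.0)  # (3 - 1) / (4 - 1) = 2/3
```

    Shifting a, b, or c moves the controller's operating regions, which is why deriving them from explicit control objectives, rather than intuition, matters for long-term reachability.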

  2. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  3. Instruments to assess patients with rotator cuff pathology: a systematic review of measurement properties.

    PubMed

    Longo, Umile Giuseppe; Saris, Daniël; Poolman, Rudolf W; Berton, Alessandra; Denaro, Vincenzo

    2012-10-01

    The aims of this study were to obtain an overview of the methodological quality of studies on the measurement properties of rotator cuff questionnaires and to describe how well various aspects of the design and statistical analyses of studies on measurement properties are performed. A systematic review of published studies on the measurement properties of rotator cuff questionnaires was performed. Two investigators independently rated the quality of the studies using the Consensus-based Standards for the selection of health Measurement Instruments checklist. This checklist was developed in an international Delphi consensus study. Sixteen studies were included, in which two measurement instruments were evaluated, namely the Western Ontario Rotator Cuff Index and the Rotator Cuff Quality-of-Life Measure. The methodological quality of the included studies was adequate for some properties (construct validity, reliability, responsiveness, internal consistency, and translation) but needs to be improved in other respects. The most important methodological aspects that need to be developed are as follows: measurement error, content validity, structural validity, cross-cultural validity, criterion validity, and interpretability. Considering the importance of adequate measurement properties, it is concluded that, in the field of rotator cuff pathology, there is room for improvement in the methodological quality of studies on measurement properties. Level of evidence: II.

  4. Quantifying biopsychosocial aspects in everyday contexts: an integrative methodological approach from the behavioral sciences

    PubMed Central

    Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K

    2015-01-01

    Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (e.g., variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708

  5. Methodology for evaluation of railroad technology research projects

    DOT National Transportation Integrated Search

    1981-04-01

    This Project memorandum presents a methodology for evaluating railroad research projects. The methodology includes consideration of industry and societal benefits, with special attention given to technical risks, implementation considerations, and po...

  6. Field validation of protocols developed to evaluate in-line mastitis detection systems.

    PubMed

    Kamphuis, C; Dela Rue, B T; Eastwood, C R

    2016-02-01

    This paper reports on a field validation of previously developed protocols for evaluating the performance of in-line mastitis-detection systems. The protocols outlined 2 requirements of these systems: (1) to detect cows with clinical mastitis (CM) promptly and accurately to enable timely and appropriate treatment and (2) to identify cows with high somatic cell count (SCC) to manage bulk milk SCC levels. Gold standard measures, evaluation tests, performance measures, and performance targets were proposed. The current study validated the protocols on commercial dairy farms with automated in-line mastitis-detection systems using both electrical conductivity (EC) and SCC sensor systems that both monitor at whole-udder level. The protocol for requirement 1 was applied on 3 commercial farms. For requirement 2, the protocol was applied on 6 farms; 3 of them had low bulk milk SCC (128 × 10³ cells/mL) and were the same farms as used for field evaluation of requirement 1. Three farms with high bulk milk SCC (270 × 10³ cells/mL) were additionally enrolled. The field evaluation methodology and results were presented at a workshop including representation from 7 international suppliers of in-line mastitis-detection systems. Feedback was sought on the acceptance of standardized performance evaluation protocols and recommended refinements to the protocols. Although the methodology for requirement 1 was relatively labor intensive and required organizational skills over an extended period, no major issues were encountered during the field validation of both protocols. The validation, thus, proved the protocols to be practical. Also, no changes to the data collection process were recommended by the technology supplier representatives.
However, 4 recommendations were made to refine the protocols: inclusion of an additional analysis that ignores small (low-density) clot observations in the definition of CM, extension of the time window from 4 to 5 milkings for timely alerts for CM, setting a maximum number of 10 milkings for the time window to detect a CM episode, and presentation of sensitivity for a larger range of false alerts per 1,000 milkings replacing minimum performance targets. The recommended refinements are discussed with suggested changes to the original protocols. The information presented is intended to inform further debate toward achieving international agreement on standard protocols to evaluate performance of in-line mastitis-detection systems.
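    The protocol's two headline performance measures, sensitivity for clinical-mastitis episodes and false alerts per 1,000 milkings, reduce to simple counts once alerts are matched to gold-standard cases. A sketch with hypothetical counts, not data from the validation farms:

```python
def alert_performance(true_pos, missed_cases, false_alerts, n_milkings):
    """Sensitivity of clinical-mastitis alerts and the false-alert
    rate per 1,000 milkings used as the operating-point axis in
    this style of evaluation."""
    sensitivity = true_pos / (true_pos + missed_cases)
    fa_rate = 1000.0 * false_alerts / n_milkings
    return sensitivity, fa_rate

# Hypothetical season: 18 of 24 CM episodes alerted within the time
# window, 45 false alerts over 30,000 milkings.
sens, fa = alert_performance(18, 6, 45, 30000)  # 0.75 and 1.5
```

    Reporting sensitivity across a range of false-alert rates, as the fourth recommendation suggests, amounts to tracing this pair of numbers at several alert thresholds rather than quoting a single operating point.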

  7. A comprehensive evaluation of tyrosol and hydroxytyrosol derivatives in extra virgin olive oil by microwave-assisted hydrolysis and HPLC-MS/MS.

    PubMed

    Bartella, Lucia; Mazzotti, Fabio; Napoli, Anna; Sindona, Giovanni; Di Donna, Leonardo

    2018-03-01

    A rapid and reliable method to assay the total amount of tyrosol and hydroxytyrosol derivatives in extra virgin olive oil has been developed. The methodology is intended to establish the nutritional quality of this edible oil, addressing recent international health claim legislation (European Commission Regulation No. 432/2012) and raising the classification of extra virgin olive oil to the status of a nutraceutical. The method is based on high-performance liquid chromatography coupled with tandem mass spectrometry and labeled internal standards, preceded by a fast hydrolysis step performed with the aid of microwaves under acid conditions. The overall process is particularly time-saving, much shorter than any methodology previously reported. The developed approach combines rapidity and accuracy: recovery values were near 100% on different fortified vegetable oils, while the RSD% values, calculated from repeatability and reproducibility experiments, were in all cases under 7%. Graphical abstract: Schematic of the methodology applied to the determination of tyrosol and hydroxytyrosol ester conjugates.
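    Recovery and RSD%, the validation figures of merit quoted above, are computed from replicate analyses of fortified samples. A sketch with invented replicate values (not the paper's data):

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Mean recovery (%) and relative standard deviation (%) for a set
    of fortified replicates, the two standard figures of merit in
    analytical method validation."""
    recoveries = [100.0 * m / spiked for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Hypothetical replicates: oil fortified at 10 mg/kg, five measurements.
mean_rec, rsd = recovery_and_rsd([9.6, 10.1, 9.9, 10.3, 9.8], 10.0)
```

    A mean recovery near 100% with a single-digit RSD is what "a mix of rapidity and accuracy" cashes out to numerically.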

  8. Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images

    NASA Astrophysics Data System (ADS)

    Levesque, M.

    Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update object orbital parameters. The real satellite positions are provided by the detection of the satellite streaks in the astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology comprises several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
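    The matched-filter step in a pipeline like the one above slides a template of the expected streak profile across the data and looks for correlation peaks. A much-simplified one-dimensional sketch (real streak detection operates on 2-D images with templates at many orientations, after background and star removal):

```python
def matched_filter_1d(signal, template):
    """Sliding correlation of a noisy signal with a known streak
    template; the peak response marks the most likely position
    (1-D analogue of a streak matched filter)."""
    n, m = len(signal), len(template)
    scores = []
    for i in range(n - m + 1):
        scores.append(sum(signal[i + j] * template[j] for j in range(m)))
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Hypothetical row of background-subtracted pixels with a faint
# 3-pixel streak starting at index 5.
row = [0.1, 0.0, 0.2, 0.1, 0.0, 0.9, 1.0, 0.8, 0.1, 0.0]
pos, score = matched_filter_1d(row, [1.0, 1.0, 1.0])  # pos == 5
```

    Thresholding the peak score against the noise background is where the trade-off between faint-object sensitivity and false alarms, evaluated with the simulated data, is made.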

  9. Improving environmental impact and cost assessment for supplier evaluation

    NASA Astrophysics Data System (ADS)

    Beucker, Severin; Lang, Claus

    2004-02-01

    Improving a company's environmental and financial performance necessitates the evaluation of environmental impacts deriving from the production and cost effects of corporate actions. These effects have to be made transparent and concrete targets have to be developed. Such an evaluation has to be done on a regular basis but with limited expenses. To achieve this, different instruments of environmental controlling such as LCA and environmental performance indicators have to be combined with methods from cost accounting. Within the research project CARE (Computer Aided Resource Efficiency Accounting for Medium-Sized Enterprises), the method Resource Efficiency Accounting (REA) is used to give the participating companies new insights into hidden costs and environmental effects of their production and products. The method combines process based cost accounting with environmental impact assessment methodology and offers results that can be integrated into a company's environmental controlling system and business processes like cost accounting, supplier assessment, etc. Much of the data necessary for the combined assessment can be available within a company's IT system and therefore can be efficiently used for the assessment process. The project CARE puts a strong focus on the use of company data and information systems for the described assessment process and offers a methodological background for the evaluation and the structuring of such data. Besides the general approach of the project CARE the paper will present results from a case study in which the described approach is used for the evaluation of suppliers.

  10. COTS Ceramic Chip Capacitors: An Evaluation of the Parts and Assurance Methodologies

    NASA Technical Reports Server (NTRS)

    Brusse, Jay A.; Sampson, Michael J.

    2004-01-01

    Commercial-Off-The-Shelf (COTS) multilayer ceramic chip capacitors (MLCCs) are continually evolving to reduce physical size and increase volumetric efficiency. Designers of high reliability aerospace and military systems are attracted to these attributes of COTS MLCCs and would like to take advantage of them while maintaining the high standards for long-term reliable operation they are accustomed to when selecting military qualified established reliability (MIL-ER) MLCCs. However, MIL-ER MLCCs are not available in the full range of small chip sizes with high capacitance as found in today's COTS MLCCs. The objectives for this evaluation were to assess the long-term performance of small case size COTS MLCCs and to identify effective, lower-cost product assurance methodologies. Fifteen (15) lots of COTS X7R dielectric MLCCs from four (4) different manufacturers and two (2) MIL-ER BX dielectric MLCCs from two (2) of the same manufacturers were evaluated. Both 0805 and 0402 chip sizes were included. Several voltage ratings were tested, ranging from a high of 50 volts to a low of 6.3 volts. The evaluation consisted of a comprehensive screening and qualification test program based upon MIL-PRF-55681 (i.e., voltage conditioning, thermal shock, moisture resistance, 2000-hour life test, etc.). In addition, several lot characterization tests were performed, including Destructive Physical Analysis (DPA), Highly Accelerated Life Test (HALT) and Dielectric Voltage Breakdown Strength. The data analysis included a comparison of the 2000-hour life test results (used as a metric for long-term performance) relative to the screening and characterization test results. Results of this analysis indicate that the long-term life performance of COTS MLCCs is variable -- some lots perform well, some lots perform poorly.
DPA and HALT were found to be promising lot characterization tests to identify substandard COTS MLCC lots prior to conducting more expensive screening and qualification tests. The results indicate that lot-specific screening and qualification are still recommended for high reliability applications. One significant and concerning observation is that MIL-type voltage conditioning (100 hours at twice rated voltage, 125 C) was not an effective screen in removing infant mortality parts for the particular lots of COTS MLCCs evaluated.

  11. Development of analytical methodologies to assess recalcitrant pesticide bioremediation in biobeds at laboratory scale.

    PubMed

    Rivero, Anisleidy; Niell, Silvina; Cerdeiras, M Pía; Heinzen, Horacio; Cesio, María Verónica

    2016-06-01

    To assess recalcitrant pesticide bioremediation it is necessary to gradually increase the complexity of the biological system used in order to design an effective biobed assembly. Each step towards this effective biobed design needs a suitable, validated analytical methodology that allows a correct evaluation of the dissipation and bioconversion. Low recovery yielding methods could give a false idea of a successful biodegradation process. To address this situation, different methods were developed and validated for the simultaneous determination of endosulfan, its main three metabolites, and chlorpyrifos in increasingly complex matrices where the bioconvertor basidiomycete Abortiporus biennis could grow. The matrices were culture media, bran, and finally a laboratory biomix composed of bran, peat and soil. The methodology for the analysis of the first evaluated matrix has already been reported. The methodologies developed for the other two systems are presented in this work. The targeted analytes were extracted from fungi growing over bran in semisolid media YNB (Yeast Nitrogen Based) with acetonitrile using shaker-assisted extraction. The salting-out step was performed with MgSO4 and NaCl, and the extracts analyzed by GC-ECD. The best methodology was fully validated for all the evaluated analytes at 1 and 25 mg kg(-1), yielding recoveries between 72% and 109% and RSDs <11% in all cases. The application of this methodology proved that A. biennis is able to dissipate 94% of endosulfan and 87% of chlorpyrifos after 90 days. Having assessed that A. biennis growing over bran can metabolize the studied pesticides, the next step was the development and validation of an analytical procedure to evaluate the analytes in a laboratory-scale biobed composed of 50% bran, 25% peat and 25% soil together with fungal mycelium.
From the different procedures assayed, only ultrasound assisted extraction with ethyl acetate allowed recoveries between 80% and 110% with RSDs <18%. Linearity, recovery, precision, matrix effect and LODs/LOQs of each method were studied for all the analytes: endosulfan isomers (α & β) and its metabolites (endosulfan sulfate, ether and diol) as well as for chlorpyrifos. In the first laboratory evaluation of these biobeds endosulfan was bioconverted up to 87% and chlorpyrifos more than 79% after 27 days. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Early warning systems for the management of chronic heart failure: a systematic literature review of cost-effectiveness models.

    PubMed

    Albuquerque De Almeida, Fernando; Al, Maiwenn; Koymans, Ron; Caliskan, Kadir; Kerstens, Ankie; Severens, Johan L

    2018-04-01

    Describing the general and methodological characteristics of decision-analytical models used in the economic evaluation of early warning systems for the management of chronic heart failure patients and performing a quality assessment of their methodological characteristics is expected to provide concise and useful insight to inform the future development of decision-analytical models in the field of heart failure management. Areas covered: The literature on decision-analytical models for the economic evaluation of early warning systems for the management of chronic heart failure patients was systematically reviewed. Nine electronic databases were searched through the combination of synonyms for heart failure and sensitive filters for cost-effectiveness and early warning systems. Expert commentary: The retrieved models show some variability with regards to their general study characteristics. Overall, they display satisfactory methodological quality, even though some points could be improved, namely on the consideration and discussion of any competing theories regarding model structure and disease progression, identification of key parameters and the use of expert opinion, and uncertainty analyses. A comprehensive definition of early warning systems and further research under this label should be pursued. To improve the transparency of economic evaluation publications, authors should make available detailed technical information regarding the published models.

  13. Evaluation of Structural Robustness against Column Loss: Methodology and Application to RC Frame Buildings

    PubMed Central

    Bao, Yihai; Main, Joseph A.; Noh, Sam-Young

    2017-01-01

    A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness. PMID:28890599
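The robustness metric defined above reduces to a simple computation: divide each column-loss scenario's ultimate capacity by the applicable service-level gravity load, then take the minimum over all scenarios. A minimal sketch; the capacities and load below are hypothetical illustrations, not values from the paper:

```python
def robustness_index(ultimate_capacities, service_load):
    """Minimum normalized ultimate capacity over all column-removal scenarios.

    ultimate_capacities: ultimate load capacity of the system under sudden
    loss of each column (same units as service_load). A minimum value >= 1.0
    implies the structure can sustain service-level gravity loading in every
    column-loss scenario considered.
    """
    return min(c / service_load for c in ultimate_capacities)

# Hypothetical capacities (kN) for four column-removal scenarios
capacities = [5200.0, 4800.0, 6100.0, 5500.0]
service_load = 4000.0  # applicable service-level gravity loading (kN)
print(robustness_index(capacities, service_load))  # 1.2
```

The governing scenario is simply the one with the smallest normalized capacity, which is why a single number can rank buildings (here the hypothetical 4800 kN scenario governs).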

  14. Methodologic European external quality assurance for DNA sequencing: the EQUALseq program.

    PubMed

    Ahmad-Nejad, Parviz; Dorn-Beineke, Alexandra; Pfeiffer, Ulrike; Brade, Joachim; Geilenkeuser, Wolf-Jochen; Ramsden, Simon; Pazzagli, Mario; Neumaier, Michael

    2006-04-01

    DNA sequencing is a key technique in molecular diagnostics, but to date no comprehensive methodologic external quality assessment (EQA) programs have been instituted. Between 2003 and 2005, the European Union funded, as specific support actions, the EQUAL initiative to develop methodologic EQA schemes for genotyping (EQUALqual), quantitative PCR (EQUALquant), and sequencing (EQUALseq). Here we report on the results of the EQUALseq program. The participating laboratories received a 4-sample set comprising 2 DNA plasmids, a PCR product, and a finished sequencing reaction to be analyzed. Data and information from detailed questionnaires were uploaded online and evaluated by use of a scoring system for technical skills and proficiency of data interpretation. Sixty laboratories from 21 European countries registered, and 43 participants (72%) returned data and samples. Capillary electrophoresis was the predominant platform (n = 39; 91%). The median contiguous correct sequence stretch was 527 nucleotides with considerable variation in quality of both primary data and data evaluation. The association between laboratory performance and the number of sequencing assays/year was statistically significant (P <0.05). Interestingly, more than 30% of participants neither added comments to their data nor made efforts to identify the gene sequences or mutational positions. Considerable variations exist even in a highly standardized methodology such as DNA sequencing. Methodologic EQAs are appropriate tools to uncover strengths and weaknesses in both technique and proficiency, and our results emphasize the need for mandatory EQAs. The results of EQUALseq should help improve the overall quality of molecular genetics findings obtained by DNA sequencing.

  15. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
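The moisture-maximization step at the core of the methodology scales an observed (or simulated) snowstorm by the ratio of the maximum precipitable water to the storm's precipitable water. A minimal sketch with invented values; the paper's non-stationary treatment of the monthly maximum precipitable water and its ratio threshold are not reproduced here:

```python
def maximized_snowfall(storm_snowfall, storm_pw, max_pw):
    """Moisture maximization of a snowstorm.

    storm_snowfall: storm snow water equivalent (e.g., mm)
    storm_pw: precipitable water during the storm (mm)
    max_pw: (monthly) maximum precipitable water (mm)

    Real applications typically cap the maximization ratio; no cap is
    applied in this illustration.
    """
    return storm_snowfall * (max_pw / storm_pw)

# Hypothetical storm: 45 mm SWE with 12 mm precipitable water,
# maximized against an 18 mm monthly maximum precipitable water
print(maximized_snowfall(45.0, 12.0, 18.0))  # 67.5
```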

  16. Quality indicators for hip fracture care, a systematic review.

    PubMed

    Voeten, S C; Krijnen, P; Voeten, D M; Hegeman, J H; Wouters, M W J M; Schipper, I B

    2018-05-17

    Quality indicators are used to measure quality of care and enable benchmarking. An overview of all existing hip fracture quality indicators is lacking. The primary aim was to identify quality indicators for hip fracture care reported in literature, hip fracture audits, and guidelines. The secondary aim was to compose a set of methodologically sound quality indicators for the evaluation of hip fracture care in clinical practice. A literature search according to the PRISMA guidelines and an internet search were performed to identify hip fracture quality indicators. The indicators were subdivided into process, structure, and outcome indicators. The methodological quality of the indicators was judged using the Appraisal of Indicators through Research and Evaluation (AIRE) instrument. For structure and process indicators, the construct validity was assessed. Sixteen publications, nine audits and five guidelines were included. In total, 97 unique quality indicators were found: 9 structure, 63 process, and 25 outcome indicators. Since detailed methodological information about the indicators was lacking, the AIRE instrument could not be applied. Seven indicators correlated with an outcome measure. A set of nine quality indicators was extracted from the literature, audits, and guidelines. Many quality indicators are described and used. Not all of them correlate with outcomes of care and have been assessed methodologically. As methodological evidence is lacking, we recommend the extracted set of nine indicators to be used as the starting point for further clinical research. Future research should focus on assessing the clinimetric properties of the existing quality indicators.

  17. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

    PubMed Central

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
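One of the spike-train generation techniques named above, rate-based Poisson spike generation, can be sketched in a few lines. The pixel-to-rate mapping and the 100 Hz maximum rate below are illustrative assumptions, not parameters from the paper:

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=0):
    """Rate-based Poisson spike generation.

    In each time bin of length dt, a spike occurs independently with
    probability rate_hz * dt (a good approximation while rate_hz * dt << 1).
    Returns the spike times in seconds.
    """
    rng = random.Random(seed)
    return [t * dt
            for t in range(int(duration_s / dt))
            if rng.random() < rate_hz * dt]

# Hypothetical encoding: a pixel of intensity 200/255 mapped linearly
# onto a 100 Hz maximum firing rate, simulated for one second
spikes = poisson_spike_train(rate_hz=100.0 * 200 / 255, duration_s=1.0)
```

The expected spike count is simply rate x duration (about 78 here); the actual count fluctuates around it from seed to seed.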

  18. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    PubMed

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.

  19. Evaluation of Inter-Mountain Labs infrasound sensors : July 2007.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Darren M.

    2007-10-01

    Sandia National Laboratories has tested and evaluated three Inter Mountain Labs infrasound sensors. The test results included in this report were in response to static and tonal-dynamic input signals. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and 1241 for Analog to Digital Converters; others were designed by Sandia specifically for infrasound application evaluation and for supplementary criteria not addressed in the IEEE standards. The objective of this work was to evaluate the overall technical performance of the Inter Mountain Labs (IML) infrasound sensor model SS. The results of this evaluation were compared only to relevant noise models, owing to a lack of manufacturer's documentation on the sensors under test prior to testing. The tests selected for this system were chosen to demonstrate different performance aspects of the components under test.

  20. Evaluation Framework and Analyses for Thermal Energy Storage Integrated with Packaged Air Conditioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kung, F.; Deru, M.; Bonnema, E.

    2013-10-01

    Few third-party guidance documents or tools are available for evaluating thermal energy storage (TES) integrated with packaged air conditioning (AC), as this type of TES is relatively new compared to TES integrated with chillers or hot water systems. To address this gap, researchers at the National Renewable Energy Laboratory conducted a project to improve the ability of potential technology adopters to evaluate TES technologies. Major project outcomes included: development of an evaluation framework to describe key metrics, methodologies, and issues to consider when assessing the performance of TES systems integrated with packaged AC; application of multiple concepts from the evaluation framework to analyze performance data from four demonstration sites; and production of a new simulation capability that enables modeling of TES integrated with packaged AC in EnergyPlus. This report includes the evaluation framework and analysis results from the project.

  1. Evaluating performance of stormwater sampling approaches using a dynamic watershed model.

    PubMed

    Ackerman, Drew; Stein, Eric D; Ritter, Kerry J

    2011-09-01

    Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median ("MAD") of the relative error (precision), and (3) the percentage of storms where sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods both in terms of accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings.
The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
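The three performance measures described above can be sketched in a few lines of code: signed relative errors per storm, their median (accuracy), the MAD about that median (precision), and the fraction of storms within 10% of the true EMC. All EMC values below are invented for illustration:

```python
import statistics

def relative_errors(true_emcs, sampled_emcs):
    # signed relative error of each virtually sampled EMC vs. the modeled truth
    return [(s - t) / t for t, s in zip(true_emcs, sampled_emcs)]

def accuracy_and_precision(true_emcs, sampled_emcs):
    errs = relative_errors(true_emcs, sampled_emcs)
    med = statistics.median(errs)                        # (1) accuracy
    mad = statistics.median(abs(e - med) for e in errs)  # (2) precision (MAD)
    within = sum(abs(e) <= 0.10 for e in errs) / len(errs)  # (3) within 10%
    return med, mad, within

# Hypothetical modeled ("true") and virtually sampled EMCs (mg/L), five storms
true_emcs = [1.0, 2.0, 0.5, 3.0, 1.5]
sampled_emcs = [1.05, 1.9, 0.5, 3.3, 1.4]
med, mad, within = accuracy_and_precision(true_emcs, sampled_emcs)
```

A near-zero median with a small MAD indicates an unbiased, precise sampling method; a consistently negative median is the signature of systematic underestimation, as reported for time-paced sampling.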

  2. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
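The general idea of propagating constituent-level uncertainty to a structural failure probability can be illustrated with a generic Monte Carlo sketch. This is not the IPACS code or the paper's multi-factor interaction equation; the distributions, degradation factor, and limit state below are hypothetical:

```python
import random

def failure_probability(n_samples=100_000, seed=42):
    """Monte Carlo estimate of P(applied stress > degraded strength).

    Hypothetical model: pristine laminate strength is normally distributed,
    degraded by a random load/environment factor; applied stress is also
    random. Failure occurs when stress exceeds the degraded strength.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(mu=500.0, sigma=40.0)  # MPa, pristine strength
        degradation = rng.uniform(0.85, 1.0)        # environment/load degradation
        stress = rng.gauss(mu=300.0, sigma=30.0)    # MPa, applied stress
        if stress > strength * degradation:
            failures += 1
    return failures / n_samples

pf = failure_probability()
```

In a sizing loop, laminate configurations would be adjusted until the estimated failure probability drops below the design target, which is the multi-level search the paper automates.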

  3. A realist evaluation of the management of a well-performing regional hospital in Ghana

    PubMed Central

    2010-01-01

    Background Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. Methods We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. Results We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. 
Conclusion This case suggests that a well-balanced HRM bundle can stimulate organisational commitment of health workers. Such practices can be implemented even with narrow decision spaces. Realist evaluation provides an appropriate approach to increase the usefulness of case studies to managers and policymakers. PMID:20100330

  4. A realist evaluation of the management of a well-performing regional hospital in Ghana.

    PubMed

    Marchal, Bruno; Dedzo, McDamien; Kegels, Guy

    2010-01-25

    Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. 
This case suggests that a well-balanced HRM bundle can stimulate organisational commitment of health workers. Such practices can be implemented even with narrow decision spaces. Realist evaluation provides an appropriate approach to increase the usefulness of case studies to managers and policymakers.

  5. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  6. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
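The quantification step can be sketched as follows: a nominal error probability is scaled by the product of performance-shaping-factor multipliers, here imagined as derived from violated usability heuristics. The multiplier values are hypothetical; the adjustment formula is the SPAR-H form that keeps the result below 1.0 when the composite multiplier is large:

```python
def usability_error_probability(nominal_hep, multipliers):
    """Scale a nominal error probability by performance-shaping-factor
    multipliers, SPAR-H style.

    In this usability adaptation the multipliers would come from heuristic
    evaluation findings (hypothetical values in the example below). The
    SPAR-H adjustment NHEP * PSF / (NHEP * (PSF - 1) + 1) bounds the
    result at 1.0.
    """
    composite = 1.0
    for m in multipliers:
        composite *= m
    return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)

# Hypothetical: nominal error probability 0.01, two violated heuristics
# contributing multipliers of 5 and 2
uep = usability_error_probability(0.01, [5.0, 2.0])
```

The resulting UEPs can then be crossed with a consequence matrix to rank usability issues by risk rather than by count alone.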

  7. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
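One simple way to learn efficiently about the parameters of a performance model is sequential query selection under a Bayesian model. The sketch below is an assumption-laden toy, not the MSEE evaluation itself: it keeps a Beta posterior on the system's success probability for each query type and issues the next query where the posterior is most uncertain, using posterior variance as a crude stand-in for expected information gain:

```python
def beta_variance(a, b):
    # variance of a Beta(a, b) distribution
    return a * b / ((a + b) ** 2 * (a + b + 1))

def next_query(posteriors):
    """Pick the query type whose success-probability posterior is most
    uncertain. posteriors: {query_type: (alpha, beta)} Beta parameters."""
    return max(posteriors, key=lambda q: beta_variance(*posteriors[q]))

def update(posteriors, query_type, success):
    # Bayesian update: a success increments alpha, a failure increments beta
    a, b = posteriors[query_type]
    posteriors[query_type] = (a + 1, b) if success else (a, b + 1)

# Hypothetical query types with prior + observed counts folded in
posteriors = {"detect": (9, 2), "track": (3, 3), "count": (1, 1)}
print(next_query(posteriors))  # count -- the least-tested query type
```

A fuller treatment would score candidate queries by expected reduction in posterior entropy, which is the experiment-design principle the abstract invokes.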

  8. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  9. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    NASA Astrophysics Data System (ADS)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse grained Inconel 625 which is representative of future power plant components has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A, B, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying it to three FMC imaging algorithms: Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data is captured in pulse-echo mode using 64 element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular test case.
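At a given amplitude threshold, the POD/PFA statistics reduce to simple counts over images with and without a flaw. A minimal sketch (all amplitude values hypothetical, not taken from the study) is:

```python
def pod_pfa(flaw_scores, noise_scores, threshold):
    """Probability of Detection and Probability of False Alarm at a given
    image-amplitude threshold (illustrative, not the paper's exact pipeline)."""
    pod = sum(s >= threshold for s in flaw_scores) / len(flaw_scores)
    pfa = sum(s >= threshold for s in noise_scores) / len(noise_scores)
    return pod, pfa

# Hypothetical peak amplitudes from images containing a flaw vs. grain noise only.
flaw_scores = [0.9, 0.8, 0.75, 0.6, 0.95, 0.7]
noise_scores = [0.3, 0.5, 0.45, 0.2, 0.55, 0.4]

# Sweep thresholds to trace a ROC-style POD/PFA operating curve.
for t in [0.4, 0.5, 0.6, 0.7]:
    print(t, pod_pfa(flaw_scores, noise_scores, t))
```

Sweeping the threshold and plotting POD against PFA yields the operating curve on which imaging algorithms such as TFM, PCI, and DORT MSF can be compared.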

  10. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
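The simulation idea above, perturbing a truth layer with a spatial uncertainty model and then scoring feature matching against it, can be sketched in a few lines. This toy version uses Gaussian positional noise on point features and nearest-neighbour matching; the paper's uncertainty model and conflation tools are considerably richer:

```python
import random
import math

def perturb(points, sigma, rng):
    """Simulate a second feature layer by adding Gaussian positional noise,
    a toy version of spatial-uncertainty simulation."""
    return [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)) for x, y in points]

def match_rate(layer_a, layer_b, tolerance):
    """Fraction of features in layer_a whose nearest neighbour in layer_b
    lies within the matching tolerance."""
    hits = 0
    for ax, ay in layer_a:
        d = min(math.hypot(ax - bx, ay - by) for bx, by in layer_b)
        if d <= tolerance:
            hits += 1
    return hits / len(layer_a)

rng = random.Random(42)
truth = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(50)]
noisy = perturb(truth, sigma=1.0, rng=rng)
print(round(match_rate(truth, noisy, tolerance=5.0), 2))
```

Because the simulated layer is derived from known truth, the match rate can be scored automatically, which is exactly what removes the laborious manual truthing step.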

  11. Neurofeedback as supplementary training for optimizing athletes' performance: A systematic review with implications for future research.

    PubMed

    Mirifar, Arash; Beckmann, Jürgen; Ehrlenspiel, Felix

    2017-04-01

    Self-regulation plays an important role in enhancing human performance. Neurofeedback is a promising noninvasive approach for modifying human brain oscillation and can be utilized in developing skills for self-regulation of brain activity. So far, the effectiveness of neurofeedback has been evaluated with regard to not only its application in clinical populations but also the enhancement of performance in general. However, reviews of the application of neurofeedback training in the sports domain are absent, although this application goes back to 1991, when it was first applied in archery. Sport scientists have shown an increasing interest in this topic in recent years. This article provides an overview of empirical studies examining the effects of neurofeedback in sports and evaluates these studies against cardinal and methodological criteria. Furthermore, it includes guidelines and suggestions for future evaluations of neurofeedback training in sports. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. [Municipalities Stratification for Health Performance Evaluation].

    PubMed

    Calvo, Maria Cristina Marino; Lacerda, Josimari Telino de; Colussi, Claudia Flemming; Schneider, Ione Jayce Ceola; Rocha, Thiago Augusto Hernandes

    2016-01-01

    To propose and present a stratification of Brazilian municipalities into homogeneous groups for evaluation studies of health management performance. This was a methodological study, with selected indicators that classify municipalities according to conditions that influence health management and population size; data for the year 2010 were collected from demographic and health databases; correlation tests and factor analysis were used. Seven strata were identified (Large-sized; Medium-sized with favorable, regular, or unfavorable influences; and Small-sized with favorable, regular, or unfavorable influences); there was a concentration of municipalities with favorable influences in strata with better purchasing power and funding, as well as a concentration of municipalities with unfavorable influences in the North and Northeast regions. The proposed classification grouped similar municipalities regarding factors that influence health management, which allowed the identification of comparable groups of municipalities, setting up a consistent alternative for performance evaluation studies.

  13. Nonlinear Time Domain Seismic Soil-Structure Interaction (SSI) Deep Soil Site Methodology Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    Currently the Department of Energy (DOE) and the nuclear industry perform seismic soil-structure interaction (SSI) analysis using equivalent linear numerical analysis tools. For lower levels of ground motion, these tools should produce reasonable in-structure response values for evaluation of existing and new facilities. For larger levels of ground motion these tools likely overestimate the in-structure response (and therefore structural demand) since they do not consider geometric nonlinearities (such as gaping and sliding between the soil and structure) and are limited in their ability to model nonlinear soil behavior. The current equivalent linear SSI (SASSI) analysis approach either joins the soil and structure together in both tension and compression or releases the soil from the structure for both tension and compression. It also makes linear approximations for material nonlinearities and generalizes energy absorption with viscous damping. This produces the potential for inaccurately establishing where the structural concerns exist and/or inaccurately establishing the amplitude of the in-structure responses. Seismic hazard curves at nuclear facilities have continued to increase over the years as more information has been developed on seismic sources (i.e., faults), additional information has been gathered on seismic events, and additional research has been performed to determine local site effects. Seismic hazard curves are used to develop design basis earthquakes (DBEs) that are used to evaluate nuclear facility response. As the seismic hazard curves increase, the input ground motions (DBEs) used to numerically evaluate nuclear facility response increase, causing larger in-structure response. As ground motions increase, so does the importance of including nonlinear effects in numerical SSI models. To include material nonlinearity in the soil and geometric nonlinearity using contact (gaping and sliding), it is necessary to develop a nonlinear time domain methodology.
This methodology will be known as NonLinear Soil-Structure Interaction (NLSSI). In general, NLSSI analysis should provide a more accurate representation of the seismic demands on nuclear facilities, their systems, and their components. INL, in collaboration with a Nuclear Power Plant Vendor (NPP-V), will develop a generic Nuclear Power Plant (NPP) structural design to be used in development of the methodology and for comparison with SASSI. This generic NPP design has been evaluated for the INL soil site because of the ease of access and the quality of the site-specific data. It is now being evaluated for a second site at Vogtle, which is located approximately 15 miles east-northeast of Waynesboro, Georgia, adjacent to the Savannah River. The Vogtle site consists of many soil layers spanning down to a depth of 1,058 feet. Two soil sites were chosen in order to demonstrate the methodology across multiple soil sites. The project will drive the models (soil and structure) using acceleration time histories with successively increasing amplitudes. The models will be run in time domain codes such as ABAQUS, LS-DYNA, and/or ESSI and compared with the same models run in SASSI. The project is focused on developing and documenting a method for performing time domain, nonlinear seismic soil-structure interaction (SSI) analysis. Development of this method will provide the Department of Energy (DOE) and industry with another tool to perform seismic SSI analysis.

  14. Algorithms for detecting and predicting influenza outbreaks: metanarrative review of prospective evaluations

    PubMed Central

    Spreco, A; Timpka, T

    2016-01-01

    Objectives Reliable monitoring of influenza seasons and pandemic outbreaks is essential for response planning, but compilations of reports on detection and prediction algorithm performance in influenza control practice are largely missing. The aim of this study is to perform a metanarrative review of prospective evaluations of influenza outbreak detection and prediction algorithms, restricted to settings where authentic surveillance data have been used. Design The study was performed as a metanarrative review. An electronic literature search was performed, papers were selected, and qualitative and semiquantitative content analyses were conducted. For data extraction and interpretation, researcher triangulation was used for quality assurance. Results Eight prospective evaluations were found that used authentic surveillance data: three studies evaluating detection and five studies evaluating prediction. The methodological perspectives and experiences from the evaluations were found to have been reported in narrative formats representing biodefence informatics and health policy research, respectively. The biodefence informatics narrative, with its emphasis on the verification of technically and mathematically sound algorithms, constituted a large part of the reporting. Four evaluations were reported as health policy research narratives, thus formulated in a manner that allows the results to qualify as policy evidence. Conclusions Awareness of the narrative format in which results are reported is essential when interpreting algorithm evaluations from an infectious disease control practice perspective. PMID:27154479

  15. Organ Donation European Quality System: ODEQUS project methodology.

    PubMed

    Manyalich, M; Guasch, X; Gomez, M P; Páez, G; Teixeira, L

    2013-01-01

    Differences in the number of organ donors among hospitals cannot be explained only by the number of intensive care unit beds used or neurologic patients treated. The figures obtained are influenced by the organizational structure of the donation process and how efficient it is. The Organ Donation European Quality System (ODEQUS) is a 3-year project (from October 2010 to September 2013) co-financed by the European Agency for Health and Consumers (EAHC20091108) which aims to define a methodology to evaluate organ procurement performance at the hospital level. ODEQUS's specific objectives are to identify quality criteria and to develop quality indicators in three types of organ donation (after brain death, after cardiac death, and living donation). Those tools will be useful for hospitals' self-assessment as well as for developing an international auditing model. A consortium has been established involving 14 associated partners from Austria, Croatia, France, Germany, Italy, Poland, Portugal, Romania, Spain, Sweden, and the United Kingdom, as well as five collaborating partners from Greece, Hungary, Malta, Slovenia, and Turkey. The project has been established in three steps: 1) Design of a survey about the use of quality tools in a wide sample of European hospitals; 2) Development of quality criteria and quality indicators by the project experts. The main fields considered have been organizational structures, clinical procedures, and outcomes; and 3) Elaboration of an evaluation system to test the quality indicators in 11 European hospitals. Two types of training have been designed and performed: one concerns the development of quality criteria and quality indicators, whereas the other is focused on how to use evaluation tools. Following this methodology, the project has so far identified 131 quality criteria and developed 31 quality indicators. Currently, the quality indicators are being tested in 11 selected hospitals. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Health economic evaluation: important principles and methodology.

    PubMed

    Rudmik, Luke; Drummond, Michael

    2013-06-01

    To discuss health economic evaluation and improve the understanding of common methodology. This article discusses the methodology for the following types of economic evaluations: cost-minimization, cost-effectiveness, cost-utility, cost-benefit, and economic modeling. Topics include health-state utility measures, the quality-adjusted life year (QALY), uncertainty analysis, discounting, decision tree analysis, and Markov modeling. Economic evaluation is the comparative analysis of alternative courses of action in terms of both their costs and consequences. With increasing health care expenditure and limited resources, it is important for physicians to consider the economic impact of their interventions. Understanding common methodology involved in health economic evaluation will improve critical appraisal of the literature and optimize future economic evaluations. Copyright © 2012 The American Laryngological, Rhinological and Otological Society, Inc.
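Two of the building blocks named above, discounting and the incremental cost-effectiveness ratio (ICER), are simple arithmetic and can be shown directly. The costs and QALY values below are hypothetical, chosen only to exercise the formulas:

```python
def discounted(values, rate):
    """Present value of a yearly stream of costs or QALYs (year 0 undiscounted)."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(values))

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    (per QALY when effects are utility-weighted life years)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical 3-year costs (currency units) and QALYs for two interventions,
# both discounted at 3% per year.
rate = 0.03
c_new = discounted([12000, 2000, 2000], rate)
c_old = discounted([5000, 1000, 1000], rate)
q_new = discounted([0.80, 0.78, 0.76], rate)
q_old = discounted([0.70, 0.65, 0.60], rate)
print(round(icer(c_new, q_new, c_old, q_old)))
```

The resulting cost-per-QALY figure is what gets compared against a willingness-to-pay threshold; uncertainty analysis then asks how stable that comparison is under plausible variation in the inputs.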

  17. Evaluation of the HARDMAN comparability methodology for manpower, personnel and training

    NASA Technical Reports Server (NTRS)

    Zimmerman, W.; Butler, R.; Gray, V.; Rosenberg, L.

    1984-01-01

    The methodology evaluation and recommendation are part of an effort to improve Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) to support new acquisition. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both the MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared to the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seem sound and has good near term utility to the Army. Recommendations are provided to firm up the problem areas revealed through the evaluation.

  18. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm, for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The optimal adequacy of the proposed methodology with respect to image quality metrics such as brightness preserving abilities, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM) and universal image quality metric has been experimentally validated. The performance analysis of the proposed Stochastic HKMDHE with existing histogram equalization methodologies like Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) has been given for comparative evaluation.
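For context, the Global Histogram Equalization (GHE) baseline that the proposed method is compared against maps pixel intensities through the normalized cumulative distribution function. A self-contained sketch on a tiny low-contrast "image" (values invented; this is the baseline, not the HKMDHE variant itself):

```python
def equalize(image, levels=256):
    """Global histogram equalization via the cumulative distribution function."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    span = n - cdf_min
    # Look-up table stretching the occupied CDF range to the full gray scale.
    lut = [round((c - cdf_min) / span * (levels - 1)) if span else 0 for c in cdf]
    return [[lut[p] for p in row] for row in image]

# A tiny low-contrast "image": values crowded into 100..103.
img = [[100, 100, 101, 101],
       [101, 102, 102, 102],
       [102, 103, 103, 103]]
out = equalize(img)
print(out)
```

GHE stretches the occupied intensity range to the full scale; methods such as CLAHE and the duo-histogram family then refine this to preserve brightness and limit noise amplification, which is what the multi-objective formulation above optimizes jointly.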

  19. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
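For readers unfamiliar with the underlying heuristic, a minimal classical harmony search is sketched below on a toy squared-misfit objective standing in for the transport-model calibration. Note the hedge: the paper's contribution is making HMCR/PAR/bandwidth nearly parameter-free, whereas this sketch fixes them at textbook values.

```python
import random

def harmony_search(f, bounds, iters=2000, hms=10, hmcr=0.9, par=0.3, bw=0.05, seed=1):
    """Classical harmony search minimizing f over box bounds."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                      # harmony memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:                   # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                                        # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                            # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy stand-in for the source-identification objective: squared misfit
# between candidate source parameters and the (here, known) true ones.
target = [2.0, -1.0]
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
x_best, s_best = harmony_search(f, [(-5, 5), (-5, 5)])
print([round(v, 2) for v in x_best], round(s_best, 4))
```

In the real simulation-optimization setting, `f` would run the contaminant transport model and return the misfit against monitored concentrations, so each evaluation is expensive; that is why parameter sensitivity matters and an almost-parameter-free variant is attractive.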

  20. Assessing the Impact of Continuous Evaluation Strategies: Tradeoff between Student Performance and Instructor Effort

    ERIC Educational Resources Information Center

    Poza-Lujan, Jose-Luis; Calafate, Carlos T.; Posadas-Yagüe, Juan-Luis; Cano, Juan-Carlos

    2016-01-01

    Current opinion on undergraduate studies has led to a reformulation of teaching methodologies to base them not just on learning, but also on skills and competencies. In this approach, the teaching/learning process should accomplish both knowledge assimilation and skill development. Previous works demonstrated that a strategy that uses continuous…

  1. Teaching Groups as Foci for Evaluating Performance in Cost-Effectiveness of GCE Advanced Level Provision: Some Practical Methodological Innovations.

    ERIC Educational Resources Information Center

    Fielding, Antony

    2002-01-01

    Analyzes subject teaching-group effectiveness in English and Welsh General Certification of Education (GCE) Advanced Level prior to a linking to resources; suggests cross-classified multilevel models with weighted random effects for disentangling student, group, and teacher effects; finds that teacher effects are considerable, but cannot find…

  2. A Comparison of Educational "Value-Added" Methodologies for Classifying Teacher Effectiveness: Value Tables vs. Covariate Regression

    ERIC Educational Resources Information Center

    Dwyer, Theodore J.

    2016-01-01

    There is a great deal of concern regarding teacher impacts on student achievement being used as a substantial portion of a teacher's performance evaluation. This study investigated the degree of concordance and discordance between mathematics teacher ranking using value tables and covariate regression, which have both been used as measures for…

  3. Reaction-to-fire testing and modeling for wood products

    Treesearch

    Mark A. Dietenberger; Robert H. White

    2001-01-01

    In this review we primarily discuss our use of the oxygen consumption calorimeter (ASTM E1354 for cone calorimeter and ISO9705 for room/corner tests) and fire growth modeling to evaluate treated wood products. With recent development towards performance-based building codes, new methodology requires engineering calculations of various fire growth scenarios. The initial...

  4. 75 FR 62913 - 30-Day Notice of Proposed Information Collection: Form DS-3053, Statement of Consent or Special...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-13

    ... properly perform our functions. Evaluate the accuracy of our estimate of the burden of the proposed... issuance of passports to U.S. citizens and nationals under the age of 16. The primary purpose of soliciting... family circumstances. Methodology: Passport Services collects information from U.S. citizens and non...

  5. Student Real-Time Visualization System in Classroom Using RFID Based on UTAUT Model

    ERIC Educational Resources Information Center

    Raja Yusof, Raja Jamilah; Qazi, Atika; Inayat, Irum

    2017-01-01

    Purpose: The purpose of this paper is to monitor in-class activities and the performance of the students. Design/methodology/approach: A pilot study was conducted to evaluate the proposed system using a questionnaire with 132 participants (teachers and non-teachers) in a presentation style to record the participant's perception about performance…

  6. 75 FR 65039 - Submission for Review: Performance Measurement Surveys, OMB Control No. 3206-NEW

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-21

    ... June 21, 2010 at 75 FR 35092 allowing for a 60-day public comment period. No comments were received for... comments. The Office of Management and Budget is particularly interested in comments that: 1. Evaluate... of the methodology and assumptions used; 3. Enhance the quality, utility, and clarity of the...

  7. Effect of Guided Collaboration on General and Special Educators' Perceptions of Collaboration and Student Achievement

    ERIC Educational Resources Information Center

    Laine, Sandra

    2013-01-01

    This study investigated the effects of a guided collaboration approach during professional learning community meetings (PLC's) on the perceptions of general and special educators as well as the effect on student performance as measured by benchmark evaluation. A mixed methodology approach was used to collect data through surveys, weekly…

  8. Response Time Analysis and Test of Protection System Instrument Channels for APR1400 and OPR1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chang Jae; Han, Seung; Yun, Jae Hee

    2015-07-01

    Safety limits are required to maintain the integrity of physical barriers designed to prevent the uncontrolled release of radioactive materials in nuclear power plants. The safety analysis establishes two critical constraints: an analytical limit in terms of a measured or calculated variable, and a specific time after the analytical limit is reached to begin protective action. In keeping with nuclear regulations and industry standards, satisfying these two requirements will ensure that the safety limit is not exceeded during the design basis event, either an anticipated operational occurrence or a postulated accident. Various studies on the setpoint determination methodology for safety-related instrumentation have been actively performed to ensure that the requirement of the analytical limit is satisfied. In particular, the protection setpoint methodology for the advanced power reactor 1400 (APR1400) and the optimized power reactor 1000 (OPR1000) has been recently developed to cover both the design basis event and the beyond design basis event. The developed setpoint methodology has also been quantitatively validated using specific computer programs and setpoint calculations. However, the safety of nuclear power plants cannot be fully guaranteed by satisfying the requirement of the analytical limit alone. In spite of the response time verification requirements of nuclear regulations and industry standards, it is hard to find studies on a systematically integrated methodology for response time evaluation. In the cases of APR1400 and OPR1000, the response time analysis for the plant protection system is partially included in the setpoint calculation, and the response time test is separately performed via a specific plant procedure. The test technique has the drawback that it is difficult to demonstrate the completeness of the timing test.
The analysis technique also has the drawback of producing extreme response times that are not actually possible. Thus, the establishment of a systematic response time evaluation methodology is needed to justify conformance to the response time requirement used in the safety analysis. This paper proposes a response time evaluation methodology for APR1400 and OPR1000 using a combined analysis and test technique to confirm that the plant protection system can meet the analytical response time assumed in the safety analysis. In addition, the results of the quantitative evaluation performed for APR1400 and OPR1000 are presented in this paper. The proposed response time analysis technique consists of defining the response time requirement, determining the critical signal path for the trip parameter, allocating an individual response time to each component on the signal path, and analyzing the total response time for the trip parameter; it demonstrates that the total analyzed response time does not exceed the response time requirement. The proposed response time test technique is composed of defining the response time requirement, determining the critical signal path for the trip parameter, determining the test method for each component on the signal path, and performing the response time test; it demonstrates that the total test result does not exceed the response time requirement. The total response time should be tested in a single test that covers from the sensor to the final actuation device on the instrument channel. When the total channel is not tested in a single test, separate tests on groups of components or single components covering the total instrument channel shall be combined to verify the total channel response. For APR1400 and OPR1000, the ramp test technique is used for the pressure and differential pressure transmitters, and the step function test technique is applied to the signal processing equipment and the final actuation device.
As a result, it can be demonstrated that the response time requirement is satisfied by the combined analysis and test technique. Therefore, the methodology proposed in this paper plays a crucial role in guaranteeing the safety of nuclear power plants by systematically satisfying one of the two critical requirements from the safety analysis. (authors)
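The allocation-and-sum step of the analysis technique is a simple budget check over the critical signal path. The component names and times below are hypothetical placeholders, not values from APR1400 or OPR1000 documentation:

```python
def check_response_time(path, requirement):
    """Sum per-component response times (seconds) along the critical signal
    path and compare the total against the safety-analysis requirement."""
    total = sum(t for _, t in path)
    return total, total <= requirement

# Hypothetical channel: sensor -> signal processing -> trip logic -> actuation.
path = [("pressure transmitter", 0.50),
        ("signal processing", 0.15),
        ("trip logic", 0.05),
        ("final actuation device", 0.30)]
total, ok = check_response_time(path, requirement=1.2)
print(round(total, 3), ok)
```

The combined technique then replaces analyzed allocations with measured times where tests exist (ramp tests for transmitters, step-function tests downstream) and re-runs the same comparison against the requirement.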

  9. Simplified procedures for correlation of experimentally measured and predicted thrust chamber performance

    NASA Technical Reports Server (NTRS)

    Powell, W. B.

    1973-01-01

    Thrust chamber performance is evaluated in terms of an analytical model incorporating all the loss processes that occur in a real rocket motor. The important loss processes in the real thrust chamber were identified, and a methodology and recommended procedure for predicting real thrust chamber vacuum specific impulse were developed. Simplified equations for the calculation of vacuum specific impulse are developed to relate the delivered performance (both vacuum specific impulse and characteristic velocity) to the ideal performance as degraded by the losses corresponding to a specified list of loss processes. These simplified equations enable the various performance loss components, and the corresponding efficiencies, to be quantified separately (except that interaction effects are arbitrarily assigned in the process). The loss and efficiency expressions presented can be used to evaluate experimentally measured thrust chamber performance, to direct development effort into the areas most likely to yield improvements in performance, and as a basis to predict performance of related thrust chamber configurations.
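The structure of such a loss accounting can be illustrated with a multiplicative-efficiency sketch. This is a common simplification, with invented efficiency values; the report's actual equations quantify each loss separately and assign interaction effects explicitly:

```python
def delivered_isp(ideal_isp, efficiencies):
    """Delivered vacuum specific impulse modeled as ideal performance
    degraded by a product of per-process loss efficiencies."""
    isp = ideal_isp
    for eta in efficiencies.values():
        isp *= eta
    return isp

# Hypothetical loss efficiencies for a specified list of loss processes.
losses = {"energy release": 0.98,
          "nozzle divergence": 0.99,
          "boundary layer": 0.985,
          "two-dimensional flow": 0.995}
print(round(delivered_isp(310.0, losses), 1))
```

Framing the losses this way makes clear why development effort goes to the smallest efficiency first: each factor bounds the recoverable specific impulse independently.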

  10. Reliability and Validity of the Turkish Version of the Job Performance Scale Instrument.

    PubMed

    Harmanci Seren, Arzu Kader; Tuna, Rujnan; Eskin Bacaksiz, Feride

    2018-02-01

    Objective measurement of the job performance of nursing staff using valid and reliable instruments is important in the evaluation of healthcare quality. A current, valid, and reliable instrument that specifically measures the performance of nurses is required for this purpose. The aim of this study was to determine the validity and reliability of the Turkish version of the Job Performance Instrument. This study used a methodological design and a sample of 240 nurses working at different units in four hospitals in Istanbul, Turkey. A descriptive data form, the Job Performance Scale, and the Employee Performance Scale were used to collect data. Data were analyzed using IBM SPSS Statistics Version 21.0 and LISREL Version 8.51. On the basis of the data analysis, the instrument was revised. Some items were deleted, and subscales were combined. The Turkish version of the Job Performance Instrument was determined to be valid and reliable to measure the performance of nurses. The instrument is suitable for evaluating current nursing roles.

  11. Tools and Techniques for Evaluating the Effects of Maintenance Resource Management (MRM) in Air Safety

    NASA Technical Reports Server (NTRS)

    Taylor, James C.

    2002-01-01

    This research project was designed as part of a larger effort to help Human Factors (HF) implementers, and others in the aviation maintenance community, understand, evaluate, and validate the impact of Maintenance Resource Management (MRM) training programs, and other MRM interventions; on participant attitudes, opinions, behaviors, and ultimately on enhanced safety performance. It includes research and development of evaluation methodology as well as examination of psychological constructs and correlates of maintainer performance. In particular, during 2001, three issues were addressed. First a prototype process for measuring performance was developed and used. Second an automated calculator was developed to aid the HF implementer user in analyzing and evaluating local survey data. These results include being automatically compared with the experience from all MRM programs studied since 1991. Third the core survey (the Maintenance Resource Management Technical Operations Questionnaire, or 'MRM/TOQ') was further developed and tested to include topics of added relevance to the industry.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Alberta; Mann, Margaret; Gelman, Rachel

    In evaluating next-generation materials and processes, the supply chain can have a large impact on the life cycle energy impacts. The Materials Flow through Industry (MFI) tool was developed for the Department of Energy's Advanced Manufacturing Office to evaluate the energy impacts of the U.S. supply chain. The tool allows users to perform process comparisons, material substitutions, and grid modifications, and to see the effects of implementing sector efficiency potentials (Masanet et al. 2009). This paper reviews the methodology of the tool and provides results for specific scenarios.

  13. Integrated payload and mission planning, phase 3. Volume 1: Integrated payload and mission planning process evaluation

    NASA Technical Reports Server (NTRS)

    Sapp, T. P.; Davin, D. E.

    1977-01-01

    The integrated payload and mission planning process for STS payloads was defined, and discrete tasks which evaluate performance and support initial implementation of this process were conducted. The scope of activity was limited to NASA and NASA-related payload missions only. The integrated payload and mission planning process was defined in detail, including all related interfaces and scheduling requirements. Related to the payload mission planning process, a methodology for assessing early Spacelab mission manager assignment schedules was defined.

  14. Work Group on American Indian Research and Program Evaluation Methodology, Symposium on Research and Evaluation Methodology: Lifespan Issues Related to American Indians/Alaska Natives with Disabilities (Washington, DC, April 26-27, 2002).

    ERIC Educational Resources Information Center

    Davis, Jamie D., Ed.; Erickson, Jill Shepard, Ed.; Johnson, Sharon R., Ed.; Marshall, Catherine A., Ed.; Running Wolf, Paulette, Ed.; Santiago, Rolando L., Ed.

    This first symposium of the Work Group on American Indian Research and Program Evaluation Methodology (AIRPEM) explored American Indian and Alaska Native cultural considerations in relation to "best practices" in research and program evaluation. These cultural considerations include the importance of tribal consultation on research…

  15. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method that combines uniform design, response surface methodology, and a genetic algorithm. In detail, uniform design is used to select the sampling points in the experimental domain, and system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the design constraints. The method has been applied to the optimization design of an axisymmetric diverging duct with three design variables: one qualitative and two quantitative. The method performs well in improving the duct's aerodynamic performance while reducing design time and computational cost, and it can also serve as a useful tool for engineering designers in wider fields of mechanical design.
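    The sample-fit-optimize pipeline described above can be sketched minimally in Python. Here `cfd_loss` is a hypothetical stand-in for the expensive CFD evaluation, evenly spaced points stand in for a true uniform design table, and the response surface is a one-dimensional quadratic for brevity; none of these specifics come from the paper.

    ```python
    import random

    # Stage 1: sample the design space (evenly spaced points as a stand-in
    # for a uniform design table) and evaluate the expensive objective.
    def cfd_loss(x):                 # hypothetical placeholder for a CFD run
        return (x - 0.37) ** 2 + 0.05

    xs = [i / 10 for i in range(11)]
    ys = [cfd_loss(x) for x in xs]

    # Stage 2: response-surface surrogate -- least-squares quadratic fit,
    # solving the 3x3 normal equations by Gaussian elimination.
    def fit_quadratic(xs, ys):
        s = [sum(x ** k for x in xs) for k in range(5)]            # power sums
        t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
        A = [[s[0], s[1], s[2], t[0]],
             [s[1], s[2], s[3], t[1]],
             [s[2], s[3], s[4], t[2]]]
        for i in range(3):                                         # eliminate
            for j in range(i + 1, 3):
                f = A[j][i] / A[i][i]
                A[j] = [a - f * b for a, b in zip(A[j], A[i])]
        c = [0.0] * 3
        for i in (2, 1, 0):                                        # back-substitute
            c[i] = (A[i][3] - sum(A[i][k] * c[k] for k in range(i + 1, 3))) / A[i][i]
        return c                                                   # [a, b, c]

    a, b, cq = fit_quadratic(xs, ys)
    surrogate = lambda x: a + b * x + cq * x * x

    # Stage 3: genetic algorithm run only on the cheap surrogate.
    random.seed(0)
    pop = [random.random() for _ in range(20)]
    for _ in range(40):
        pop.sort(key=surrogate)
        parents = pop[:10]                         # truncation selection
        pop = parents + [
            min(1.0, max(0.0,                      # blend crossover + mutation
                0.5 * (random.choice(parents) + random.choice(parents))
                + random.gauss(0, 0.02)))
            for _ in range(10)]
    best = min(pop, key=surrogate)                 # converges near x = 0.37
    ```

    The point of the split is cost: the expensive objective is called only at the 11 sample points, while the genetic algorithm's hundreds of evaluations all hit the cheap surrogate.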

  16. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs) contain many uncertain parameters that greatly affect model performance and model climate sensitivity. Traditional manual, empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps are introduced before the downhill simplex search: one determines parameter sensitivity and the other chooses the optimum initial values of the sensitive parameters, which reduces computational cost and improves tuning performance. Atmospheric GCM simulations show that the optimum parameter combination determined by this method improves the model's overall performance by 9%. The proposed methodology and software framework can easily be applied to other GCMs to speed up model development, especially the unavoidable comprehensive parameter tuning during the development stage.
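    The three steps can be illustrated with a toy Python sketch. The two parameter names and the scalar `model_error` metric below are illustrative assumptions standing in for real GCM parameters and a full evaluation run; the downhill simplex is a minimal Nelder-Mead, not the authors' code.

    ```python
    # Hypothetical skill metric (lower is better) standing in for a GCM run.
    def model_error(p):
        ent, rh = p
        return (ent - 1.5) ** 2 + 0.1 * (rh - 0.8) ** 2

    defaults = {"entrainment": 1.0, "rh_crit": 0.7}    # illustrative names
    names = list(defaults)

    # Step 1: one-at-a-time perturbation to rank parameter sensitivity.
    base = model_error(tuple(defaults.values()))
    sens = {}
    for i, name in enumerate(names):
        p = list(defaults.values())
        p[i] *= 1.1
        sens[name] = abs(model_error(tuple(p)) - base)
    ranked = sorted(names, key=sens.get, reverse=True)

    # Step 2: coarse scan of each sensitive parameter for a good start point.
    start = dict(defaults)
    for name in ranked:
        cands = [defaults[name] * f for f in (0.5, 0.75, 1.0, 1.5, 2.0)]
        start[name] = min(cands, key=lambda v: model_error(
            tuple(v if n == name else start[n] for n in names)))

    # Step 3: minimal downhill simplex (Nelder-Mead) from the chosen start.
    def downhill_simplex(f, x0, step=0.2, iters=80):
        n = len(x0)
        simplex = [list(x0)] + [
            [x + (step if i == j else 0.0) for j, x in enumerate(x0)]
            for i in range(n)]
        for _ in range(iters):
            simplex.sort(key=f)
            best, worst = simplex[0], simplex[-1]
            cen = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
            refl = [2 * c - w for c, w in zip(cen, worst)]      # reflect
            if f(refl) < f(best):
                expd = [3 * c - 2 * w for c, w in zip(cen, worst)]
                simplex[-1] = expd if f(expd) < f(refl) else refl
            elif f(refl) < f(worst):
                simplex[-1] = refl
            else:
                contr = [(c + w) / 2 for c, w in zip(cen, worst)]
                if f(contr) < f(worst):
                    simplex[-1] = contr
                else:                        # shrink toward the best vertex
                    simplex = [best] + [[(b + v) / 2 for b, v in zip(best, u)]
                                        for u in simplex[1:]]
        return min(simplex, key=f)

    opt = downhill_simplex(model_error, [start[n] for n in names])
    ```

    Steps 1 and 2 are what keep the simplex search affordable: the scan starts the simplex near a good region, so far fewer full model evaluations are spent inside step 3.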

  17. Performance-based, cost- and time-effective pcb analytical methodology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarado, J. S.

    1998-06-11

    Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, and reduction of cost and time per analysis. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs, and their use reduced costs for all sample preparation techniques. Time and cost were further reduced by combining the new sample preparation procedures with the power of fast gas chromatography: separation of Aroclor 1254 was achieved in less than 6 min using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and at low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.

  18. Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation

    DTIC Science & Technology

    2016-05-01

    ARL-TN-0756, May 2016. US Army Research Laboratory. Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation, by Clayton M Weiss, Oak Ridge Institute for Science and Education. Related work addresses identifying and mapping flaw size distributions on glass surfaces for predicting mechanical response (International Journal of Applied Glass ...).

  19. Evaluation of Urban After-School Programs: Effective Methodologies for a Diverse and Political Environment.

    ERIC Educational Resources Information Center

    Frank, Martina W.; Walker-Moffat, Wendy

    This study considered how 25 highly diverse after-school programs with combined funding of $5.6 million were evaluated over a 10-month period. The paper describes the evaluation methodologies used and determines which were most effective within a diverse and political context. The Bayview Fund for Youth Development (name assumed for…

  20. ARCHITECT: The architecture-based technology evaluation and capability tradeoff method

    NASA Astrophysics Data System (ADS)

    Griendling, Kelly A.

    The use of architectures for the design, development, and documentation of system-of-systems engineering has become a common practice in recent years. This practice became mandatory in the defense industry in 2004 when the Department of Defense Architecture Framework (DoDAF) Promulgation Memo mandated that all Department of Defense (DoD) architectures must be DoDAF compliant. Despite this mandate, there has been significant confusion and a lack of consistency in the creation and the use of the architecture products. Products are typically created as static documents used for communication and documentation purposes that are difficult to change and do not support engineering design activities and acquisition decision making. At the same time, acquisition guidance has been recently reformed to move from the bottom-up approach of the Requirements Generation System (RGS) to the top-down approach mandated by the Joint Capabilities Integration and Development System (JCIDS), which requires the use of DoDAF to support acquisition. Defense agencies have had difficulty adjusting to this new policy, and are struggling to determine how to meet new acquisition requirements. This research has developed the Architecture-based Technology Evaluation and Capability Tradeoff (ARCHITECT) Methodology to respond to these challenges and address concerns raised about the defense acquisition process, particularly the time required to implement parts of the process, the need to evaluate solutions across capability and mission areas, and the need to use a rigorous, traceable, repeatable method that utilizes modeling and simulation to better substantiate early-phase acquisition decisions.
The objective is to create a capability-based systems engineering methodology for the early phases of design and acquisition (specifically Pre-Milestone A activities) which improves agility in defense acquisition by (1) streamlining the development of key elements of JCIDS and DoDAF, (2) moving the creation of DoDAF products forward in the defense acquisition process, and (3) using DoDAF products for more than documentation by integrating them into the problem definition and analysis of alternatives phases and applying executable architecting. This research proposes and demonstrates the plausibility of a prescriptive methodology for developing executable DoDAF products which will explicitly support decision-making in the early phases of JCIDS. A set of criteria by which CBAs should be judged is proposed, and the methodology is developed with these criteria in mind. The methodology integrates existing tools and techniques for systems engineering and system of systems engineering with several new modeling and simulation tools and techniques developed as part of this research to fill gaps noted in prior CBAs. A suppression of enemy air defenses (SEAD) mission is used to demonstrate the application of ARCHITECT and to show the plausibility of the approach. For the SEAD study, metrics are derived and a gap analysis is performed. The study then identifies and quantitatively compares system and operational architecture alternatives for performing SEAD. A series of down-selections is performed to identify promising architectures, and these promising solutions are subject to further analysis where the impacts of force structure and network structure are examined. While the numerical results of the SEAD study are notional and could not be applied to an actual SEAD CBA, the example served to highlight many of the salient features of the methodology.
The SEAD study presented enabled pre-Milestone A tradeoffs to be performed quantitatively across a large number of architectural alternatives in a traceable and repeatable manner. The alternatives considered included variations on operations, systems, organizational responsibilities (through the assignment of systems to tasks), network (or collaboration) structure, interoperability level, and force structure. All of the information used in the study is preserved in the environment, which is dynamic and allows for on-the-fly analysis. The assumptions used were consistent, which was assured through the use of a single file documenting all inputs, which was shared across all models. Furthermore, a model was made of the ARCHITECT methodology itself, and was used to demonstrate that even if the steps took twice as long to perform as they did in the case of the SEAD example, the methodology still provides the ability to conduct CBA analyses in less time than prior CBAs to date. Overall, it is shown that the ARCHITECT methodology results in an improvement over current CBAs in the criteria developed here.
