Detection methods and performance criteria for genetically modified organisms.
Bertheau, Yves; Diolez, Annick; Kobilinsky, André; Magin, Kimberly
2002-01-01
Detection methods for genetically modified organisms (GMOs) are necessary for many applications, from seed purity assessment to compliance with food labeling requirements in several countries. Numerous analytical methods are currently used or under development to support these needs. The methods currently in use are bioassays and protein- and DNA-based detection protocols. To avoid discrepancies in results between such widely different methods and, for instance, the potential resulting legal actions, compatibility of the methods is urgently needed. Performance criteria allow methods to be evaluated against a common standard. The more common performance criteria for detection methods are precision, accuracy, sensitivity, and specificity, which together address other terms used to describe the performance of a method, such as applicability, selectivity, calibration, trueness, precision, recovery, operating range, limit of quantitation, limit of detection, and ruggedness. Performance criteria should provide objective tools to accept or reject specific methods, to validate them, to ensure compatibility between validated methods, and to reject, on a routine basis, data outside an acceptable range of variability. When selecting a method of detection, it is also important to consider its applicability, its field of applications, and its limitations, including factors such as its ability to detect the target analyte in a given matrix, the duration of the analyses, its cost effectiveness, and the sample sizes necessary for testing. Thus, the current GMO detection methods should be evaluated against a common set of performance criteria.
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a parametric mathematical model and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
An evaluation of performance criteria for US Environmental Protection Agency Compendium Method TO-17 for monitoring volatile organic compounds (VOCs) in air has been accomplished. The method is a solid adsorbent-based sampling and analytical procedure including performance crit...
Operator performance evaluation using multi criteria decision making methods
NASA Astrophysics Data System (ADS)
Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah
2014-06-01
Operator performance evaluation is a very important operation in the labor-intensive manufacturing industry because a company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods: Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that AHP and FAHP yielded the "outcome" criterion as the most important. The results of the operator performance evaluation showed that the same operator is ranked first by all six methods.
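Several of the methods listed above start from a pairwise comparison matrix whose principal eigenvector gives the criterion weights. The sketch below illustrates only that classic AHP step with a hypothetical 3x3 judgement matrix; it is not the study's data or its full FAHP/ELECTRE/PROMETHEE/TOPSIS/VIKOR pipeline.

```python
# Illustrative sketch only: classic AHP priority derivation from a pairwise
# comparison matrix using the principal-eigenvector method. The 3x3 matrix
# below is hypothetical and not taken from the study.
import numpy as np

def ahp_weights(pairwise):
    """Return criterion weights and the consistency ratio (CR)."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalised priorities
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.0)  # random index
    return w, ci / ri

# Hypothetical pairwise judgements for three criteria (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)   # weights sum to 1; CR < 0.1 indicates acceptable consistency
```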
NASA Astrophysics Data System (ADS)
Wu, Hsin-Hung; Tsai, Ya-Ning
2012-11-01
This study uses both the analytic hierarchy process (AHP) and the decision-making trial and evaluation laboratory (DEMATEL) method to evaluate the criteria in the auto spare parts industry in Taiwan. Traditionally, AHP does not consider indirect effects for each criterion and assumes that criteria are independent, without further addressing the interdependence between or among the criteria. Thus, the importance computed by AHP can be viewed as a short-term improvement opportunity. In contrast, the DEMATEL method not only evaluates the importance of criteria but also depicts the causal relations among them. By observing the causal diagrams, improvement based on cause-oriented criteria might raise performance effectively and efficiently from a long-term perspective. As a result, the major advantage of integrating the AHP and DEMATEL methods is that the decision maker can continuously improve suppliers' performance from both short-term and long-term viewpoints.
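As a hedged illustration of the DEMATEL step mentioned above, the sketch below builds the total-relation matrix and the prominence/relation indices used to draw causal diagrams from a hypothetical direct-influence matrix; it does not reproduce the study's criteria or expert scores.

```python
# Minimal DEMATEL sketch, assuming a hypothetical 4x4 direct-influence matrix
# scored 0-4 by experts (not data from the study). It computes the total
# relation matrix T = N (I - N)^-1 and the prominence/relation indices.
import numpy as np

D = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)       # hypothetical expert scores

s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s                                        # normalised direct-influence matrix
T = N @ np.linalg.inv(np.eye(len(D)) - N)        # total-relation matrix

R = T.sum(axis=1)                                # influence given by each criterion
C = T.sum(axis=0)                                # influence received by each criterion
prominence = R + C                               # overall importance of the criterion
relation = R - C                                 # >0: cause criterion, <0: effect criterion
print(prominence, relation)
```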
Using hybrid method to evaluate the green performance in uncertainty.
Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping
2011-04-01
Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the complex dependence among aspects and criteria and the linguistic vagueness of qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependent aspects and criteria into an intelligible structural model used by IPA. For the empirical case study, four dependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology suitable for creating a performance-related remuneration system in the healthcare sector that would meet requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking, and a posteriori evaluation has been proposed and discussed. The Priority Distribution Method is applied for unbiased performance criteria weighting. Data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed a method for healthcare-specific criteria selection consisting of 8 steps, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluation of the motivational system outcomes was proposed. The described methodology for calculating performance-related payment needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step would be approbation of the methodology in a healthcare facility.
ERIC Educational Resources Information Center
Shokoohi, Mostafa; Nedjat, Saharnaz; Golestan, Banafsheh; Soltani, Akbar; Majdzadeh, Reza
2011-01-01
Introduction: There are published criteria for identifying educational influentials (EIs). These criteria are based on studies that have been performed in developed countries. This study was performed to identify criteria and characteristics of EIs in Iran. Methods: The study was conducted on residents, interns, and clerks at a major educational…
Tsai, Sang-Bing; Chien, Min-Fang; Xue, Youzhi; Li, Lei; Jiang, Xiaodong; Chen, Quan; Zhou, Jie; Wang, Lei
2015-01-01
The method by which high-technology product manufacturers balance profits and environmental performance is of crucial concern for governments and enterprises. To examine the environmental performance of manufacturers, the present study applied a fuzzy-DEMATEL model to the PCB industry in Taiwan. Fuzzy theory was employed to examine the environmental performance criteria of manufacturers and to analyse fuzzy linguistic assessments. The fuzzy-DEMATEL model was then employed to assess the direction and level of interaction between environmental performance criteria. The core environmental performance criteria critical for enhancing the environmental performance of the PCB industry in Taiwan were identified and presented. The present study revealed that green design (a1), green material procurement (a2), and energy consumption (b3) constitute crucial reason criteria, the core criteria influencing other criteria, and the driving factors for resolving problems. PMID:26052710
Performance assessment of an irreversible nano Brayton cycle operating with Maxwell-Boltzmann gas
NASA Astrophysics Data System (ADS)
Açıkkalp, Emin; Caner, Necmettin
2015-05-01
In recent decades, nanotechnology has developed very rapidly, and the thermodynamics of nano-scale cycles should advance at a similar rate. In this paper, a nano-scale irreversible Brayton cycle working with helium is evaluated for different thermodynamic criteria: maximum work output, ecological function, ecological coefficient of performance, exergetic performance criterion, and energy efficiency. A thermodynamic analysis was performed for these criteria and the results are presented numerically. In addition, the criteria are compared with each other and the most convenient ones for optimum operating conditions are suggested.
An information theory criteria based blind method for enumerating active users in DS-CDMA system
NASA Astrophysics Data System (ADS)
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new blind algorithm for active user enumeration in asynchronous direct-sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Because of this difference, MDL is a consistent enumerator and performs better at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-adaptive method, based on subspace processing and a trained genetic algorithm, that combines the advantages of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
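For context, the sketch below shows the classic eigenvalue-based AIC/MDL enumerators in the Wax-Kailath style; the paper's own method instead applies the criteria to quantities derived from the log-likelihood function, which is not reproduced here. The eigenvalues below are synthetic.

```python
# Sketch of the classic eigenvalue-based AIC/MDL source enumerators, shown for
# context only; the paper replaces eigenvalues with log-likelihood quantities.
import numpy as np

def aic_mdl(eigvals, n_snapshots):
    """Return (k_aic, k_mdl): estimated number of sources from sample eigenvalues."""
    p = len(eigvals)
    lam = np.sort(eigvals)[::-1]
    aic, mdl = [], []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))                  # geometric mean of noise eigenvalues
        arith = np.mean(tail)                                # arithmetic mean of noise eigenvalues
        llh = n_snapshots * (p - k) * np.log(geo / arith)    # log-likelihood term (<= 0)
        aic.append(-2 * llh + 2 * k * (2 * p - k))
        mdl.append(-llh + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# Synthetic example: 3 strong "signal" eigenvalues plus near-equal noise eigenvalues.
rng = np.random.default_rng(0)
eig = np.concatenate([[12.0, 8.0, 5.0], 1.0 + 0.05 * rng.standard_normal(7)])
print(aic_mdl(eig, n_snapshots=200))   # both estimates should be close to 3
```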
New developments in transit noise and vibration criteria
NASA Astrophysics Data System (ADS)
Hanson, Carl E.
2004-05-01
Federal Transit Administration (FTA) noise and vibration impact criteria were developed in the early 1990s. The noise criteria are ambient-based, developed from the Schultz curve and fundamental research performed by the U.S. Environmental Protection Agency in the 1970s. The vibration criteria are single-value rms vibration velocity levels. After 10 years of experience applying the criteria in assessments of new transit projects throughout the United States, FTA is updating its methods. The approach to assessing new projects in existing high-noise environments will be clarified. A method for assessing noise impacts due to horn blowing at grade crossings will be provided. The vibration criteria will be expanded to include spectral information. This paper summarizes the background of the current criteria, discusses examples where existing methods are lacking, and describes the planned remedies to improve criteria and methods.
Fuzzy decision-making framework for treatment selection based on the combined QUALIFLEX-TODIM method
NASA Astrophysics Data System (ADS)
Ji, Pu; Zhang, Hong-yu; Wang, Jian-qiang
2017-10-01
Treatment selection is a multi-criteria decision-making problem of significant concern in the medical field. In this study, a fuzzy decision-making framework is established for treatment selection. The framework mitigates information loss by introducing single-valued trapezoidal neutrosophic numbers to denote evaluation information. Treatment selection involves multiple criteria that considerably outnumber the alternatives. In consideration of this characteristic, the framework utilises the idea of the qualitative flexible multiple criteria (QUALIFLEX) method. Furthermore, it considers the risk-averse behaviour of a decision maker by employing a concordance index based on the TODIM (an acronym in Portuguese for interactive and multi-criteria decision making) method. A sensitivity analysis is performed to illustrate the robustness of the framework. Finally, a comparative analysis is conducted to compare the framework with several extant methods. Results indicate the advantages of the framework and its better performance compared with the extant methods.
Assessing Equating Results on Different Equating Criteria
ERIC Educational Resources Information Center
Tong, Ye; Kolen, Michael
2005-01-01
The performance of three equating methods--the presmoothed equipercentile method, the item response theory (IRT) true score method, and the IRT observed score method--were examined based on three equating criteria: the same distributions property, the first-order equity property, and the second-order equity property. The magnitude of the…
Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko
2012-02-24
This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield water, since its composition often differs significantly from the average (in the concentration of components and/or the matrix). Fast and reliable method development therefore has to be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested by validation of its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study.
NASA Technical Reports Server (NTRS)
Linley, L. J.; Luper, A. B.; Dunn, J. H.
1982-01-01
The Bureau of Mines, U.S. Department of the Interior, is reviewing explosion protection methods for use in gassy coal mines. This performance criteria guideline is an evaluation of three explosion protection methods for machines electrically powered with voltages up to 15,000 volts ac. A sufficient amount of basic research has been accomplished to verify that the explosion-proof and pressurized enclosure methods can provide adequate explosion protection with the present state of the art up to 15,000 volts ac. The routine application of the potted enclosure as a stand-alone protection method requires further investigation or development in order to clarify performance criteria and verification and certification requirements. An extensive literature search, a series of high-voltage tests, and a design evaluation of the three explosion protection methods indicate that the explosion-proof, pressurized, and potted enclosures can all be used to enclose up to 15,000 volts ac.
10 CFR 963.13 - Preclosure suitability evaluation method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of the structures, systems, components, equipment, and operator actions intended to mitigate or... and the criteria in § 963.14. DOE will consider the performance of the system in terms of the criteria... protection standard. (b) The preclosure safety evaluation method, using preliminary engineering...
Performance Indicators for Accountability and Improvement.
ERIC Educational Resources Information Center
Banta, Trudy W.; Borden, Victor M. H.
1994-01-01
Five criteria for judging college or university performance indicators (PIs) used to guide strategic decision making are outlined. The criteria address: purpose; alignment of PIs throughout the organization or system; alignment of PIs across inputs, processes, and outcomes; capacity to accommodate a variety of evaluation methods; and utility in…
NASA Astrophysics Data System (ADS)
Rolita, Lisa; Surarso, Bayu; Gernowo, Rahmat
2018-02-01
In order to improve airport safety management system (SMS) performance, an evaluation system is required to address current shortcomings and maximize safety. This study proposes the integration of the DEMATEL and ANP methods in the decision-making process, analyzing the causal relations between the relevant criteria so that decisions are based on effective analysis. The DEMATEL method complements the ANP method in identifying the interdependencies between criteria. The input data consist of questionnaire data obtained online and stored in an online database. The questionnaire data are then processed using the DEMATEL and ANP methods to determine the relationships between criteria and the criteria that need to be evaluated. The case studies for this evaluation system were Adi Sutjipto International Airport, Yogyakarta (JOG); Ahmad Yani International Airport, Semarang (SRG); and Adi Sumarmo International Airport, Surakarta (SOC). The integration grades the SMS performance criterion weights in descending order as follows: safety and destination policy, safety risk management, healthcare, and safety awareness. Sturges' formula classified the results into nine grades: JOG and SRG airports were in grade 8, while SOC airport was in grade 7.
ERIC Educational Resources Information Center
Waller, Niels; Jones, Jeff
2011-01-01
We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n x 1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a…
Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher
2017-01-01
ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...
NASA Astrophysics Data System (ADS)
Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.
2008-10-01
Multi-period differences in technical and financial performance are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information, in particular by discriminating technological and management progress with the Malmquist index and the two dimensions of performance with PROMETHEE: the service to the community and the enterprises' performance, which are often in conflict.
Bondi, Mark W.; Edmonds, Emily C.; Jak, Amy J.; Clark, Lindsay R.; Delano-Wood, Lisa; McDonald, Carrie R.; Nation, Daniel A.; Libon, David J.; Au, Rhoda; Galasko, Douglas; Salmon, David P.
2014-01-01
We compared two methods of diagnosing mild cognitive impairment (MCI): conventional Petersen/Winblad criteria as operationalized by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an actuarial neuropsychological method put forward by Jak and Bondi designed to balance sensitivity and reliability. 1,150 ADNI participants were diagnosed at baseline as cognitively normal (CN) or MCI via ADNI criteria (MCI: n = 846; CN: n = 304) or Jak/Bondi criteria (MCI: n = 401; CN: n = 749), and the two MCI samples were submitted to cluster and discriminant function analyses. Resulting cluster groups were then compared and further examined for APOE allelic frequencies, cerebrospinal fluid (CSF) Alzheimer’s disease (AD) biomarker levels, and clinical outcomes. Results revealed that both criteria produced a mildly impaired Amnestic subtype and a more severely impaired Dysexecutive/Mixed subtype. The neuropsychological Jak/Bondi criteria uniquely yielded a third Impaired Language subtype, whereas conventional Petersen/Winblad ADNI criteria produced a third subtype comprising nearly one-third of the sample that performed within normal limits across the cognitive measures, suggesting this method’s susceptibility to false positive diagnoses. MCI participants diagnosed via neuropsychological criteria yielded dissociable cognitive phenotypes, significant CSF AD biomarker associations, more stable diagnoses, and identified greater percentages of participants who progressed to dementia than conventional MCI diagnostic criteria. Importantly, the actuarial neuropsychological method did not produce a subtype that performed within normal limits on the cognitive testing, unlike the conventional diagnostic method. Findings support the need for refinement of MCI diagnoses to incorporate more comprehensive neuropsychological methods, with resulting gains in empirical characterization of specific cognitive phenotypes, biomarker associations, stability of diagnoses, and prediction of progression. Refinement of MCI diagnostic methods may also yield gains in biomarker and clinical trial study findings because of improvements in sample compositions of ‘true positive’ cases and removal of ‘false positive’ cases. PMID:24844687
Final waste forms project: Performance criteria for phase I treatability studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliam, T.M.; Hutchins, D.A.; Chodak, P. III
1994-06-01
This document defines the product performance criteria to be used in Phase I of the Final Waste Forms Project. In Phase I, treatability studies will be performed to provide "proof-of-principle" data to establish the viability of stabilization/solidification (S/S) technologies. This information is required by March 1995. In Phase II, further treatability studies, some at the pilot scale, will be performed to provide sufficient data to allow treatment alternatives identified in Phase I to be more fully developed and evaluated, as well as to reduce performance uncertainties for those methods chosen to treat a specific waste. Three main factors influence the development and selection of an optimum waste form formulation and hence affect the selection of performance criteria. These factors are regulatory, process-specific, and site-specific waste form standards or requirements. Clearly, the optimum waste form formulation will require consideration of performance criteria constraints from each of the three categories. Phase I will focus only on the regulatory criteria. These criteria may be considered the minimum criteria for an acceptable waste form. In other words, an S/S technology is considered viable only if it meets applicable regulatory criteria. The criteria to be utilized in the Phase I treatability studies were primarily taken from Environmental Protection Agency regulations addressed in 40 CFR 260 through 265 and 268, and Nuclear Regulatory Commission regulations addressed in 10 CFR 61. Thus the majority of the identified criteria are independent of waste form matrix composition (i.e., applicable to cement, glass, organic binders, etc.).
Supplier Selection based on the Performance by using PROMETHEE Method
NASA Astrophysics Data System (ADS)
Sinaga, T. S.; Siregar, K.
2017-03-01
Generally, companies face the problem of identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of a company's suppliers has to be monitored to ensure their ability to fulfill the company's needs. This research explains how to assess suppliers in order to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria: four main criteria, further split into seven sub-criteria, namely compliance with accuracy, consistency, on-time delivery, right order quantity, flexibility and negotiation, timely order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and to select the best supplier, as shown by the degree of preference in the pairwise comparison of alternatives.
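The sketch below illustrates the PROMETHEE II outranking calculation referred to above with a simple usual (0/1) preference function and hypothetical supplier scores; the study's actual sub-criteria, weights, and preference functions are not reproduced here.

```python
# Illustrative PROMETHEE II sketch with a "usual" preference function and
# hypothetical data (not the study's suppliers, criteria, or weights).
import numpy as np

scores = np.array([[8.0, 7.0, 9.0],     # supplier A on 3 criteria (higher = better)
                   [6.0, 9.0, 7.0],     # supplier B
                   [7.0, 6.0, 8.0]])    # supplier C
weights = np.array([0.5, 0.3, 0.2])     # hypothetical criterion weights

m = len(scores)
pi = np.zeros((m, m))                   # aggregated preference of i over j
for i in range(m):
    for j in range(m):
        if i != j:
            pref = (scores[i] > scores[j]).astype(float)   # usual criterion: 0/1 preference
            pi[i, j] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (m - 1)     # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (m - 1)    # negative (entering) flow
phi_net = phi_plus - phi_minus          # PROMETHEE II complete ranking
print(np.argsort(-phi_net))             # suppliers ranked best-first
```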
NASA Astrophysics Data System (ADS)
Mulyanto, A.; Amalia, T. H.; Novian, D.; Kaluku, M. R. A.
2017-03-01
Performance assessment of suppliers by the supermarket manager is relatively difficult to conduct and involves subjectivity, because there is no measurable and objective performance indicator. This study aims to assist the decision-making process and to provide alternative solutions for assessing the performance of each supplier, so that service to customers improves as well. The ANP method is used to find the weight of each sub-criterion, which is then used to measure supplier performance. The sub-criteria weights derived from the ANP method are used in the TOPSIS method to measure and rank the performance of each supplier. Performance measurement using ANP and TOPSIS yields a highest supplier value of 0.71666 and a lowest value of 0.24825. The results of this study show that the ANP and TOPSIS methods can be used to measure supplier performance and can therefore assist in selecting suppliers, improving service to the mart's consumers.
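A minimal sketch of the TOPSIS closeness coefficient used in the ranking step described above; the decision matrix and ANP-derived weights below are hypothetical placeholders, not the study's data.

```python
# Minimal TOPSIS sketch computing the closeness coefficient (values such as
# 0.71666 in the study). Decision matrix and weights are hypothetical.
import numpy as np

X = np.array([[7.0, 9.0, 8.0],          # supplier 1 on 3 benefit sub-criteria
              [6.0, 7.0, 9.0],          # supplier 2
              [8.0, 6.0, 7.0]])         # supplier 3
w = np.array([0.5, 0.3, 0.2])           # sub-criteria weights (here assumed, e.g. from ANP)

R = X / np.linalg.norm(X, axis=0)       # vector-normalised decision matrix
V = R * w                               # weighted normalised matrix
ideal = V.max(axis=0)                   # positive ideal solution (benefit criteria)
anti = V.min(axis=0)                    # negative ideal solution
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)     # higher = better supplier
print(closeness)
```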
NASA Astrophysics Data System (ADS)
Karlitasari, L.; Suhartini, D.; Benny
2017-01-01
The process of determining employee remuneration at PT Sepatu Mas Idaman currently still relies on a Microsoft Excel-based spreadsheet in which the criterion values must be calculated for every employee. This can introduce doubt during the assessment process and therefore makes the process take much longer. Employee remuneration is determined by an assessment team based on a number of predetermined criteria: ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method can support decision making for such a case, and the alternative whose calculation generates the greatest value is chosen as the best. In addition to SAW, the CPI method, a decision-making calculation based on a performance index, was also applied; the SAW method was 89-93% faster than the CPI method. It is therefore expected that this application can serve as evaluation material for the training and development needed to make employee performance more optimal.
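A hedged sketch of the SAW calculation described above, using hypothetical employee scores and weights for the eight criteria; the company's real scoring scales and weights are not given in the abstract.

```python
# Sketch of Simple Additive Weighting (SAW): normalise scores, apply weights,
# pick the largest weighted sum. All numbers below are hypothetical.
import numpy as np

criteria = ["ability", "human_relations", "responsibility", "discipline",
            "creativity", "work", "target_achievement", "absence"]
weights = np.array([0.20, 0.10, 0.15, 0.15, 0.10, 0.10, 0.15, 0.05])  # hypothetical weights

# Rows = employees, columns = criterion scores (all treated as benefit criteria here;
# a cost criterion such as absence count would instead be normalised as min/x).
X = np.array([[80, 70, 90, 85, 60, 75, 88, 95],
              [75, 85, 70, 90, 80, 70, 80, 90],
              [90, 60, 85, 70, 75, 85, 70, 85]], dtype=float)

R = X / X.max(axis=0)          # benefit-type normalisation
saw_score = R @ weights        # weighted sum; the largest value is the best alternative
print(saw_score, saw_score.argmax())
```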
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
1991-06-01
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Markó, Lajos; Molnár, Gergo Attila; Wagner, Zoltán; Koszegi, Tamás; Matus, Zoltán; Mohás, Márton; Kuzma, Mónika; Szijártó, István András; Wittmann, István
2008-01-13
Hypertension, like type 2 diabetes mellitus, is a major factor in population mortality. Both diseases damage the endothelium, an early sign of which is microalbuminuria, which can be screened by dipstick and diagnosed using immuno-based and high performance liquid chromatography methods. Using high performance liquid chromatography, non-immunoreactive albumin can be detected as well. The authors aimed to examine albuminuria by high performance liquid chromatography in immunonephelometrically negative patients, in diabetic-hypertensive and non-diabetic hypertensive populations. The authors also compared the present criteria for microalbuminuria (albumin-creatinine ratio: male, > or =2.5 mg/mmol; female, > or =3.5 mg/mmol) with the new criteria of the Heart Outcomes Prevention Evaluation study (patients without diabetes: immunological method, > or =0.7 mg/mmol; high performance liquid chromatography, > or =3.1 mg/mmol; individuals with diabetes: immunological method, > or =1.4 mg/mmol; high performance liquid chromatography, > or =5.2 mg/mmol). Fresh urine samples from 469 patients who were microalbuminuria negative by dipstick were examined by immunonephelometry. Patients who were also microalbuminuria negative by immunonephelometry were further analyzed by high performance liquid chromatography using the Accumin Kit, based on size-exclusion chromatography. Albuminuria values measured by high performance liquid chromatography were three times higher than those measured by immunonephelometry. The intraindividual coefficient of variation did not differ between the two methods (37 +/- 31% vs. 40 +/- 31%, p = 0.869; immunonephelometry vs. high performance liquid chromatography; mean +/- standard deviation). Using the present criteria for microalbuminuria, 43% of immunonephelometrically negative patients proved to be microalbuminuric by high performance liquid chromatography. Using the new criteria of the Heart Outcomes Prevention Evaluation study, the rate of microalbuminuria positivity among the immunonephelometrically negative patients decreased to 14.5% by high performance liquid chromatography; the decrease in the number of microalbuminuria-positive cases by high performance liquid chromatography was observed mainly in the diabetic and hypertensive group (49% vs. 7.5%), with a slighter decrease in the non-diabetic hypertensive group (37% vs. 26.5%). Applying the traditional criteria, the strongest predictor in the logistic regression analysis was male gender. In 28% of patients who were microalbuminuria negative by immunonephelometry, the diagnosis of microalbuminuria could be established using high performance liquid chromatography; thus, in almost one-third of such patients the diagnosis of microalbuminuria can be established by high performance liquid chromatography, for which three consecutive urine examinations are still needed. The new criteria determined by the Heart Outcomes Prevention Evaluation study can be used neither for diabetic and hypertensive patients nor for non-diabetic hypertensive patients. Gender, as the most important predictor of microalbuminuria, cannot be ignored.
Kaneko, Hiromasa; Funatsu, Kimito
2013-09-23
We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
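A hedged sketch of the midpoint-based criteria described above: midpoints between each sample and its k nearest neighbours are generated, the model is evaluated there, and the predictions are compared with the averaged responses of the two parent samples. The details (in particular how the reference response at a midpoint is defined) follow one plausible reading of the abstract, and the model, data, and library calls are illustrative only.

```python
# Illustrative k-nearest-neighbour midpoint criteria (r2 and RMSE at midpoints),
# under the assumptions stated in the lead-in. Data and model are synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def midpoint_criteria(model, X, y, k=3):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)                    # idx[:, 0] is the point itself
    X_mid = ((X[:, None, :] + X[idx[:, 1:], :]) / 2).reshape(-1, X.shape[1])
    y_ref = ((y[:, None] + y[idx[:, 1:]]) / 2).ravel()   # assumed reference: averaged responses
    y_hat = model.predict(X_mid)
    rmse = float(np.sqrt(np.mean((y_ref - y_hat) ** 2)))
    r2 = float(1 - np.sum((y_ref - y_hat) ** 2) / np.sum((y_ref - y_ref.mean()) ** 2))
    return r2, rmse

# Toy usage with a nonlinear regressor on synthetic data.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)
model = SVR(C=10.0, gamma=0.5).fit(X, y)
print(midpoint_criteria(model, X, y, k=3))
```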
Development of a Flammability Test Method for Aircraft Blankets
DOT National Transportation Integrated Search
1996-03-01
Flammability testing of aircraft blankets was conducted in order to develop a fire performance test method and performance criteria for blankets supplied to commercial aircraft operators. Aircraft blankets were subjected to vertical Bunsen burner tes...
Assessment of Communications-related Admissions Criteria in a Three-year Pharmacy Program
Tejada, Frederick R.; Lang, Lynn A.; Purnell, Miriam; Acedera, Lisa; Ngonga, Ferdinand
2015-01-01
Objective. To determine if there is a correlation between TOEFL and other admissions criteria that assess communications skills (ie, PCAT variables: verbal, reading, essay, and composite), interview, and observational scores and to evaluate TOEFL and these admissions criteria as predictors of academic performance. Methods. Statistical analyses included two sample t tests, multiple regression and Pearson’s correlations for parametric variables, and Mann-Whitney U for nonparametric variables, which were conducted on the retrospective data of 162 students, 57 of whom were foreign-born. Results. The multiple regression model of the other admissions criteria on TOEFL was significant. There was no significant correlation between TOEFL scores and academic performance. However, significant correlations were found between the other admissions criteria and academic performance. Conclusion. Since TOEFL is not a significant predictor of either communication skills or academic success of foreign-born PharmD students in the program, it may be eliminated as an admissions criterion. PMID:26430273
ERIC Educational Resources Information Center
Plough, India C.; Briggs, Sarah L.; Van Bonn, Sarah
2010-01-01
The study reported here examined the evaluation criteria used to assess the proficiency and effectiveness of the language produced in an oral performance test of English conducted in an American university context. Empirical methods were used to analyze qualitatively and quantitatively transcriptions of the Oral English Tests (OET) of 44…
NASA Astrophysics Data System (ADS)
Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis
2015-07-01
This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to a primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also computed in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-wise efficient structural analysis and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results. It is able to satisfy multiple damage criteria with different thickness distributions and uses a smaller FEM.
NASA Astrophysics Data System (ADS)
Nath, Surajit; Sarkar, Bijan
2017-08-01
Advanced Manufacturing Technologies (AMTs) offer opportunities for manufacturing organizations to excel in their competitiveness and, in turn, their manufacturing effectiveness. Proper selection and evaluation of AMTs is a highly significant task in today's modern world, but it involves considerable uncertainty and vagueness because many conflicting criteria must be dealt with, so evaluators cannot provide crisp data for the criteria. Fuzzy Multi-criteria Decision Making (MCDM) methods help greatly in dealing with this problem. This paper focuses on the application of two promising fuzzy MCDM methods, COPRAS-G and EVAMIX, and a comparative study between them on some rarely mentioned criteria. Each of the two methods is a powerful evaluation tool with its own strengths. Although the two methods perform at almost the same level, each takes a quite distinct approach. This distinctiveness is revealed through a numerical example of AMT selection.
ERIC Educational Resources Information Center
Connelly, Edward A.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…
Evaluation and construction of diagnostic criteria for inclusion body myositis
Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee
2014-01-01
Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
A new web-based framework development for fuzzy multi-criteria group decision-making.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
The fuzzy multi-criteria group decision making (FMCGDM) process is usually used when a group of decision makers faces imprecise data or linguistic variables in solving problems. However, this process encompasses many methods that require time-consuming calculations, depending on the number of criteria, alternatives, and decision makers, in order to reach the optimal solution. In this study, a web-based FMCGDM framework that offers decision makers a fast and reliable response service is proposed. The proposed framework includes commonly used tools for multi-criteria decision-making problems such as the fuzzy Delphi, fuzzy AHP, and fuzzy TOPSIS methods. The integration of these methods makes it possible to take advantage of each method's strengths and to compensate for its weaknesses. Finally, a case study of landfill site selection in Morocco is performed to demonstrate how this framework can facilitate the decision-making process. The results demonstrate that the proposed framework can successfully accomplish the goal of this study.
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat; Abdullah, Siti Rohana Goh
2014-07-01
Many averaging methods are available to aggregate a set of numbers into a single number. However, these methods do not consider the interdependencies between the criteria underlying the numbers. This paper highlights the Choquet Integral method as an alternative aggregation method in which estimates of the interdependencies between the criteria are incorporated into the aggregation process. The interdependency values can be estimated by using the lambda fuzzy measure method. By considering the interdependencies or interactions between the criteria, the resulting aggregated values are more meaningful than those obtained by ordinary averaging methods. The application of the Choquet Integral is illustrated in a case study of finding the overall academic achievement of year six pupils in a selected primary school in a northern state of Malaysia.
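The sketch below illustrates a discrete Choquet integral with a lambda (Sugeno) fuzzy measure of the kind referenced above. The criterion densities and pupil marks are hypothetical, and the lambda value is solved numerically from the standard constraint on the densities.

```python
# Discrete Choquet integral with a lambda-fuzzy (Sugeno) measure; hypothetical data.
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda parameter."""
    f = lambda lam: np.prod(1 + lam * densities) - (1 + lam)
    if abs(densities.sum() - 1) < 1e-9:
        return 0.0                                  # densities already sum to 1 (additive case)
    if densities.sum() > 1:
        return brentq(f, -1 + 1e-9, -1e-9)          # lambda in (-1, 0)
    return brentq(f, 1e-9, 1e6)                     # lambda in (0, inf)

def choquet(scores, densities):
    """Choquet integral of criterion scores w.r.t. the lambda-fuzzy measure."""
    lam = solve_lambda(densities)
    order = np.argsort(scores)[::-1]                # criteria sorted by decreasing score
    g_set, total = 0.0, 0.0
    for i in order:
        g_prev = g_set
        g_set = g_set + densities[i] + lam * g_set * densities[i]   # measure of the growing set
        total += scores[i] * (g_set - g_prev)
    return total

# Hypothetical example: three subject scores and their criterion densities.
scores = np.array([0.80, 0.65, 0.90])
densities = np.array([0.4, 0.3, 0.2])               # individual criterion importances
print(choquet(scores, densities))                   # aggregated achievement in [0, 1]
```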
Evaluating supplier quality performance using fuzzy analytical hierarchy process
NASA Astrophysics Data System (ADS)
Ahmad, Nazihah; Kasim, Maznah Mat; Rajoo, Shanmugam Sundram Kalimuthu
2014-12-01
Evaluating supplier quality performance is vital in ensuring continuous supply chain improvement, reducing operational costs and risks, and meeting customer expectations. This paper illustrates an application of the Fuzzy Analytical Hierarchy Process to prioritize the evaluation criteria in the context of automotive manufacturing in Malaysia. Five main criteria were identified: quality, cost, delivery, customer service, and technology support. These criteria were arranged into a hierarchical structure and evaluated by an expert. The relative importance of each criterion was determined using linguistic variables represented as triangular fuzzy numbers. The Center of Gravity defuzzification method was used to convert the fuzzy evaluations into their corresponding crisp values. Such fuzzy evaluation can be used as a systematic tool to overcome the uncertainty in evaluating suppliers' performance, which is usually associated with subjective human judgments.
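A compact fuzzy-AHP sketch in the spirit of the study: triangular fuzzy pairwise judgements, fuzzy geometric-mean weights (Buckley-style), and centre-of-gravity defuzzification. The 3x3 judgement matrix is hypothetical, not the study's five-criteria matrix, and the exact weighting variant used by the authors may differ.

```python
# Fuzzy AHP sketch: triangular fuzzy numbers (l, m, u), fuzzy geometric mean,
# centre-of-gravity defuzzification. All judgements below are hypothetical.
import numpy as np

# Each entry is a triangular fuzzy judgement (l, m, u) on a Saaty-like scale.
F = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])

geo = np.prod(F, axis=1) ** (1.0 / F.shape[0])   # fuzzy geometric mean per row: (l, m, u)
total = geo.sum(axis=0)                          # fuzzy sum over rows
fuzzy_w = geo / total[::-1]                      # fuzzy weight: (l/sum_u, m/sum_m, u/sum_l)
crisp_w = fuzzy_w.mean(axis=1)                   # centre of gravity of a triangle = (l+m+u)/3
crisp_w /= crisp_w.sum()                         # normalised crisp weights
print(crisp_w)
```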
Consistency of the Performance and Nonperformance Methods in Gifted Identification
ERIC Educational Resources Information Center
Acar, Selcuk; Sen, Sedat; Cayirdag, Nur
2016-01-01
Current approaches to gifted identification suggest collecting multiple sources of evidence. Some gifted identification guidelines allow for the interchangeable use of "performance" and "nonperformance" identification methods. This multiple criteria approach lacks a strong overlap between the assessment tools; however,…
Chadwick, R G; McCabe, J F; Walls, A W; Mitchell, H L; Storer, R
1991-02-01
This paper describes monitoring the wear of restorations borne by partial dentures over a 12 months period using a novel photogrammetric technique and modified United States Public Health Service (USPHS) criteria. The performance of Class II restorations of Dispersalloy was compared with that of similar restorations of either KetacFil or Occlusin. The photogrammetric technique highlighted differences in performance not detected by the modified USPHS criteria. It is concluded that the photogrammetric technique should prove valuable in the in vivo assessment of the performance of restorative materials but that further refinement of the method is required particularly with regard to the orientation of replicas for sequential measurements.
Study of advanced techniques for determining the long-term performance of components
NASA Technical Reports Server (NTRS)
1972-01-01
A study was conducted of techniques capable of determining the performance and reliability of components for spacecraft liquid propulsion applications on long-term missions. The study utilized two major approaches: improvement of existing technology and the evolution of new technology. The criteria established and methods evolved are applicable to valve components. Primary emphasis was placed on the oxygen difluoride and diborane propellant combination. The investigation included analysis, fabrication, and tests of experimental equipment to provide data and performance criteria.
A Comparison of Two Scoring Methods for an Automated Speech Scoring System
ERIC Educational Resources Information Center
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David
2012-01-01
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Assessment of active methods for removal of LEO debris
NASA Astrophysics Data System (ADS)
Hakima, Houman; Emami, M. Reza
2018-03-01
This paper investigates the applicability of five active methods for removal of large low Earth orbit debris. The removal methods, namely net, laser, electrodynamic tether, ion beam shepherd, and robotic arm, are selected based on a set of high-level space mission constraints. Mission level criteria are then utilized to assess the performance of each redirection method in light of the results obtained from a Monte Carlo simulation. The simulation provides an insight into the removal time, performance robustness, and propellant mass criteria for the targeted debris range. The remaining attributes are quantified based on the models provided in the literature, which take into account several important parameters pertaining to each removal method. The means of assigning attributes to each assessment criterion is discussed in detail. A systematic comparison is performed using two different assessment schemes: Analytical Hierarchy Process and utility-based approach. A third assessment technique, namely the potential-loss analysis, is utilized to highlight the effect of risks in each removal methods.
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies on fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published papers/articles. Other sources were checked to identify non-published literature. STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND DIAGNOSTIC METHODS: The eligibility criteria were studies that: (1) assessed the accuracy of fluorescence-based methods in detecting caries lesions on occlusal, approximal or smooth surfaces, in primary or permanent human teeth, in the laboratory or clinical setting; (2) used a reference standard; and (3) reported sufficient data on the sample size and the accuracy of the methods. A diagnostic 2×2 table was extracted from the included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (diagnostic odds ratio and summary receiver-operating characteristic curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five studies met the inclusion criteria from the 434 articles initially identified. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias. Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
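For reference, the sketch below computes the per-study quantities pooled in such a meta-analysis: sensitivity, specificity, and the diagnostic odds ratio from a 2×2 table. The counts are hypothetical, and real pooling also weights studies and models heterogeneity (e.g. via a random-effects or SROC model), which is omitted here.

```python
# Per-study diagnostic accuracy metrics from a 2x2 table; hypothetical counts.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                               # sensitivity
    spec = tn / (tn + fp)                               # specificity
    dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")  # diagnostic odds ratio
    return sens, spec, dor

print(diagnostic_metrics(tp=45, fp=8, fn=10, tn=120))
```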
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to the criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
NASA Astrophysics Data System (ADS)
Ramli, Rohaini; Kasim, Maznah Mat; Ramli, Razamin; Kayat, Kalsom; Razak, Rafidah Abd
2014-12-01
The Ministry of Tourism and Culture Malaysia has long introduced homestay programs across the country to enhance the quality of life of people, especially those living in rural areas. This type of program is classified as community-based tourism (CBT), as it is expected to improve livelihoods economically through cultural and community-associated activities. It is the aspiration of the ministry that the income imbalance between people in rural and urban areas be reduced, thus contributing towards creating more developed states of Malaysia. Since the 1970s, 154 homestay programs have been registered with the ministry. However, the performance and sustainability of the programs are still not satisfactory; only a few homestay programs perform well and are able to sustain themselves. Thus, the aim of this paper is to identify relevant criteria contributing to the sustainability of a homestay program. The criteria are evaluated for their levels of importance via a modified pairwise method and analyzed for other potentials. The findings will help homestay operators to focus on the necessary criteria and thus perform effectively as a CBT business initiative.
Gis-Based Site Selection for Underground Natural Resources Using Fuzzy Ahp-Owa
NASA Astrophysics Data System (ADS)
Sabzevari, A. R.; Delavar, M. R.
2017-09-01
Fuel consumption has increased significantly due to population growth. One solution to this problem is the underground storage of natural gas, and the first step toward this goal is to select suitable places for storage. In this study, site selection for underground natural gas reservoirs has been performed using multi-criteria decision making in a GIS environment. The Ordered Weighted Average (OWA) operator is a multi-criteria decision-making method for ranking the criteria and accounting for uncertainty in the interaction among them. In this paper, Fuzzy AHP_OWA (FAHP_OWA) is used to determine optimal sites for underground natural gas reservoirs. Fuzzy AHP_OWA takes the decision maker's risk taking and risk aversion into account during the decision-making process. Gas consumption rate, temperature, distance from the main transportation network, distance from gas production centers, population density, and distance from gas distribution networks are the criteria used in this research. Results show that the northeast and west of Iran and the areas around Tehran (Tehran and Alborz Provinces) are more attractive for constructing a natural gas reservoir. The performance of the method was also evaluated, using the locations of existing natural gas reservoirs in the country and the site selection maps for each of the quantifiers. It is verified that the method used in this study is capable of modeling the different decision-making strategies used by the decision maker, with about 88 percent agreement between the modeling and test data.
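A minimal sketch of the OWA operator mentioned above: criterion values are sorted in decreasing order and combined with order weights that encode the decision maker's optimism (risk taking) or pessimism (risk aversion). The values and order weights below are hypothetical and not part of the study's fuzzy AHP_OWA model.

```python
# Ordered Weighted Average (OWA) aggregation; hypothetical scores and weights.
import numpy as np

def owa(values, order_weights):
    """Aggregate criterion values with order weights applied to sorted values."""
    return float(np.dot(np.sort(values)[::-1], order_weights))

suitability = np.array([0.7, 0.4, 0.9, 0.6])       # criterion scores for one candidate cell

print(owa(suitability, [1.0, 0.0, 0.0, 0.0]))      # "at least one" (optimistic, OR-like) -> max
print(owa(suitability, [0.0, 0.0, 0.0, 1.0]))      # "all" (pessimistic, AND-like) -> min
print(owa(suitability, [0.25, 0.25, 0.25, 0.25]))  # neutral quantifier -> arithmetic mean
```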
Evaluation and selection of 3PL provider using fuzzy AHP and grey TOPSIS in group decision making
NASA Astrophysics Data System (ADS)
Garside, Annisa Kesy; Saputro, Thomy Eko
2017-11-01
Selection of a 3PL provider is a multi-criteria decision-making problem in which the decision maker has to select among several 3PL provider alternatives based on several evaluation criteria. A decision maker will have difficulty expressing judgments in exact numerical values because information is often incomplete and the decision environment is uncertain. This paper presents an integrated fuzzy AHP and grey TOPSIS method for the evaluation and selection of a 3PL provider. Fuzzy AHP is used to determine the importance weights of the evaluation criteria. For the final selection, grey TOPSIS is used to evaluate the alternatives and obtain the overall performance, measured as a closeness coefficient. The method is applied to the selection of a 3PL provider at PT. X. Five criteria and twelve sub-criteria were determined, and the best alternative among four 3PL providers was then selected by the proposed method.
Vandenabeele-Trambouze, O; Claeys-Bruno, M; Dobrijevic, M; Rodier, C; Borruat, G; Commeyras, A; Garrelly, L
2005-02-01
The need for criteria to compare different analytical methods for measuring extraterrestrial organic matter at ultra-trace levels in relatively small and unique samples (e.g., fragments of meteorites, micrometeorites, planetary samples) is discussed. We emphasize the need to standardize the description of future analyses, and take the first step toward a proposed international laboratory network for performance testing.
Criteria for Evaluating the Performance of Compilers
1974-10-01
cannot be made to fit, then an auxiliary mechanism outside the parser might be used. Finally, changing the choice of parsing technique to a...was not useful in providing a basis for compiler evaluation. The study of the first question established criteria and methods for assigning four...program. The study of the second question established criteria for defining a "compiler Gibson mix", and established methods for using this "mix" to
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
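For context on what the parametric approach above improves upon, the sketch below implements the classic eigenvalue-based MDL criterion of Wax and Kailath (1985), which the paper uses as its nonparametric baseline. The array geometry, source scenario and noise level are invented for illustration.

```python
import numpy as np

def mdl_num_signals(R, n_snapshots):
    """Classic eigenvalue-based MDL estimate of the number of signals
    (Wax & Kailath, 1985) from a sample covariance matrix R."""
    K = R.shape[0]
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]              # descending eigenvalues
    mdl = np.empty(K)
    for k in range(K):
        tail = eig[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geometric / arithmetic mean
        mdl[k] = (-n_snapshots * (K - k) * np.log(ratio)
                  + 0.5 * k * (2 * K - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Hypothetical example: 8-element half-wavelength ULA, 2 narrow-band sources.
rng = np.random.default_rng(0)
K, N, doas = 8, 200, np.deg2rad([10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(K), np.sin(doas)))    # steering matrix
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = A @ S + noise
R = X @ X.conj().T / N

print("estimated number of signals:", mdl_num_signals(R, N))
```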
Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel
2014-05-01
We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed device performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels determined simultaneously using GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of 27 devices tested met the overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.
Comparison of pre-processing methods for multiplex bead-based immunoassays.
Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter
2016-08-11
High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedure for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization and Box-Cox followed by rsn.
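To make the two generic pre-processing building blocks concrete, the sketch below applies an asinh (variance-stabilizing) transformation followed by quantile normalization to simulated bead-array intensities. Note that in the study itself asinh was paired with loess normalization and quantile normalization followed weighted Box-Cox, so this particular pairing, like the simulated data, is illustrative only.

```python
import numpy as np

def quantile_normalize(M):
    """Force all samples (columns) to share the same intensity distribution:
    each column's ranked values are replaced by the row-wise mean of the sorted columns."""
    order = np.argsort(M, axis=0)
    ranks = np.argsort(order, axis=0)
    mean_of_sorted = np.sort(M, axis=0).mean(axis=1)
    return mean_of_sorted[ranks]

# Simulated MFI-like data: 384 analytes x 42 samples, with sample-specific scaling.
rng = np.random.default_rng(1)
true_signal = rng.lognormal(mean=5, sigma=1, size=(384, 1))
sample_effect = rng.uniform(0.5, 2.0, size=(1, 42))
raw = true_signal * sample_effect * rng.lognormal(0, 0.2, size=(384, 42))

transformed = np.arcsinh(raw)                 # variance-stabilizing transformation
normalized = quantile_normalize(transformed)  # removes sample-wise distribution shifts

print("per-sample medians before:", np.round(np.median(transformed, axis=0)[:5], 2))
print("per-sample medians after: ", np.round(np.median(normalized, axis=0)[:5], 2))
```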
10 CFR 963.16 - Postclosure suitability evaluation method.
Code of Federal Regulations, 2014 CFR
2014-01-01
Determination, Methods, and Criteria; § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total system...
76 FR 21985 - Notice of Final Priorities, Requirements, Definitions, and Selection Criteria
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-19
... only after a research base has been established to support the use of the assessments for such purposes..., research-based assessment practices. Discussion: We agree that the selection criteria should address the... selection criterion, which addresses methods of scoring, to allow for self-scoring of student performance on...
Determination of criteria weights in solving multi-criteria problems
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat
2014-12-01
A multi-criteria (MC) problem comprises units to be analyzed under a set of evaluation criteria. Solving an MC problem is essentially the process of finding the overall performance or quality of the units of analysis by using a certain aggregation method. Based on these overall measures of each unit, a decision can be made whether to sort them, to select the best, or to group them according to certain ranges. Prior to solving MC problems, the weights of the related criteria have to be determined, with the assumption that the weights represent the degree of importance or the degree of contribution towards the overall performance of the units. This paper presents two main approaches, called the subjective and objective approaches, where the first involves evaluator(s) while the latter depends on the intrinsic information contained in each criterion. Subjective and objective weights are defined if the criteria are assumed to be independent of each other; if they are dependent, another type of weight, called monotone measure or compound weights, represents the degree of interaction among the criteria. The measurement of individual or compound weights must be addressed in solving multi-criteria problems so that the solutions are more reliable, since in the real world evaluation criteria always come with different degrees of importance or are dependent on each other. As real MC problems have their own uniqueness, it is up to the decision maker(s) to decide which type of weights and which method are most applicable for the problem under study.
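The paper describes the subjective/objective distinction without prescribing a particular formula. As one concrete example of an objective approach that relies only on the intrinsic information in each criterion, the sketch below computes Shannon-entropy-based weights from a hypothetical performance matrix.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from the Shannon entropy of each criterion:
    criteria whose values vary more across units carry more information and
    therefore receive larger weights. Assumes strictly positive entries."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                          # column-wise proportions
    n = X.shape[0]
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)
    divergence = 1.0 - entropy
    return divergence / divergence.sum()

# Hypothetical performance matrix: 5 units of analysis x 4 benefit-type criteria.
X = np.array([
    [250., 16., 12., 5.],
    [200., 16., 8.,  3.],
    [300., 32., 16., 4.],
    [275., 32., 8.,  4.],
    [225., 16., 16., 2.],
])

print("objective (entropy) weights:", np.round(entropy_weights(X), 3))
```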
Linder, Roland; Orth, Isabelle; Hagen, E Christian; van der Woude, Fokko J; Schmitt, Wilhelm H
2011-06-01
To investigate the operating characteristics of the American College of Rheumatology (ACR) traditional format criteria for Wegener's granulomatosis (WG), the Sørensen criteria for WG and microscopic polyangiitis (MPA), and the Chapel Hill nomenclature for WG and MPA. Further, to develop and validate improved criteria for distinguishing WG from MPA by an artificial neural network (ANN) and by traditional approaches [classification tree (CT), logistic regression (LR)]. All criteria were applied to 240 patients with WG and 78 patients with MPA recruited by a multicenter study. To generate new classification criteria (ANN, CT, LR), 23 clinical measurements were assessed. Validation was performed by applying the same approaches to an independent monocenter cohort of 46 patients with WG and 21 patients with MPA. A total of 70.8% of the patients with WG and 7.7% of the patients with MPA from the multicenter cohort fulfilled the ACR criteria for WG (accuracy 76.1%). The accuracy of the Chapel Hill criteria for WG and MPA was only 35.0% and 55.3% (Sørensen criteria: 67.2% and 92.4%). In contrast, the ANN and CT achieved an accuracy of 94.3%, based on 4 measurements (involvement of nose, sinus, ear, and pulmonary nodules), all associated with WG. LR led to an accuracy of 92.8%. Inclusion of antineutrophil cytoplasmic antibodies did not improve the allocation. Validation of methods resulted in accuracy of 91.0% (ANN and CT) and 88.1% (LR). The ACR, Sørensen, and Chapel Hill criteria did not reliably separate WG from MPA. In contrast, an appropriately trained ANN and a CT differentiated between these disorders and performed better than LR.
Using a Malcolm Baldrige framework to understand high-performing clinical microsystems.
Foster, Tina C; Johnson, Julie K; Nelson, Eugene C; Batalden, Paul B
2007-10-01
BACKGROUND, OBJECTIVES AND METHOD: The Malcolm Baldrige National Quality Award (MBNQA) provides a set of criteria for organisational quality assessment and improvement that has been used by thousands of business, healthcare and educational organisations for more than a decade. The criteria can be used as a tool for self-evaluation, and are widely recognised as a robust framework for design and evaluation of healthcare systems. The clinical microsystem, as an organisational construct, is a systems approach for providing clinical care based on theories from organisational development, leadership and improvement. This study compared the MBNQA criteria for healthcare and the success factors of high-performing clinical microsystems to (1) determine whether microsystem success characteristics cover the same range of issues addressed by the Baldrige criteria and (2) examine whether this comparison might better inform our understanding of either framework. Both Baldrige criteria and microsystem success characteristics cover a wide range of areas crucial to high performance. Those particularly called out by this analysis are organisational leadership, work systems and service processes from a Baldrige standpoint, and leadership, performance results, process improvement, and information and information technology from the microsystem success characteristics view. Although in many cases the relationship between Baldrige criteria and microsystem success characteristics are obvious, in others the analysis points to ways in which the Baldrige criteria might be better understood and worked with by a microsystem through the design of work systems and a deep understanding of processes. Several tools are available for those who wish to engage in self-assessment based on MBNQA criteria and microsystem characteristics.
Assessing the reliability of ecotoxicological studies: An overview of current needs and approaches.
Moermond, Caroline; Beasley, Amy; Breton, Roger; Junghans, Marion; Laskowski, Ryszard; Solomon, Keith; Zahner, Holly
2017-07-01
In general, reliable studies are well designed and well performed, and enough details on study design and performance are reported to assess the study. For hazard and risk assessment in various legal frameworks, many different types of ecotoxicity studies need to be evaluated for reliability. These studies vary in study design, methodology, quality, and level of detail reported (e.g., reviews, peer-reviewed research papers, or industry-sponsored studies documented under Good Laboratory Practice [GLP] guidelines). Regulators have the responsibility to make sound and verifiable decisions and should evaluate each study for reliability in accordance with scientific principles regardless of whether they were conducted in accordance with GLP and/or standardized methods. Thus, a systematic and transparent approach is needed to evaluate studies for reliability. In this paper, 8 different methods for reliability assessment were compared using a number of attributes: categorical versus numerical scoring methods, use of exclusion and critical criteria, weighting of criteria, whether methods are tested with case studies, domain of applicability, bias toward GLP studies, incorporation of standard guidelines in the evaluation method, number of criteria used, type of criteria considered, and availability of guidance material. Finally, some considerations are given on how to choose a suitable method for assessing reliability of ecotoxicity studies. Integr Environ Assess Manag 2017;13:640-651. © 2016 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
Application of Multi-Criteria Decision Making (MCDM) Technique for Gradation of Jute Fibres
NASA Astrophysics Data System (ADS)
Choudhuri, P. K.
2014-12-01
Multi-Criteria Decision Making is a branch of Operations Research (OR) with a comparatively short history of about 40 years. It is popularly used in engineering, banking, policy making, etc., and can also be applied to decisions in daily life, such as selecting a car to purchase or selecting a bride or groom. Various MCDM methods, namely the Weighted Sum Model (WSM), Weighted Product Model (WPM), Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solutions (TOPSIS) and Elimination and Choice Translating Reality (ELECTRE), are available to solve decision-making problems, each with its own limitations; however, it is very difficult to decide which MCDM method is best. MCDM methods are prospective quantitative approaches for solving decision problems involving a finite number of alternatives and criteria. Very few research works in textiles have been carried out with this technique, particularly where deciding among several alternatives based on conflicting criteria is the major problem. Gradation of jute fibres on the basis of criteria such as strength, root content, defects, colour, density and fineness is an important task. The MCDM technique provides enough scope to be applied to the gradation of jute fibres, or to ranking several varieties, keeping in view a particular objective and on the basis of selection criteria and their relative weights. The present paper explores the application of the multiplicative AHP method to determine the quality values of selected jute fibres on the basis of the important criteria stated above and to rank them accordingly. A good agreement in ranking is observed between the existing Bureau of Indian Standards (BIS) grading and the proposed method.
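A minimal sketch of the multiplicative (weighted-product) style of aggregation the paper applies is given below. The fibre lots, criterion values, weights and the benefit/cost orientation of each criterion are all hypothetical, and the BIS grade boundaries are not reproduced.

```python
import numpy as np

# Hypothetical scores for 4 jute fibre lots on 4 criteria:
# strength (benefit), root content (cost), defects (cost), fineness (benefit here).
X = np.array([
    [26.0, 3.5, 2.0, 2.4],
    [22.0, 5.0, 3.5, 2.8],
    [28.0, 2.5, 1.5, 2.2],
    [24.0, 4.0, 2.5, 2.6],
])
weights = np.array([0.40, 0.25, 0.20, 0.15])
benefit = np.array([True, False, False, True])

# Cost criteria are inverted so that "larger is better" holds for every column.
R = np.where(benefit, X, 1.0 / X)
R = R / R.max(axis=0)                      # simple ratio normalization

# Multiplicative (weighted product) aggregation: weighted geometric mean.
quality = np.prod(R ** weights, axis=1)
ranking = np.argsort(-quality) + 1

print("quality values:", np.round(quality, 3))
print("ranking (best first):", ranking)
```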
The qualitative assessment of pneumatic actuators operation in terms of vibration criteria
NASA Astrophysics Data System (ADS)
Hetmanczyk, M. P.; Michalski, P.
2015-11-01
The work quality of pneumatic actuators can be assessed in terms of multiple criteria. In the case of complex systems with pneumatic actuators retained at end positions (where the piston impacts the cylinder covers), vibration criteria constitute the most reliable indicators. The paper presents an assessment of the operating condition of a rodless pneumatic cylinder with regard to selected vibration symptoms. On the basis of the performed analysis, the authors show meaningful premises for evaluating performance and tuning the end-position damping of the piston movement using the most common diagnostic tools (portable vibration analyzers). The presented method is useful for tuning parameters in industrial conditions.
NASA Astrophysics Data System (ADS)
Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian
2012-02-01
The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience. A lot of effort has been devoted to developing automatic detection techniques which might help not only in accelerating this process but also in avoiding disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied to detect epileptic EEGs. Decision making is performed in two stages: feature extraction, by computing the wavelet coefficients and the approximate entropy (ApEn), and detection, by using Neyman-Pearson criteria and an SVM. The detection performance of the proposed method is then evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. Compared with the Neyman-Pearson criteria, an SVM applied to these features achieved higher detection accuracy.
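The sketch below illustrates the feature-extraction-plus-SVM stage of such a detector: wavelet sub-band energies (via PyWavelets) and approximate entropy are computed per segment and fed to an RBF-kernel SVM. The "EEG" segments are crude synthetic signals, and the specific wavelet ('db4', four levels), ApEn settings and spike model are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D signal with tolerance r = r_factor * std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev distances
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

def features(segment):
    """Wavelet sub-band energies plus ApEn, mirroring the two-stage scheme above."""
    coeffs = pywt.wavedec(segment, "db4", level=4)
    energies = [np.sum(c ** 2) / len(c) for c in coeffs]
    return np.array(energies + [approximate_entropy(segment)])

# Synthetic data: 'epileptic' segments contain periodic spike-like activity,
# 'normal' segments are broadband noise (purely illustrative, not clinical EEG).
rng = np.random.default_rng(2)
def make_segment(epileptic, n=256):
    x = rng.standard_normal(n)
    if epileptic:
        x += 3.0 * np.sin(2 * np.pi * 3 * np.arange(n) / n) ** 9
    return x

labels = [True] * 40 + [False] * 40
X = np.array([features(make_segment(lbl)) for lbl in labels])
y = np.array([1 if lbl else 0 for lbl in labels])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```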
Hong, Na; Li, Dingcheng; Yu, Yue; Xiu, Qiongying; Liu, Hongfang; Jiang, Guoqian
2016-10-01
Constructing standard and computable clinical diagnostic criteria is an important but challenging research field in the clinical informatics community. The Quality Data Model (QDM) is emerging as a promising information model for standardizing clinical diagnostic criteria. To develop and evaluate automated methods for converting textual clinical diagnostic criteria into a structured format using QDM. We used a clinical Natural Language Processing (NLP) tool known as cTAKES to detect sentences and annotate events in diagnostic criteria. We developed a rule-based approach for assigning the QDM datatype(s) to an individual criterion, whereas we invoked a machine learning algorithm based on Conditional Random Fields (CRFs) for annotating attributes belonging to each particular QDM datatype. We manually developed an annotated corpus as the gold standard and used standard measures (precision, recall and f-measure) for the performance evaluation. We harvested 267 individual criteria with the datatypes of Symptom and Laboratory Test from 63 textual diagnostic criteria. We manually annotated attributes and values in 142 individual Laboratory Test criteria. The average performance of our rule-based approach was 0.84 precision, 0.86 recall, and 0.85 f-measure; the performance of the CRFs-based classification was 0.95 precision, 0.88 recall and 0.91 f-measure. We also implemented a web-based tool that automatically translates textual Laboratory Test criteria into the QDM XML template format. The results indicated that our approaches leveraging cTAKES and CRFs are effective in facilitating diagnostic criteria annotation and classification. Our NLP-based computational framework is a feasible and useful solution for developing diagnostic criteria representation and computerization. Copyright © 2016 Elsevier Inc. All rights reserved.
Alternative microbial methods: An overview and selection criteria.
Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke
2010-09-01
This study provides an overview and criteria for the selection of a method, other than the reference method, for the microbial analysis of foods. In the first part, an overview of the general characteristics of the rapid methods available, both for enumeration and detection, is given with reference to relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method that fits its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for the enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus) as well as for the detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.) is included, with reference to relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account for the selection of a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.
A Compact Review of Multi-criteria Decision Analysis Uncertainty Techniques
2013-02-01
3.4 PROMETHEE-GAIA Method...obtained (74). 3.4 PROMETHEE-GAIA Method. Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for...greater understanding of the importance of their selections. The PROMETHEE method was designed to perform MCDA while accounting for each of these
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision-making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first relates to the input data, while the second concerns the decision maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking method calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights
NASA Astrophysics Data System (ADS)
Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd
2017-11-01
This paper compares the performance of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best-ranked criteria weights, defined as the weights with the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach by using the TOPSIS method. The results show that the RR weights are the best, while the flipped classroom is the most suitable e-learning approach. This paper has developed a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
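The four rank-to-weight conversions compared above follow standard closed-form formulas; a minimal sketch is shown below, with the expert ranking of the five e-learning criteria invented for illustration (the RE exponent p is also an assumed value).

```python
import numpy as np

def rank_based_weights(ranks, p=2.0):
    """Four classic rank-to-weight conversions for n criteria ranked 1 (most
    important) to n: Rank Sum, Rank Reciprocal, Rank Exponent and Rank Order Centroid."""
    r = np.asarray(ranks, dtype=float)
    n = len(r)
    rs = (n - r + 1) / (n - r + 1).sum()
    rr = (1 / r) / (1 / r).sum()
    re = (n - r + 1) ** p / ((n - r + 1) ** p).sum()
    roc = np.array([np.sum(1 / np.arange(k, n + 1)) for k in r.astype(int)]) / n
    return {"RS": rs, "RR": rr, "RE": re, "ROC": roc}

# Hypothetical ranking of the five e-learning criteria by one expert (1 = most important).
ranks = [2, 1, 4, 3, 5]
for name, w in rank_based_weights(ranks).items():
    print(f"{name:>3}: {np.round(w, 3)}  (sum = {w.sum():.2f})")
```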
Denis, Cécile; Fatséas, Mélina; Auriacombe, Marc
2012-04-01
The DSM-5 Substance-Related Disorders Work Group proposed to include Pathological Gambling within the current Substance-Related Disorders section. The objective of the current report was to assess four possible sets of diagnostic criteria for Pathological Gambling. Gamblers (N=161) were defined as either Pathological or Non-Pathological according to four classification methods. (a) Option 1: the current DSM-IV criteria for Pathological Gambling; (b) Option 2: dropping the "Illegal Acts" criterion, while keeping the threshold at 5 required criteria endorsed; (c) Option 3: the proposed DSM-5 approach, i.e., deleting "Illegal Acts" and lowering the threshold of required criteria from 5 to 4; (d) Option 4: using a set of Pathological Gambling criteria modeled on the DSM-IV Substance Dependence criteria. Cronbach's alpha and eigenvalues were calculated for reliability; phi coefficients, discriminant function analyses, correlations and multivariate regression models were performed for validity; and kappa coefficients were calculated for the diagnostic consistency of each option. All criteria sets were reliable and valid. Some criteria had higher discriminant properties than others. The proposed DSM-5 criteria in Options 2 and 3 performed well and did not appear to alter the meanings of the diagnoses of Pathological Gambling from DSM-IV. Future work should further explore whether Pathological Gambling might be assessed using the same criteria as those used for Substance Use Disorders. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Drouin, Jillian L.; McAlpine, Caitlin T.; Primak, Kari A.; Kissel, Jaclyn
2013-01-01
Context: The effect of the application of kinesiotape to skin overlying musculature on measurable athletic-based performance outcomes in healthy individuals has not been well established. Objective: To systematically search and assess the quality of the literature on the effect of kinesiotape on athletic-based performance outcomes in healthy, active individuals. Methods: An electronic search strategy was conducted in the MANTIS, Cochrane Library and EBSCO databases. Retrieved articles that met the eligibility criteria were rated for methodological quality by using an adaptation of the critical appraisal criteria in Clinical Epidemiology by Sackett et al. Results: Ten articles met the inclusion criteria. Seven articles had positive results in at least one athletic-based performance measure compared to controls. Conclusion: Evidence is lacking to support the use of kinesiotape as a successful measure for improving athletic-based performance outcomes in healthy individuals. However, there is no evidence to show that kinesiotape has a negative effect on any of the performance measures. PMID:24302784
Espinosa de los Monteros, A; Parra, A; Hidalgo, R; Zambrana, M
1999-04-01
To study the sensitivity and specificity of the 50-g, 1-hour gestational glucose challenge test performed 1 to 2 hours after a non-standardized home breakfast in urban Mexican women by using three different gestational diabetes mellitus diagnostic criteria. Four hundred and forty-five consecutive women of 24-28 weeks gestation were studied. The glucose challenge test was performed in the fed state and a week later a fasting 100-g, 3-hours oral glucose tolerance test was carried out in all of them. Duplicate serum glucose concentrations were determined by a glucose-oxidase method. Sensitivity and specificity were calculated using three different diagnostic criteria for gestational diabetes mellitus. The glucose challenge test performed as indicated, with a cutoff of 7.8 mmol/L, had 88-89% sensitivity and 85-87% specificity when using as diagnostic criteria those proposed by the National Diabetes Data Group and by Carpenter & Coustan; by using Sacks et al. criteria, the values were 82% and 88%, respectively. Considering only pregnant women > or = 25 years of age, the sensitivity increased to 92% with the National Diabetes Data Group criteria. Pregnant women < 25 years of age had significantly lower blood glucose values than those with age > or = 25 years during the glucose tolerance test. For the general group the sensitivity of the glucose challenge test performed 1 to 2 hours after breakfast was similar, based on the National Diabetes Data Group and the Carpenter & Coustan's diagnostic criteria for gestational diabetes mellitus. However, when pregnant women > or = 25 years of age were considered, the use of the former criteria yielded a slightly better sensitivity.
ERIC Educational Resources Information Center
Connelly, E. M.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…
Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob
2016-08-01
In many areas of science it is customary to perform many tests, potentially millions, simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. The validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
Cheung, Winston; Myburgh, John; McGuinness, Shay; Chalmers, Debra; Parke, Rachael; Blyth, Fiona; Seppelt, Ian; Parr, Michael; Hooker, Claire; Blackwell, Nikki; DeMonte, Shannon; Gandhi, Kalpesh; Kol, Mark; Kerridge, Ian; Nair, Priya; Saunders, Nicholas M; Saxena, Manoj K; Thanakrishnan, Govindasamy; Naganathan, Vasi
2017-09-01
An influenza pandemic has the potential to overwhelm intensive care resources, but the views of the general public on how resources should be allocated in such a scenario were unknown. We aimed to determine Australian and New Zealand public opinion on how intensive care unit beds should be allocated during an influenza pandemic. A postal questionnaire was sent to 4000 randomly selected registered voters; 2000 people each from the Australian Electoral Commission and New Zealand Electoral Commission rolls. The respondents' preferred method to triage ICU patients in an influenza pandemic. Respondents chose from six methods: use a "first in, first served" approach; allow a senior doctor to decide; use pre-determined health department criteria; use random selection; use the patient's ability to pay; use the importance of the patient to decide. Respondents also rated each of the triage methods for fairness. Australian respondents preferred that patients be triaged to the ICU either by a senior doctor (43.2%) or by pre-determined health department criteria (38.7%). New Zealand respondents preferred that triage be performed by a senior doctor (45.9%). Respondents from both countries perceived triage by a senior doctor and by pre-determined health department criteria to be fair, and the other four methods of triage to be unfair. In an influenza pandemic, when ICU resources would be overwhelmed, survey respondents preferred that ICU triage be performed by a senior doctor, but also perceived the use of pre-determined triage criteria to be fair.
Mapp, Latisha; Klonicki, Patricia; Takundwa, Prisca; Hill, Vincent R; Schneeberger, Chandra; Knee, Jackie; Raynor, Malik; Hwang, Nina; Chambers, Yildiz; Miller, Kenneth; Pope, Misty
2015-11-01
The U.S. Environmental Protection Agency's (EPA) Water Laboratory Alliance (WLA) currently uses ultrafiltration (UF) for concentration of biosafety level 3 (BSL-3) agents from large volumes (up to 100-L) of drinking water prior to analysis. Most UF procedures require comprehensive training and practice to achieve and maintain proficiency. As a result, there was a critical need to develop quality control (QC) criteria. Because select agents are difficult to work with and pose a significant safety hazard, QC criteria were developed using surrogates, including Enterococcus faecalis and Bacillus atrophaeus. This article presents the results from the QC criteria development study and results from a subsequent demonstration exercise in which E. faecalis was used to evaluate proficiency using UF to concentrate large volume drinking water samples. Based on preliminary testing EPA Method 1600 and Standard Methods 9218, for E. faecalis and B. atrophaeus respectively, were selected for use during the QC criteria development study. The QC criteria established for Method 1600 were used to assess laboratory performance during the demonstration exercise. Based on the results of the QC criteria study E. faecalis and B. atrophaeus can be used effectively to demonstrate and maintain proficiency using ultrafiltration. Published by Elsevier B.V.
Criteria for Developing a Successful Privatization Project
1989-05-01
conceptualization and planning are required when pursuing privatization projects. In fact, privatization project proponents need to know how to...selection of projects for analysis, methods of acquiring information about these projects, and the analysis framework. Chapter IV includes the analysis. A...performed an analysis to determine common conceptual and creative approaches and lessons learned. This analysis was then used to develop criteria for
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood-susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, the spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
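The sketch below illustrates the Monte Carlo treatment of weight uncertainty in a stripped-down form: criterion weights are resampled (here from a Dirichlet distribution centred on a base weight vector, an assumption on my part) and a simple weighted-sum susceptibility score is recomputed per run, so that the robustness of each cell's classification can be summarized. The paper itself uses the local OWA operator rather than a plain weighted sum, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized criterion layers for 1000 raster cells x 6 criteria
# (all scaled to [0, 1], higher = more flood susceptible).
cells = rng.random((1000, 6))

# Base weights (e.g., from AHP) and Monte Carlo perturbation of the weights:
# Dirichlet sampling keeps each realization on the weight simplex.
base_w = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])
n_runs = 500
weight_samples = rng.dirichlet(base_w * 100, size=n_runs)   # concentration controls spread

# Simple weighted-sum susceptibility score for every run (the paper uses local OWA).
scores = weight_samples @ cells.T                            # shape (n_runs, n_cells)

# Summarize: how often each cell falls in the top 10 % across the weight realizations.
top10 = scores >= np.quantile(scores, 0.9, axis=1, keepdims=True)
prob_top10 = top10.mean(axis=0)

print("cells robustly in the top 10 % (prob > 0.9):", int((prob_top10 > 0.9).sum()))
```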
Development of ultrasonic methods for hemodynamic measurements
NASA Technical Reports Server (NTRS)
Histand, M. B.; Miller, C. W.; Wells, M. K.; Mcleod, F. D.; Greene, E. R.; Winter, D.
1975-01-01
A transcutaneous method to measure instantaneous mean blood flow in peripheral arteries of the human body was defined. Transcutaneous and implanted-cuff ultrasound velocity measurements were evaluated, and the accuracies of velocity, flow, and diameter measurements were assessed for steady flow. Performance criteria were established for the pulsed Doppler velocity meter (PUDVM), and performance tests were conducted. Several improvements are suggested.
Alhadlaq, Adel M; Alshammari, Osama F; Alsager, Saleh M; Neel, Khalid A Fouda; Mohamed, Ashry G
2015-06-01
The aim of this study was to evaluate the ability of admissions criteria at King Saud University (KSU), Riyadh, Saudi Arabia, to predict students' early academic performance at three health science colleges (medicine, dentistry, and pharmacy). A retrospective cohort study was conducted with data from the records of students enrolled in the three colleges from the 2008-09 to 2010-11 academic years. The admissions criteria-high school grade average (HSGA), aptitude test (APT) score, and achievement test (ACT) score-were the independent variables. The dependent variable was the average of students' first- and second-year grade point average (GPA). The results showed that the ACT was a better predictor of the students' early academic performance than the HSGA (β=0.368, β=0.254, respectively). No significant relationship was found between the APT and students' early academic performance (β=-0.019, p>0.01). The ACT was most predictive for pharmacy students (β=0.405), followed by dental students (β =0.392) and medical students (β=0.195). Overall, the current admissions criteria explained only 25.5% of the variance in the students' early academic performance. While the ACT and HSGA were found to be predictive of students' early academic performance in health colleges at KSU, the APT was not a strong predictor. Since the combined current admissions criteria for the health science colleges at KSU were weak predictors of the variance in early academic performance, it may be necessary to consider noncognitive evaluation methods during the admission process.
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We have applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without a history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalizing and making explicit the experimenter's choices, thus providing more reproducible results.
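The MCDM strategies in the paper concern how the genotyping-rate threshold itself is chosen; the sketch below only illustrates the underlying QC step of computing per-SNP and per-sample call rates on a genotype matrix and filtering at a chosen threshold. The genotype data, missingness rate and the 0.95 threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical genotype matrix: 1220 individuals x 5000 SNPs, coded 0/1/2,
# with np.nan marking missing calls.
G = rng.integers(0, 3, size=(1220, 5000)).astype(float)
G[rng.random(G.shape) < 0.03] = np.nan            # ~3 % random missingness

def genotyping_rates(G):
    """Per-SNP and per-sample call rates (fraction of non-missing genotypes)."""
    snp_rate = 1.0 - np.isnan(G).mean(axis=0)
    sample_rate = 1.0 - np.isnan(G).mean(axis=1)
    return snp_rate, sample_rate

snp_rate, sample_rate = genotyping_rates(G)

# A threshold chosen by the analyst (the paper's MCDM strategies formalize this choice).
threshold = 0.95
print(f"SNPs kept: {(snp_rate >= threshold).sum()} / {G.shape[1]}")
print(f"samples kept: {(sample_rate >= threshold).sum()} / {G.shape[0]}")
```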
Fuzzy MCDM Technique for Planning the Environment Watershed
NASA Astrophysics Data System (ADS)
Chen, Yi-Chun; Lien, Hui-Pang; Tzeng, Gwo-Hshiung; Yang, Lung-Shih; Yen, Leon
In the real world, decision-making problems are vague and uncertain in a number of ways. Most criteria have interdependent and interactive features, so they cannot be evaluated by conventional measurement methods. Thus, to approximate the human subjective evaluation process, it is more suitable to apply a fuzzy method to environment-watershed planning. This paper describes the design of a fuzzy decision support system using a multi-criteria analysis approach for selecting the best plan alternatives or strategies for an environment watershed. The Fuzzy Analytic Hierarchy Process (FAHP) method is used to determine the preference weightings of criteria for decision makers based on subjective perception. A questionnaire was used to elicit judgments from three related groups comprising fifteen experts. Subjectivity and vagueness in the criteria and alternatives are dealt with in the selection process and simulation results by using fuzzy numbers with linguistic terms. Incorporating the decision makers' attitudes towards preference, the overall performance value of each alternative can be obtained based on the concept of Fuzzy Multiple Criteria Decision Making (FMCDM). An evaluation example consisting of five alternatives, solicited from environment-watershed planning works in Taiwan, is illustrated to demonstrate the effectiveness and usefulness of the proposed approach.
Chen, Chi-Kuan; Lee, Ming-Yung; Lin, Wea-Lung; Wang, Yu-Ting; Han, Chih-Ping; Yu, Cheng-Ping; Chao, Wan-Ru
2014-01-01
The remarkable success of trastuzumab and other newly developed anti-HER2 (human epidermal growth factor receptor 2) therapies in breast, gastric, or gastroesophageal junction cancer patients prompted us to investigate the HER2 status and its possible therapeutic implications in mucinous epithelial ovarian cancer (EOC). However, there is currently no standardization of HER2 scoring criteria in mucinous EOC. In this study, we aimed to compare the assay performance characteristics of the 2007 and the 2013 American Society for Clinical Oncology and College of American Pathologists scoring methods. Forty-nine tissue microarray samples of mucinous EOC from Asian women were analyzed by immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) tests using the 2007 and the 2013 criteria, respectively. The overall concordance between IHC and FISH by the 2007 criteria was 97.92% (kappa = 0.921), and that by the 2013 criteria was 100% (kappa = 1.000). The percentage of Her2 FISH-amplified cases showed a significantly increasing trend across the corresponding HER2 IHC ordinals under both the 2007 and the 2013 criteria (P < 0.001, P < 0.001). After excluding equivocal cases, the specificity (100%) and positive predictive value (100%) were unchanged under either the 2007 or the 2013 criteria. The sensitivity (100%), negative predictive value (NPV) (100%), and accuracy (100%) of HER2 IHC were higher under the 2013 criteria than those (sensitivity 87.5%, NPV 97.6%, and accuracy 97.9%) under the 2007 criteria. Of the total 49 cases, the number (n = 4) of HER2 IHC equivocal results under the 2013 criteria was 4-fold higher than that (n = 1) under the 2007 criteria (8.16% vs 2.04%). In conclusion, if first tested by IHC, the 2013 criteria refer more equivocal HER2 IHC cases to Her2 FISH testing than the 2007 criteria, which decreases the false-negative rate of HER2 status and increases the detection rate of HER2 positivity in mucinous EOC. PMID:25501060
2015-01-01
Background: The project selection process is a crucial step for healthcare organizations when implementing six sigma programs in both administrative and caring processes. However, six sigma project selection is often defined as a decision-making process with interaction and feedback between criteria, so it is necessary to explore different methods to help healthcare companies determine the six sigma projects that provide the maximum benefits. This paper describes the application of both ANP (Analytic Network Process) and DEMATEL (Decision Making Trial and Evaluation Laboratory)-ANP in a public medical centre to establish the most suitable six sigma project; finally, these methods were compared to evaluate their performance in the decision-making process. Methods: ANP and DEMATEL-ANP were used to evaluate 6 six sigma project alternatives under an evaluation model composed of 3 strategies, 4 criteria and 15 sub-criteria. Judgement matrices were completed by the six sigma team, whose participants worked in different departments of the medical centre. Results: The improvement of care opportunity in obstetric outpatients was selected as the most suitable six sigma project, with a score of 0.117 as its contribution to the organization's goals. DEMATEL-ANP performed better in the decision-making process since it reduced the error probability due to interactions and feedback. Conclusions: ANP and DEMATEL-ANP effectively supported six sigma project selection processes, helping to create a complete framework that guarantees the prioritization of projects that provide maximum benefits to healthcare organizations. As DEMATEL-ANP performed better, it should be used by practitioners involved in decisions related to the implementation of six sigma programs in the healthcare sector, accompanied by adequate identification of the evaluation criteria that support the decision-making model. Thus, this comparative study contributes to choosing more effective approaches in this field. Suggestions for further work are also proposed so that these methods can be applied more adequately in six sigma project selection processes in healthcare. PMID:26391445
16 CFR 1000.29 - Directorate for Engineering Sciences.
Code of Federal Regulations, 2010 CFR
2010-01-01
... standards, product safety tests and test methods, performance criteria, design specifications, and quality control standards for consumer products, based on engineering and scientific methods. It conducts... consumer interest groups. The Directorate conducts human factors studies and research of consumer product...
Iranian Expert Opinion about Necessary Criteria for Hospitals Management Performance Assessments
Dadgar, Elham; Janati, Ali; Tabrizi, Jafar Sadegh; Asghari-Jafarabadi, Mohammad; Barati, Omid
2012-01-01
Background: Managers in the hospital should have enough managerial skill to cope with a complex environment. Defining a competency assessment framework for hospital management will help to establish core competencies for hospital managers. The aim of this study was to develop concrete and suitable performance assessment criteria using experts' views. Methods: In this qualitative study, a total of 20 professionals participated in interviews and Focus Group Discussions (FGD). Two informants were interviewed and 18 professionals participated in three focus group discussions. Discussions and interviews were well planned, the FGD environments were suitable, and after completion of the interviews the notes were checked with participants for completeness. The thematic analysis method was used for the analysis of the qualitative data. Results: Findings from 3 FGDs and 2 semi-structured interviews with 20 professionals were categorized according to themes. The findings were classified into 7 major and 41 sub-themes. The major themes include competencies related to planning, organization and staff performance management, leadership, information management, clinical governance and performance indicators. Conclusion: All participants had hospital administration experience, so their explanations were important in identifying the criteria and developing a hospital managers' performance assessment tool. In addition to professional perspectives and studies done in other countries, in order to design this kind of tool it is necessary to adapt the obtained findings to local hospital conditions. PMID:24688938
Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko
2016-11-01
In investigations of sexual assaults, the detection of human sperm is important, as is identifying a suspect. Recently, a kit for fluorescent staining of human spermatozoa, SPERM HY-LITER™, has become available. This kit allows microscopic observation of the heads of human sperm using an antibody tagged with a fluorescent dye. The kit is specific to human sperm and provides easy detection by luminescence. However, criteria need to be established to objectively evaluate the fluorescent signals and the staining efficiency of this kit. These criteria will be indispensable for the investigation of forensic samples. In the present study, the SPERM HY-LITER™ Express kit, an improved version of SPERM HY-LITER™, was evaluated using an image analysis procedure based on Laplacian and Gaussian methods. This method could be used to automatically select important regions of fluorescence produced by sperm. The fluorescence staining performance was evaluated and compared under various experimental conditions, such as for aged traces and in combination with other chemical staining methods. The morphological characteristics of human sperm were incorporated into the criteria for objective identification of sperm, based on quantified features of the fluorescent spots. Using the criteria, non-specific or insignificant fluorescent spots were excluded, and the specificity of the kit for human sperm was confirmed. The image analysis method and criteria established in this study are universal and could be applied under any experimental conditions. These criteria will increase the reliability of operator judgment in the analysis of human sperm samples in forensics.
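A minimal sketch of this style of Laplacian-of-Gaussian spot detection followed by simple acceptance criteria is given below, using scikit-image on a synthetic image; the radius and intensity thresholds are hypothetical and do not reproduce the kit-specific criteria developed in the study.

```python
import numpy as np
from skimage.feature import blob_log
from skimage.draw import disk

# Synthetic fluorescence image: a few bright sperm-head-sized spots plus background noise.
rng = np.random.default_rng(5)
img = rng.normal(0.05, 0.02, size=(256, 256)).clip(0, 1)
for cy, cx, r in [(60, 80, 4), (150, 120, 5), (200, 200, 4)]:
    rr, cc = disk((cy, cx), r, shape=img.shape)
    img[rr, cc] += 0.8

# Laplacian-of-Gaussian detection of candidate fluorescent spots.
blobs = blob_log(img, min_sigma=2, max_sigma=8, num_sigma=7, threshold=0.1)

# Simple morphological criteria (hypothetical): keep spots whose estimated
# radius and peak intensity fall within a plausible sperm-head range.
kept = []
for y, x, sigma in blobs:
    radius = sigma * np.sqrt(2)
    peak = img[int(y), int(x)]
    if 2.0 <= radius <= 10.0 and peak > 0.5:
        kept.append((y, x, radius))

print(f"candidate spots: {len(blobs)}, accepted after criteria: {len(kept)}")
```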
Ranking Schools' Academic Performance Using a Fuzzy VIKOR
NASA Astrophysics Data System (ADS)
Musani, Suhaina; Aziz Jemain, Abdul
2015-06-01
Rank determination is the structuring of alternatives in order of priority, based on the criteria determined for each alternative involved. The evaluation criteria are assessed and a composite index is then computed for each alternative for the purpose of arranging the alternatives in order of preference. This practice is known as multiple criteria decision making (MCDM). There are several common approaches to MCDM; one of them is known as VIKOR (multi-criteria optimization and compromise solution). The objective of this study is to develop a rational method for school ranking based on linguistic information for each criterion. Each school represents an alternative, while the results for a number of subjects serve as the criteria. The examination results for a subject are given as the percentage of students achieving each grade. Five grades, excellence, honours, average, pass and fail, are used to indicate the level of achievement linguistically. The linguistic variables are transformed into fuzzy numbers to form a composite index of school performance. Results showed that fuzzy set theory can overcome the limitations of MCDM when uncertainty exists in the data.
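The study applies a fuzzy VIKOR to linguistic grade distributions; the sketch below shows only the crisp VIKOR core (the S, R and Q indices) on a hypothetical matrix of composite subject scores, such as might be obtained after defuzzifying the grade information.

```python
import numpy as np

def vikor(X, w, v=0.5):
    """Classic (crisp) VIKOR: returns the S, R and Q indices; lower Q is better.
    All criteria are treated as benefit-type (higher score = better)."""
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    norm = w * (f_best - X) / (f_best - f_worst)
    S, R = norm.sum(axis=1), norm.max(axis=1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

# Hypothetical composite subject scores for 5 schools x 4 subjects.
X = np.array([
    [78., 82., 70., 65.],
    [85., 75., 80., 72.],
    [70., 68., 75., 80.],
    [90., 85., 78., 70.],
    [72., 80., 82., 75.],
])
w = np.array([0.3, 0.3, 0.2, 0.2])

S, R, Q = vikor(X, w)
print("Q indices:", np.round(Q, 3))
print("ranking (best first):", np.argsort(Q) + 1)
```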
NASA Astrophysics Data System (ADS)
Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
In recent years, a lot of research effort in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs both in detail and holistically. This integrated eye-ball verification procedure can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets derived from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, is evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without knowing which calibration method produced which hydrograph. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and can help in the evaluation of the automatic calibration method.
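Since the comparison above benchmarks against the Nash-Sutcliffe and Kling-Gupta efficiencies, the sketch below shows how these two objective criteria are computed; the observed and simulated runoff series are synthetic stand-ins for the manually and automatically calibrated COSERO runs.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): combines correlation (r),
    variability ratio (alpha) and bias ratio (beta)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily runoff series (m^3/s): observation plus two simulations
# standing in for the manually and automatically calibrated model runs.
rng = np.random.default_rng(6)
t = np.arange(365)
obs = 20 + 15 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 2.0, size=365)
sim_manual = obs * 0.95 + rng.normal(0, 3, size=365)
sim_auto = obs * 1.05 + rng.normal(0, 2, size=365)

for name, sim in [("manual", sim_manual), ("automatic", sim_auto)]:
    print(f"{name:>9}: NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```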
42 CFR 67.101 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Section 1142 of the Social Security Act to support research on the outcomes, effectiveness, and... services and procedures; projects to improve methods and data bases for outcomes and effectiveness research..., performance measures, and review criteria; conferences; and research on dissemination methods. (b) The...
Yamamoto, Keiichi; Sumi, Eriko; Yamazaki, Toru; Asai, Keita; Yamori, Masashi; Teramukai, Satoshi; Bessho, Kazuhisa; Yokode, Masayuki; Fukushima, Masanori
2012-01-01
Objective The use of electronic medical record (EMR) data is necessary to improve clinical research efficiency. However, it is not easy to identify patients who meet research eligibility criteria and collect the necessary information from EMRs because the data collection process must integrate various techniques, including the development of a data warehouse and translation of eligibility criteria into computable criteria. This research aimed to demonstrate an electronic medical records retrieval system (ERS) and an example of a hospital-based cohort study that identified both patients and exposure with an ERS. We also evaluated the feasibility and usefulness of the method. Design The system was developed and evaluated. Participants In total, 800 000 cases of clinical information stored in EMRs at our hospital were used. Primary and secondary outcome measures The feasibility and usefulness of the ERS, the method to convert text from eligible criteria to computable criteria, and a confirmation method to increase research data accuracy. Results To comprehensively and efficiently collect information from patients participating in clinical research, we developed an ERS. To create the ERS database, we designed a multidimensional data model optimised for patient identification. We also devised practical methods to translate narrative eligibility criteria into computable parameters. We applied the system to an actual hospital-based cohort study performed at our hospital and converted the test results into computable criteria. Based on this information, we identified eligible patients and extracted data necessary for confirmation by our investigators and for statistical analyses with our ERS. Conclusions We propose a pragmatic methodology to identify patients from EMRs who meet clinical research eligibility criteria. Our ERS allowed for the efficient collection of information on the eligibility of a given patient, reduced the labour required from the investigators and improved the reliability of the results. PMID:23117567
The BEACH Act of 2000 directed the U.S. EPA to establish more expeditious methods for the detection of pathogen indicators in coastal waters, as well as new water quality criteria based on these methods. Progress has been made in developing a quantitative PCR (qPCR) method for en...
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
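As a rough illustration of the β-content tolerance interval idea referenced above, the sketch below computes a two-sided normal tolerance interval with Howe's approximation and compares it against an acceptance limit; the content level, confidence, acceptance limit and simulated errors are all placeholders, not the paper's simulation settings.

```python
import numpy as np
from scipy import stats

def beta_content_tolerance_interval(x, p=0.90, conf=0.90):
    """Two-sided normal tolerance interval covering at least a proportion p
    of the population with the given confidence (Howe's k-factor approximation)."""
    x = np.asarray(x, float)
    n = x.size
    df = n - 1
    z = stats.norm.ppf((1 + p) / 2)
    chi2 = stats.chi2.ppf(1 - conf, df)
    k = z * np.sqrt(df * (1 + 1.0 / n) / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

# Hypothetical errors (measured minus true, in %) from a validation run
rng = np.random.default_rng(1)
errors = rng.normal(loc=1.0, scale=2.0, size=30)

low, high = beta_content_tolerance_interval(errors)
acceptance = 10.0   # illustrative total-error acceptance limit of +/-10%
verdict = "accept" if (low > -acceptance and high < acceptance) else "reject"
print(f"Tolerance interval: [{low:.2f}, {high:.2f}] -> {verdict} the method")
```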
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
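As a minimal illustration of the resizing logic that optimality criteria methods generalize, the sketch below applies the classic stress-ratio (fully stressed design) update to a hypothetical statically determinate truss; the member forces, allowable stress and minimum gauge are invented, and this is not the NASA Lewis OC code.

```python
import numpy as np

# Hypothetical member forces (kN) of a statically determinate truss and an
# allowable stress (kN/cm^2); with fixed forces the stress-ratio rule converges quickly.
forces = np.array([120.0, -85.0, 60.0, -40.0])
sigma_allow = 16.0
areas = np.full_like(forces, 5.0)           # initial member areas (cm^2)
min_area = 0.5                              # minimum gauge constraint

for it in range(20):
    stresses = forces / areas
    # Fully stressed design / optimality criteria resizing rule:
    # scale each area by the ratio of actual to allowable stress.
    new_areas = np.maximum(areas * np.abs(stresses) / sigma_allow, min_area)
    if np.allclose(new_areas, areas, rtol=1e-6):
        break
    areas = new_areas

print("Member areas (cm^2):", np.round(areas, 3))
print("Member stresses    :", np.round(forces / areas, 3))
```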
Lebedeva, Elena R; Gurary, Natalia M; Gilev, Denis V; Olesen, Jes
2018-03-01
Introduction The International Classification of Headache Disorders 3rd edition beta (ICHD-3 beta) gave alternative diagnostic criteria for 1.2 migraine with aura (MA) and 1.2.1 migraine with typical aura (MTA) in the appendix. The latter were presumed to better differentiate transient ischemic attacks (TIA) from MA. The aim of the present study was to field-test these criteria. Methods Soon after admission, a neurologist interviewed 120 consecutive patients diagnosed with TIA after MRI or CT. Semi-structured interview forms addressed all details of the TIA episode and all information necessary to apply the ICHD-3 beta diagnostic criteria for 1.2, 1.2.1, A1.2 and A1.2.1. Results Requiring at least one identical previous attack, the main body and the appendix criteria performed almost equally well. But requiring only one attack, more than a quarter of TIA patients also fulfilled the main body criteria for 1.2. Specificity was as follows for one attack: 1.2: 0.73, A1.2: 0.91, 1.2.1: 0.88 and A1.2.1: 1.0. Sensitivity when tested against ICHD-2 criteria was 100% for the main body criteria (because they were unchanged), 96% for A1.2 and 94% for A1.2.1. Conclusion The appendix criteria performed much better than the main body criteria for 1.2 MA and 1.2.1 MTA when diagnosing one attack (probable MA). We recommend that the appendix criteria should replace the main body criteria in the ICHD-3.
A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems
NASA Astrophysics Data System (ADS)
Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.
2017-05-01
Adopting the most appropriate technology for developing applications on an integrated software system for enterprises, may result in great savings both in cost and hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro -WD, Floorplan Manager - FPM and CRM WebClient UI - CRM WCUI are multi-criteria evaluated in terms of the obtained performances through the implementation of the same web business application. To establish the hierarchy a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software. This software is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technologies hierarchy.
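A compact sketch of the TOPSIS step combined with externally supplied (e.g. AHP-derived) weights follows; the decision matrix, weights and criterion directions are invented and do not reproduce the paper's SAP technology evaluation.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    benefit[j] is True if larger values of criterion j are better."""
    X = np.asarray(matrix, float)
    w = np.asarray(weights, float) / np.sum(weights)
    # Vector normalization, then weighting
    V = w * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)       # closeness coefficient, higher is better

# Hypothetical scores of three technologies on four criteria
matrix = [[7, 9, 250, 3],
          [8, 7, 310, 4],
          [6, 8, 190, 5]]
weights = [0.35, 0.25, 0.25, 0.15]        # e.g. taken from an AHP priority vector
benefit = [True, True, False, True]       # third criterion (e.g. effort) is a cost
scores = topsis(matrix, weights, benefit)
print("Closeness coefficients:", np.round(scores, 3))
print("Ranking (best first):", np.argsort(-scores))
```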
A Pilot Study on Modeling of Diagnostic Criteria Using OWL and SWRL.
Hong, Na; Jiang, Guoqian; Pathak, Jyotishiman; Chute, Christopher G
2015-01-01
The objective of this study is to describe our efforts in a pilot study on modeling diagnostic criteria using a Semantic Web-based approach. We reused the basic framework of the ICD-11 content model and refined it into an operational model in the Web Ontology Language (OWL). The refinement is based on a bottom-up analysis method, in which we analyzed data elements (including value sets) in a collection (n=20) of randomly selected diagnostic criteria. We also performed a case study to formalize rule logic in the diagnostic criteria of metabolic syndrome using the Semantic Web Rule Language (SWRL). The results demonstrated that it is feasible to use OWL and SWRL to formalize the diagnostic criteria knowledge, and to execute the rules through reasoning.
Assessing the driving performance of older adult drivers: on-road versus simulated driving.
Lee, Hoe C; Cameron, Don; Lee, Andy H
2003-09-01
To validate a laboratory-based driving simulator in measuring on-road driving performance, 129 older adult drivers were assessed with both the simulator and an on-road test. The driving performance of the participants was gauged by appropriate and reliable age-specific assessment criteria, which were found to be negatively correlated with age. Using principal component analysis, two performance indices were developed from the criteria to represent the overall performance in simulated driving and the on-road assessment. There was significant positive association between the two indices, with the simulated driving performance index explaining over two-thirds of the variability of the on-road driving performance index, after adjustment for age and gender of the drivers. The results supported the validity of the driving simulator and it is a safer and more economical method than the on-road testing to assess the driving performance of older adult drivers.
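The composite-index construction by principal component analysis can be sketched as follows; the criterion scores are random placeholders standing in for the age-specific assessment criteria.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder scores of 129 drivers on five assessment criteria
scores = rng.normal(size=(129, 5))

# Standardize, then take the first principal component as a composite performance index
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=1)
index = pca.fit_transform(z).ravel()

print("Variance explained by the first component:",
      round(float(pca.explained_variance_ratio_[0]), 3))
print("First five index values:", np.round(index[:5], 3))
```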
Validation of a new modal performance measure for flexible controllers design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simo, J.B.; Tahan, S.A.; Kamwa, I.
1996-05-01
A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes, in order to take into account their damping ratios while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers, on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
The reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for the reference standard are always immeasurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standard during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance, comprising feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of that priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to consider the benefits and risks of the alternatives comprehensively. It was an effective and practical tool for the optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
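The standard AHP priority and consistency calculation underlying such a selection can be sketched as below; the pairwise comparison matrix is a made-up example, not the study's expert judgements.

```python
import numpy as np

# Random-index values for Saaty's consistency ratio, n = 1..6
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_priorities(A):
    """Principal-eigenvector priorities and consistency ratio of a
    pairwise comparison matrix A."""
    A = np.asarray(A, float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci / RI[n]

# Hypothetical pairwise comparisons of four criteria
A = [[1,   3,   5,   2],
     [1/3, 1,   2,   1/2],
     [1/5, 1/2, 1,   1/3],
     [1/2, 2,   3,   1]]
weights, cr = ahp_priorities(A)
print("Priorities:", np.round(weights, 3), " Consistency ratio:", round(cr, 3))
```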
Survey of existing performance requirements in codes and standards for light-frame construction
G. E. Sherwood
1980-01-01
Present building codes and standards are a combination of specifications and performance criteria. Where specifications prevail, the introduction of new materials or methods can be a long, cumbersome process. To facilitate the introduction of new technology, performance requirements are becoming more prevalent. In some areas, there is a lack of information on which to...
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.
Ortíz, Miguel A; Felizzola, Heriberto A; Nieto Isaza, Santiago
2015-01-01
The project selection process is a crucial step for healthcare organizations when implementing six sigma programs in both administrative and caring processes. However, six-sigma project selection is often defined as a decision making process with interaction and feedback between criteria, so it is necessary to explore different methods to help healthcare companies to determine the Six-sigma projects that provide the maximum benefits. This paper describes the application of both ANP (Analytic Network Process) and DEMATEL (Decision-Making Trial and Evaluation Laboratory)-ANP in a public medical centre to establish the most suitable six sigma project and finally, these methods were compared to evaluate their performance in the decision making process. ANP and DEMATEL-ANP were used to evaluate 6 six sigma project alternatives under an evaluation model composed of 3 strategies, 4 criteria and 15 sub-criteria. Judgement matrices were completed by the six sigma team whose participants worked in different departments of the medical centre. The improvement of care opportunity in obstetric outpatients was selected as the most suitable six sigma project, with a score of 0.117 as its contribution to the organization's goals. DEMATEL-ANP performed better in the decision making process since it reduced the error probability due to interactions and feedback. ANP and DEMATEL-ANP effectively supported six sigma project selection processes, helping to create a complete framework that guarantees the prioritization of projects that provide maximum benefits to healthcare organizations. As DEMATEL-ANP performed better, it should be used by practitioners involved in decisions related to the implementation of six sigma programs in the healthcare sector, accompanied by the adequate identification of the evaluation criteria that support the decision making model. Thus, this comparative study contributes to choosing more effective approaches in this field. Suggestions of further work are also proposed so that these methods can be applied more adequately in six sigma project selection processes in healthcare.
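A compact sketch of the DEMATEL step that separates cause from effect criteria before it is combined with ANP; the direct-influence matrix is illustrative, not the medical centre's judgements.

```python
import numpy as np

def dematel(direct):
    """Return the total-relation matrix T and the prominence (D+R)
    and relation (D-R) vectors from a direct-influence matrix."""
    X = np.asarray(direct, float)
    # Normalize so that the largest row/column sum becomes 1
    s = max(X.sum(axis=1).max(), X.sum(axis=0).max())
    N = X / s
    T = N @ np.linalg.inv(np.eye(len(X)) - N)
    D, R = T.sum(axis=1), T.sum(axis=0)
    return T, D + R, D - R

# Hypothetical 0-4 influence scores among four evaluation criteria
direct = [[0, 3, 2, 1],
          [1, 0, 3, 2],
          [2, 1, 0, 3],
          [1, 2, 1, 0]]
T, prominence, relation = dematel(direct)
print("Prominence (D+R):", np.round(prominence, 2))
print("Relation   (D-R):", np.round(relation, 2), "(positive = cause criterion)")
```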
El Hanandeh, Ali; El-Zein, Abbas
2010-01-01
A modified version of the multi-criteria decision aid, ELECTRE III has been developed to account for uncertainty in criteria weightings and threshold values. The new procedure, called ELECTRE-SS, modifies the exploitation phase in ELECTRE III, through a new definition of the pre-order and the introduction of a ranking index (RI). The new approach accommodates cases where incomplete or uncertain preference data are present. The method is applied to a case of selecting a management strategy for the bio-degradable fraction in the municipal solid waste of Sydney. Ten alternatives are compared against 11 criteria. The results show that anaerobic digestion (AD) and composting of paper are less environmentally sound options than recycling. AD is likely to out-perform incineration where a market for heating does not exist. Moreover, landfilling can be a sound alternative, when considering overall performance and conditions of uncertainty.
The effectiveness of strategies to change organisational culture to improve healthcare performance
Parmelli, Elena; Flodgren, Gerd; Schaafsma, Mary Ellen; Baillie, Nick; Beyer, Fiona R; Eccles, Martin P
2014-01-01
Background Organisational culture is an anthropological metaphor used to inform research and consultancy and to explain organisational environments. Great emphasis has been placed during the last years on the need to change organisational culture in order to pursue effective improvement of healthcare performance. However, the precise nature of organisational culture in healthcare policy often remains underspecified and the desirability and feasibility of strategies to be adopted has been called into question. Objectives To determine the effectiveness of strategies to change organisational culture in order to improve healthcare performance. To examine the effectiveness of these strategies according to different patterns of organisational culture. Search methods We searched the following electronic databases for primary studies: The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, Sociological Abstracts, Web of Knowledge, PsycINFO, Business and Management, EThOS, Index to Theses, Intute, HMIC, SIGLE, and Scopus until October 2009. The Database of Abstracts of Reviews of Effectiveness (DARE) was searched for related reviews. We also searched the reference lists of all papers and relevant reviews identified, and we contacted experts in the field for advice on further potential studies. Selection criteria We considered randomised controlled trials (RCTs) or well designed quasi-experimental studies, controlled clinical trials (CCTs), controlled before and after studies (CBAs) and interrupted time series analyses (ITS) meeting the quality criteria used by the Cochrane Effective Practice and Organisation of Care Group (EPOC). Studies should be set in any type of healthcare organisation in which strategies to change organisational culture in order to improve healthcare performance were applied. Our main outcomes were objective measures of professional performance and patient outcome. Data collection and analysis At least two review authors independently applied the criteria for inclusion and exclusion criteria to scan titles and abstracts and then to screen the full reports of selected citations. At each stage results were compared and discrepancies solved through discussion. Main results The search strategy yielded 4239 records. After the full text assessment, no studies met the quality criteria used by the EPOC Group and evaluated the effectiveness of strategies to change organisational culture to improve healthcare performance. Authors’ conclusions It is not possible to draw any conclusions about the effectiveness of strategies to change organisational culture because we found no studies that fulfilled the methodological criteria for this review. Research efforts should focus on strengthening the evidence about the effectiveness of methods to change organisational culture to improve health care performance. PMID:21249706
Comparative analysis of methods for detecting interacting loci
2011-01-01
Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295
Comparative analysis of methods for detecting interacting loci.
Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue
2011-07-05
Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
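As a concrete illustration of the LRIT baseline mentioned above, the sketch below fits a logistic regression with an interaction term between two simulated SNPs using statsmodels; the genotype frequencies and effect sizes are placeholders rather than the study's simulation settings.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
# Simulate two SNPs coded 0/1/2 (minor-allele counts) and a purely epistatic effect
snp1 = rng.binomial(2, 0.3, n)
snp2 = rng.binomial(2, 0.25, n)
logit = -1.0 + 0.8 * (snp1 * snp2)          # interaction effect only
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression with an interaction term (LRIT-style test)
X = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary(xname=["const", "snp1", "snp2", "snp1 x snp2"]))
```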
[Multifactorial method for assessing the physical work capacity of mice].
Dubovik, B V; Bogomazov, S D
1987-01-01
Based on the swimming test according to Kiplinger, criteria were developed in experiments on (CBA X C57BL)F1 mice for evaluating animal performance during repeated swimming of a standard distance, measuring power, volume of work and rate of fatigue development in relative units. A study of the effects of sydnocarb, bemethyl and phenazepam on various parameters of the physical performance of mice led to the conclusion that the proposed method provides a more informative evaluation of pharmacological effects on the physical performance of animals than methods based on recording the time taken to perform the load.
Thermal Load Considerations for Detonative Combustion-Based Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Perkins, H. Douglas
2004-01-01
An analysis was conducted to assess methods for, and performance implications of, cooling the passages (tubes) of a pulse detonation-based combustor conceptually installed in the core of a gas turbine engine typical of regional aircraft. Temperature-limited material stress criteria were developed from common-sense engineering practice, and available material properties. Validated, one-dimensional, numerical simulations were then used to explore a variety of cooling methods and establish whether or not they met the established criteria. Simulation output data from successful schemes were averaged and used in a cycle-deck engine simulation in order to assess the impact of the cooling method on overall performance. Results were compared to both a baseline engine equipped with a constant-pressure combustor and to one equipped with an idealized detonative combustor. Major findings indicate that thermal loads in these devices are large, but potentially manageable. However, the impact on performance can be substantial. Nearly one half of the ideally possible specific fuel consumption (SFC) reduction is lost due to cooling of the tubes. Details of the analysis are described, limitations are presented, and implications are discussed.
The Performance of IRT Model Selection Methods with Mixed-Format Tests
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2012-01-01
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Determination of laser cutting process conditions using the preference selection index method
NASA Astrophysics Data System (ADS)
Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan
2017-03-01
Determination of adequate parameter settings for improvement of multiple quality and productivity characteristics at the same time is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for application of the PSI method is that it represents an almost unexplored multi-criteria decision making (MCDM) method, and moreover, this method does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determination of laser cutting parameter settings, the application of the PSI method was explained in detail. Experiments were conducted using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it cannot be useful in situations where a large number of alternatives have attribute values (performances) very close to those that are preferred.
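A sketch of the PSI calculation described above, assuming linear normalization; the matrix of cut-quality values is invented, whereas the study used measurements from the Taguchi L27 experiment.

```python
import numpy as np

def psi(matrix, benefit):
    """Preference Selection Index: rank alternatives without explicit criteria weights."""
    X = np.asarray(matrix, float)
    # Linear normalization (benefit: x/max, cost: min/x)
    R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    mean = R.mean(axis=0)
    pv = ((R - mean) ** 2).sum(axis=0)     # preference variation per criterion
    omega = 1.0 - pv                       # deviation in preference value
    psi_w = omega / omega.sum()            # overall preference (implicit weights)
    return R @ psi_w                       # selection index, higher is better

# Hypothetical cutting conditions x criteria: [roughness, HAZ, kerf width, MRR]
matrix = [[2.1, 0.20, 0.35, 55],
          [1.8, 0.25, 0.40, 48],
          [2.5, 0.18, 0.32, 60],
          [2.0, 0.22, 0.38, 52]]
benefit = [False, False, False, True]      # only MRR is to be maximized
scores = psi(matrix, benefit)
print("PSI scores:", np.round(scores, 3), " best alternative:", int(np.argmax(scores)))
```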
Comparative analysis of autofocus functions in digital in-line phase-shifting holography.
Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António
2016-09-20
Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
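Two of the better-performing spatial metrics named above, the normalized variance and the Tenenbaum (Tenengrad) gradient, can be computed roughly as follows; the synthetic image pair only illustrates that the metrics respond to defocus blur.

```python
import numpy as np
from scipy import ndimage

def normalized_variance(img):
    """Variance of intensities normalized by the mean intensity."""
    img = np.asarray(img, float)
    mu = img.mean()
    return ((img - mu) ** 2).mean() / mu

def tenengrad(img):
    """Tenenbaum gradient: sum of squared Sobel gradient magnitudes."""
    img = np.asarray(img, float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.sum(gx ** 2 + gy ** 2)

# Synthetic amplitude reconstructions: a "sharp" and a blurred version
rng = np.random.default_rng(0)
sharp = rng.random((256, 256))
blurred = ndimage.gaussian_filter(sharp, sigma=3)

for name, img in [("sharp", sharp), ("blurred", blurred)]:
    print(f"{name:8s} NV={normalized_variance(img):.4f}  Tenengrad={tenengrad(img):.1f}")
```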
Multi-Criteria Decision Making Approaches for Quality Control of Genome-Wide Association Studies
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-01-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of errors. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking into account at the same time these criteria and the experimenter’s preferences. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on the Multi-Criteria Decision Making theory. We have applied our method on a real dataset composed by 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalize and make explicit the experimenter’s choices thus providing more reproducible results. PMID:21347174
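A minimal sketch of the genotyping (call) rate computation that such thresholds act on; the genotype matrix, missingness level and the 95%/98% cut-offs are placeholders, not values produced by the proposed MCDM strategies.

```python
import numpy as np

rng = np.random.default_rng(7)
# Genotype matrix (individuals x SNPs): 0/1/2 = genotypes, -1 = missing call
geno = rng.integers(0, 3, size=(500, 1000))
missing_mask = rng.random(geno.shape) < 0.03
geno = np.where(missing_mask, -1, geno)

snp_call_rate = (geno != -1).mean(axis=0)        # per-SNP genotyping rate
sample_call_rate = (geno != -1).mean(axis=1)     # per-individual genotyping rate

snp_threshold, sample_threshold = 0.95, 0.98     # illustrative QC thresholds
keep_snps = snp_call_rate >= snp_threshold
keep_samples = sample_call_rate >= sample_threshold
print(f"SNPs kept: {keep_snps.sum()}/{geno.shape[1]}, "
      f"samples kept: {keep_samples.sum()}/{geno.shape[0]}")
```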
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
40 CFR 63.925 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... Each potential leak interface (i.e., a location where organic vapor leakage could occur) on the cover... secured in the closed position. (3) The detection instrument shall meet the performance criteria of Method... in the unit, not for each individual organic constituent. (4) The detection instrument shall be...
40 CFR 63.905 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... Each potential leak interface (i.e., a location where organic vapor leakage could occur) on the cover... secured in the closed position. (3) The detection instrument shall meet the performance criteria of Method... in the unit, not for each individual organic constituent. (4) The detection instrument shall be...
Zambaldi, Mattia; Beasley, Ian; Rushton, Alison
2017-08-01
Hamstring muscle injury (HMI) is the most common injury in professional football and has a high re-injury rate. Despite this, there are no validated criteria to support return to play (RTP) decisions. To use the Delphi method to reach expert consensus on RTP criteria after HMI in professional football. All professional football clubs in England (n=92) were invited to participate in a 3-round Delphi study. Round 1 requested a list of criteria used for RTP decisions after HMI. Responses were independently collated by 2 researchers under univocal definitions of RTP criteria. In round 2 participants rated their agreement for each RTP criterion on a 1-5 Likert Scale. In round 3 participants re-rated the criteria that had reached consensus in round 2. Descriptive statistics and Kendall's coefficient of concordance enabled interpretation of consensus. Participation rate was limited at 21.7% (n=20), while retention rate was high throughout the 3 rounds (90.0%, 85.0%, 90.0%). Round 1 identified 108 entries with varying definitions that were collated into a list of 14 RTP criteria. Rounds 2 and 3 identified 13 and 12 criteria reaching consensus, respectively. Five domains of RTP assessment were identified: functional performance, strength, flexibility, pain and player's confidence. The highest-rated criteria were in the functional performance domain, with particular importance given to sprint ability. This study defined a list of consensually agreed RTP criteria for HMI in professional football. Further work is now required to determine the validity of the identified criteria. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
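Kendall's coefficient of concordance, used above to gauge consensus, can be computed as in the sketch below (no tie correction); the Likert ratings are an invented example, not the Delphi panel's data.

```python
import numpy as np
from scipy import stats

def kendalls_w(ratings):
    """Kendall's coefficient of concordance (no tie correction).
    ratings: raters x items matrix of scores."""
    ratings = np.asarray(ratings, float)
    m, n = ratings.shape
    ranks = np.vstack([stats.rankdata(r) for r in ratings])   # rank items per rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical 1-5 Likert agreement of 6 experts on 8 candidate RTP criteria
ratings = np.array([[5, 4, 4, 3, 5, 2, 4, 3],
                    [5, 5, 4, 3, 4, 2, 3, 3],
                    [4, 4, 5, 2, 5, 3, 4, 2],
                    [5, 4, 4, 3, 4, 2, 4, 3],
                    [4, 5, 3, 3, 5, 1, 4, 2],
                    [5, 4, 4, 2, 5, 2, 3, 3]])
print("Kendall's W:", round(kendalls_w(ratings), 3))
```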
Lateral-Directional Eigenvector Flying Qualities Guidelines for High Performance Aircraft
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1996-01-01
This report presents the development of lateral-directional flying qualities guidelines with application to eigenspace (eigenstructure) assignment methods. These guidelines will assist designers in choosing eigenvectors to achieve desired closed-loop flying qualities or performing trade-offs between flying qualities and other important design requirements, such as achieving realizable gain magnitudes or desired system robustness. This has been accomplished by developing relationships between the system's eigenvectors and the roll rate and sideslip transfer functions. Using these relationships, along with constraints imposed by system dynamics, key eigenvector elements are identified and guidelines for choosing values of these elements to yield desirable flying qualities have been developed. Two guidelines are developed - one for low roll-to-sideslip ratio and one for moderate-to-high roll-to-sideslip ratio. These flying qualities guidelines are based upon the Military Standard lateral-directional coupling criteria for high performance aircraft - the roll rate oscillation criteria and the sideslip excursion criteria. Example guidelines are generated for a moderate-to-large, an intermediate, and low value of roll-to-sideslip ratio.
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization was introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
Performance Assessment of Communicable Disease Surveillance in Disasters: A Systematic Review
Babaie, Javad; Ardalan, Ali; Vatandoost, Hasan; Goya, Mohammad Mehdi; Akbarisari, Ali
2015-01-01
Background: This study aimed to identify the indices and frameworks that have been used to assess the performance of communicable disease surveillance (CDS) in response to disasters and other emergencies, including infectious disease outbreaks. Method: In this systematic review, PubMed, Google Scholar, Scopus, ScienceDirect, ProQuest databases and grey literature were searched until the end of 2013. All retrieved titles were examined in accordance with inclusion criteria. Abstracts of the relevant titles were reviewed and eligible abstracts were included in a list for data abstraction. Finally, the study variables were extracted. Results: Sixteen articles and one book were found relevant to our study objectives. In these articles, 31 criteria and 35 indicators were used or suggested for the assessment/evaluation of the performance of surveillance systems in disasters. The Centers for Disease Control (CDC) updated guidelines for the evaluation of public health surveillance systems were the most widely used. Conclusion: Despite the importance of performance assessment in improving CDS in response to disasters, there is a lack of clear and accepted frameworks. There is also no agreement on the use of existing criteria and indices. The only relevant framework is the CDC guideline, which is a common framework for assessing public health surveillance systems as a whole. There is an urgent need to develop appropriate frameworks, criteria, and indices for specifically assessing the performance of CDS in response to disasters and other emergencies, including infectious diseases outbreaks. Key words: Disasters, Emergencies, Communicable Diseases, Surveillance System, Performance Assessment PMID:25774323
42 CFR 421.120 - Performance criteria.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 3 2013-10-01 2013-10-01 false Performance criteria. 421.120 Section 421.120... (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE CONTRACTING Intermediaries § 421.120 Performance criteria. (a) Application of performance criteria. As part of the intermediary evaluations authorized by...
Santiago, E C; Bello, F B B
2003-06-01
The Association of Official Analytical Chemists (AOAC) Standard Method 972.23 (dry ashing and flame atomic absorption spectrophotometry (FAAS)), applied to the analysis of lead in tuna, was validated in three selected local laboratories to determine the acceptability of the method to both the Codex Alimentarius Commission (Codex) and the European Union (EU) Commission for monitoring lead in canned tuna. Initial validation showed that the standard AOAC method as performed in the three participating laboratories cannot satisfy the Codex/EU proposed criteria for the method detection limit for monitoring lead in fish at the present regulation level of 0.5 mg x kg(-1). Modification of the standard method by chelation/concentration of the digest solution before FAAS analysis showed that the modified method has the potential to meet Codex/EU criteria on sensitivity, accuracy and precision at the specified regulation level.
Using a Mixed Model to Evaluate Job Satisfaction in High-Tech Industries.
Tsai, Sang-Bing; Huang, Chih-Yao; Wang, Cheng-Kuang; Chen, Quan; Pan, Jingzhou; Wang, Ge; Wang, Jingan; Chin, Ta-Chia; Chang, Li-Chung
2016-01-01
R&D professionals are the impetus behind technological innovation, and their competitiveness and capability drive the growth of a company. However, high-tech industries have a chronic shortage of such indispensable professionals. Accordingly, reducing R&D personnel turnover has become a major human resource management challenge facing innovative companies. This study combined importance-performance analysis (IPA) with the decision-making trial and evaluation laboratory (DEMATEL) method to propose an IPA-DEMATEL model. Establishing this model involved three steps. First, an IPA was conducted to measure the importance of and satisfaction gained from job satisfaction criteria. Second, the DEMATEL method was used to determine the causal relationships of and interactive influence among the criteria. Third, a criteria model was constructed to evaluate job satisfaction of high-tech R&D personnel. On the basis of the findings, managerial suggestions are proposed.
Absolute order-of-magnitude reasoning applied to a social multi-criteria evaluation framework
NASA Astrophysics Data System (ADS)
Afsordegan, A.; Sánchez, M.; Agell, N.; Aguado, J. C.; Gamboa, G.
2016-03-01
A social multi-criteria evaluation framework for solving a real-case problem of selecting a wind farm location in the regions of Urgell and Conca de Barberá in Catalonia (northeast of Spain) is studied. This paper applies a qualitative multi-criteria decision analysis approach based on linguistic label assessment that is able to address uncertainty and deal with different levels of precision. This method is based on qualitative reasoning as an artificial intelligence technique for assessing and ranking multi-attribute alternatives with linguistic labels in order to handle uncertainty. This method is suitable for problems in the social framework, such as energy planning, which require the construction of a dialogue process among many social actors and involve a high level of complexity and uncertainty. The method is compared with an existing approach, which has been applied previously in the wind farm location problem. This approach, consisting of an outranking method, is based on Condorcet's original method. The results obtained by both approaches are analysed and their performance in the selection of the wind farm location is compared in terms of aggregation procedures. Although results show that both methods lead to similar alternative rankings, the study highlights both their advantages and drawbacks.
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and in addition ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. More stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the evaluated test strip lots with accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on accuracy data obtained for a SMBG system. © 2015 Diabetes Technology Society.
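The ISO 15197:2013 system-accuracy check referred to above can be approximated as in the sketch below; the paired values are synthetic, and the ±15 mg/dL / ±15% limits with a 95% pass requirement reflect the commonly cited wording of the criterion and should be verified against the standard itself.

```python
import numpy as np

def iso15197_2013_pass_rate(reference, measured):
    """Fraction of results within +/-15 mg/dL (reference < 100 mg/dL)
    or +/-15% (reference >= 100 mg/dL) of the comparison method."""
    reference = np.asarray(reference, float)
    measured = np.asarray(measured, float)
    limit = np.where(reference < 100, 15.0, 0.15 * reference)
    within = np.abs(measured - reference) <= limit
    return within.mean()

rng = np.random.default_rng(3)
reference = rng.uniform(40, 400, 200)                     # comparison-method values (mg/dL)
measured = reference * rng.normal(1.0, 0.05, 200)         # synthetic SMBG readings

rate = iso15197_2013_pass_rate(reference, measured)
print(f"{rate:.1%} within limits ->",
      "meets" if rate >= 0.95 else "fails", "the 95% criterion")
```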
Park, Tae-Jin; Lee, Jong-Hyeon; Lee, Myung-Sung; Park, Chang-Hee; Lee, Chang-Hoon; Moon, Seong-Dae; Chung, Jiwoong; Cui, Rongxue; An, Youn-Joo; Yeom, Dong-Hyuk; Lee, Soo-Hyung; Lee, Jae-Kwan; Zoh, Kyung-Duk
2018-09-01
Ammonia is deemed one of the most important pollutants in the freshwater environment because of its highly toxic nature and ubiquity in surface water. This study thus aims to derive the criteria for ammonia in freshwater to protect aquatic life because there are no water quality criteria for ammonia in Korea. Short-term lethal tests were conducted to perform the species sensitivity distribution (SSD) method. This method is widely used in ecological risk assessment to determine the chemical concentrations to protect aquatic species. Based on the species sensitivity distribution method using Korean indigenous aquatic biota, the hazardous concentration for 5% of biological species (HC5) value calculated in this study was 44 mg/L as total ammonia nitrogen (TAN). The value of the assessment factor was set at 2. Consequently, the criteria for ammonia were derived as 22 mg/L at pH 7 and 20°C. When the derived value was applied to the monitoring data nationwide, 0.51%, 0.09%, 0.18%, 0.20%, and 0.35% of the monitoring sites in Han River, Nakdong River, Geum River, Youngsan River, and lakes throughout the nation, respectively, exceeded this criterion. The Ministry of Environment in Korea has been considering introducing a water quality standard for ammonia for protecting aquatic life. Therefore, our results can provide the basis for introducing the ammonia standard in Korea. Copyright © 2018 Elsevier B.V. All rights reserved.
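The HC5 step of the species sensitivity distribution method can be sketched as follows, assuming a log-normal SSD; the toxicity values are invented, while the study used Korean indigenous species and an assessment factor of 2.

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity values (mg TAN/L) for a set of test species
toxicity = np.array([35.0, 60.0, 88.0, 120.0, 150.0, 45.0, 210.0, 75.0, 95.0, 180.0])

# Fit a log-normal species sensitivity distribution
log_vals = np.log10(toxicity)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

# HC5: concentration hazardous to 5% of species, then apply an assessment factor
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
assessment_factor = 2.0
criterion = hc5 / assessment_factor
print(f"HC5 = {hc5:.1f} mg/L, criterion = {criterion:.1f} mg/L "
      f"(assessment factor {assessment_factor})")
```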
NASA Astrophysics Data System (ADS)
Wormanns, Dag; Klotz, Ernst; Dregger, Uwe; Beyer, Florian; Heindel, Walter
2004-05-01
Lack of angiogenesis virtually excludes malignancy of a pulmonary nodule; assessment with quantitative contrast-enhanced CT (QECT) requires a reliable enhancement measurement technique. Diagnostic performance of different measurement methods in the distinction between malignant and benign nodules was evaluated. QECT (unenhanced scan and 4 post-contrast scans) was performed in 48 pulmonary nodules (12 malignant, 12 benign, 24 indeterminate). Nodule enhancement was the difference between the highest nodule density on any post-contrast scan and the unenhanced scan. Enhancement was determined with: A) the standard 2D method; B) a 3D method consisting of segmentation, removal of peripheral structures and density averaging. Enhancement curves were evaluated for their plausibility using a predefined set of criteria. Sensitivity and specificity were 100% and 33% for the 2D method and 92% and 55% for the 3D method, respectively, using a threshold of 20 HU. One malignant nodule did not show significant enhancement with method B due to adjacent atelectasis which disappeared within the few minutes of the QECT examination. Better discrimination between benign and malignant lesions was achieved with a slightly higher threshold than proposed in the literature. Application of plausibility criteria to the enhancement curves revealed fewer plausibility faults with the 3D method. A new 3D method for analysis of QECT scans yielded fewer artefacts and better specificity in the discrimination between benign and malignant pulmonary nodules when using an appropriate enhancement threshold. Nevertheless, QECT results must be interpreted with care.
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with inexactness of input data represented as intervals, (2) mitigating computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives are considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
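A rough sketch of Monte Carlo sampling over interval-valued criteria follows, using a simple weighted-sum score rather than the full MCITA formulation; the intervals, weights and the cost-type treatment of all criteria are placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)
# Interval-valued performance of four management actions on three criteria
# (e.g. cost, residual concentration, health risk), lower is better for all.
lower = np.array([[2.1, 0.8, 1e-5],
                  [1.6, 1.2, 3e-5],
                  [2.8, 0.5, 8e-6],
                  [1.9, 1.0, 2e-5]])
upper = lower * np.array([1.3, 1.4, 1.5])      # illustrative interval widths
weights = np.array([0.4, 0.35, 0.25])

n_samples, n_actions = 10000, lower.shape[0]
best_counts = np.zeros(n_actions, int)
for _ in range(n_samples):
    X = rng.uniform(lower, upper)              # draw one crisp matrix from the intervals
    # Min-max normalize each criterion (all are costs), then weighted sum
    R = (X.max(axis=0) - X) / (X.max(axis=0) - X.min(axis=0))
    best_counts[np.argmax(R @ weights)] += 1

print("Probability of being the preferred action:", np.round(best_counts / n_samples, 3))
```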
Film-based delivery quality assurance for robotic radiosurgery: Commissioning and validation.
Blanck, Oliver; Masi, Laura; Damme, Marie-Christin; Hildebrandt, Guido; Dunst, Jürgen; Siebert, Frank-Andre; Poppinga, Daniela; Poppe, Björn
2015-07-01
Robotic radiosurgery demands comprehensive delivery quality assurance (DQA), but guidelines for commissioning of the DQA method are missing. We investigated the stability and sensitivity of our film-based DQA method with various test scenarios and routine patient plans. We also investigated the applicability of tight distance-to-agreement (DTA) Gamma-Index criteria. We used radiochromic films with multichannel film dosimetry and re-calibration, and our analysis was performed in four steps: 1) film-to-plan registration, 2) standard Gamma-Index criteria evaluation (local pixel dose difference ≤2%, distance-to-agreement ≤2 mm, pass rate ≥90%), 3) dose distribution shift until the maximum pass rate (Maxγ) was found (shift acceptance <1 mm), and 4) final evaluation with tight DTA criteria (≤1 mm). Test scenarios consisted of purposefully introduced phantom misalignments, dose miscalibrations, and undelivered MU. Initial method evaluation was done on 30 clinical plans. Our method showed sensitivity similar to the standard End-2-End test and incorporated an estimate of global system offsets in the analysis. The simulated errors (phantom shifts, global robot misalignment, undelivered MU) were detected by our method, while standard Gamma-Index criteria often did not reveal these deviations. Dose miscalibration was not detected by film alone; hence, simultaneous ion-chamber measurement for film calibration is strongly recommended. 83% of the clinical patient plans were within our tight DTA tolerances. The presented methods provide additional measurements and quality references for film-based DQA, enabling more sensitive error detection. We provided various test scenarios for commissioning of robotic radiosurgery DQA and demonstrated the necessity of using tight DTA criteria. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
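Below is a hedged sketch of a simplified 1D Gamma-Index pass-rate calculation; it uses global dose normalization for brevity, whereas the study uses local pixel dose differences, and it is not the authors' multichannel film workflow. Profiles and criterion values are illustrative.

    # Simplified 1D global Gamma-Index sketch (assumptions: reference and evaluated
    # dose profiles sampled on the same spatial grid; illustration only).
    import numpy as np

    def gamma_pass_rate(x_mm, dose_ref, dose_eval, dd=0.02, dta_mm=2.0):
        """Fraction of evaluated points with gamma <= 1
        (dose difference normalized globally to the maximum reference dose)."""
        dmax = dose_ref.max()
        gammas = []
        for xe, de in zip(x_mm, dose_eval):
            dist2 = ((x_mm - xe) / dta_mm) ** 2
            dose2 = ((dose_ref - de) / (dd * dmax)) ** 2
            gammas.append(np.sqrt(np.min(dist2 + dose2)))
        return np.mean(np.asarray(gammas) <= 1.0)

    x = np.arange(0.0, 50.0, 0.5)                       # positions in mm
    ref = np.exp(-((x - 25.0) / 8.0) ** 2)              # synthetic reference profile
    meas = np.exp(-((x - 25.6) / 8.0) ** 2)             # 0.6 mm shifted "measurement"
    print(gamma_pass_rate(x, ref, meas, dd=0.02, dta_mm=2.0))
    print(gamma_pass_rate(x, ref, meas, dd=0.02, dta_mm=1.0))  # tight DTA criterion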
Breast mass segmentation in mammography using plane fitting and dynamic programming.
Song, Enmin; Jiang, Luan; Jin, Renchao; Zhang, Lin; Yuan, Yuan; Li, Qiang
2009-07-01
Segmentation is an important and challenging task in a computer-aided diagnosis (CAD) system. Accurate segmentation could improve the accuracy of lesion detection and characterization. The objective of this study is to develop and test a new segmentation method that aims at improving the performance level of breast mass segmentation in mammography, which could be used to provide accurate features for classification. This automated segmentation method consists of two main steps and combines the edge gradient and pixel intensity, as well as the shape characteristics of the lesions, to achieve good segmentation results. First, a plane fitting method was applied to a background-trend corrected region-of-interest (ROI) of a mass to obtain the edge candidate points. Second, a dynamic programming technique was used to find the "optimal" contour of the mass from the edge candidate points. Area-based similarity measures based on the radiologist's manually marked annotation and the segmented region were employed as criteria to evaluate the performance level of the segmentation method. With these evaluation criteria, the new method was compared with 1) the dynamic programming method developed by Timp and Karssemeijer, and 2) the normalized cut segmentation method, based on 337 ROIs extracted from a publicly available image database. The experimental results indicate that our segmentation method can achieve a higher performance level than the other two methods, and the improvements in segmentation performance were statistically significant. For instance, the mean overlap percentage for the new algorithm was 0.71, whereas those for Timp's dynamic programming method and the normalized cut segmentation method were 0.63 (P < .001) and 0.61 (P < .001), respectively. We developed a new segmentation method by use of plane fitting and dynamic programming, which achieved a relatively high performance level. The new segmentation method would be useful for improving the accuracy of computerized detection and classification of breast cancer in mammography.
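As a small illustration of the area-based evaluation step, the sketch below computes an intersection-over-union style overlap between a segmented mask and a manual annotation on binary images; the exact overlap definition used in the study may differ.

    # Hedged sketch: an area-based overlap measure between a segmented mass region
    # and the radiologist's manual annotation, computed on boolean masks.
    import numpy as np

    def overlap_percentage(seg_mask, ref_mask):
        """Intersection area divided by union area of two boolean masks."""
        seg, ref = np.asarray(seg_mask, bool), np.asarray(ref_mask, bool)
        union = np.logical_or(seg, ref).sum()
        return np.logical_and(seg, ref).sum() / union if union else 1.0

    seg = np.zeros((100, 100), bool); seg[20:60, 20:60] = True
    ref = np.zeros((100, 100), bool); ref[25:65, 25:65] = True
    print(round(overlap_percentage(seg, ref), 3))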
The Airline Quality Rating 2001 (PDF file)
DOT National Transportation Integrated Search
2001-04-01
The Airline Quality Rating (AQR) was developed and first announced in early : 1991 as an objective method of comparing airline quality on combined multiple : performance criteria. This current report, Airline Quality Rating 2001, reflects monthly Air...
Carrera, Cristina; Marchetti, Michael A; Dusza, Stephen W; Argenziano, Giuseppe; Braun, Ralph P; Halpern, Allan C; Jaimes, Natalia; Kittler, Harald J; Malvehy, Josep; Menzies, Scott W; Pellacani, Giovanni; Puig, Susana; Rabinovitz, Harold S; Scope, Alon; Soyer, H Peter; Stolz, Wilhelm; Hofmann-Wellenhof, Rainer; Zalaudek, Iris; Marghoob, Ashfaq A
2016-07-01
The comparative diagnostic performance of dermoscopic algorithms and their individual criteria are not well studied. To analyze the discriminatory power and reliability of dermoscopic criteria used in melanoma detection and compare the diagnostic accuracy of existing algorithms. This was a retrospective, observational study of 477 lesions (119 melanomas [24.9%] and 358 nevi [75.1%]), which were divided into 12 image sets that consisted of 39 or 40 images per set. A link on the International Dermoscopy Society website from January 1, 2011, through December 31, 2011, directed participants to the study website. Data analysis was performed from June 1, 2013, through May 31, 2015. Participants included physicians, residents, and medical students, and there were no specialty-type or experience-level restrictions. Participants were randomly assigned to evaluate 1 of the 12 image sets. Associations with melanoma and intraclass correlation coefficients (ICCs) were evaluated for the presence of dermoscopic criteria. Diagnostic accuracy measures were estimated for the following algorithms: the ABCD rule, the Menzies method, the 7-point checklist, the 3-point checklist, chaos and clues, and CASH (color, architecture, symmetry, and homogeneity). A total of 240 participants registered, and 103 (42.9%) evaluated all images. The 110 participants (45.8%) who evaluated fewer than 20 lesions were excluded, resulting in data from 130 participants (54.2%), 121 (93.1%) of whom were regular dermoscopy users. Criteria associated with melanoma included marked architectural disorder (odds ratio [OR], 6.6; 95% CI, 5.6-7.8), pattern asymmetry (OR, 4.9; 95% CI, 4.1-5.8), nonorganized pattern (OR, 3.3; 95% CI, 2.9-3.7), border score of 6 (OR, 3.3; 95% CI, 2.5-4.3), and contour asymmetry (OR, 3.2; 95% CI, 2.7-3.7) (P < .001 for all). Most dermoscopic criteria had poor to fair interobserver agreement. Criteria that reached moderate levels of agreement included comma vessels (ICC, 0.44; 95% CI, 0.40-0.49), absence of vessels (ICC, 0.46; 95% CI, 0.42-0.51), dark brown color (ICC, 0.40; 95% CI, 0.35-0.44), and architectural disorder (ICC, 0.43; 95% CI, 0.39-0.48). The Menzies method had the highest sensitivity for melanoma diagnosis (95.1%) but the lowest specificity (24.8%) compared with any other method (P < .001). The ABCD rule had the highest specificity (59.4%). All methods had similar areas under the receiver operating characteristic curves. Important dermoscopic criteria for melanoma recognition were revalidated by participants with varied experience. Six algorithms tested had similar but modest levels of diagnostic accuracy, and the interobserver agreement of most individual criteria was poor.
Karagiannidis, A; Perkoulidis, G
2009-04-01
This paper describes a conceptual framework and methodological tool developed for the evaluation of different anaerobic digestion technologies suitable for treating the organic fraction of municipal solid waste, by introducing the multi-criteria decision support method Electre III and demonstrating its applicability via a test application. Several anaerobic digestion technologies have been proposed over the last years; compared to biogas recovery from landfills, their advantages are stable biogas production and the stabilization of waste prior to final disposal. Anaerobic digestion technologies also show great adaptability to a broad spectrum of input materials beside the organic fraction of municipal solid waste (e.g., agricultural and animal wastes, sewage sludge) and can also be used in remote and isolated communities, either stand-alone or in conjunction with other renewable energy sources. The main driver for this work was the preliminary screening of such methods for potential application in the municipal solid waste management sector of Hellenic islands. Anaerobic digestion technologies follow different approaches to the anaerobic digestion process and can also include the production of compost. In the presented multi-criteria analysis exercise, Electre III is implemented for comparing and ranking five selected alternative anaerobic digestion technologies. The results of a sensitivity analysis are then discussed. In conclusion, the multi-criteria approach was found to be a practical and feasible method for the integrated assessment and ranking of anaerobic digestion technologies that also considers different viewpoints and other uncertainties of the decision-making process.
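As an illustration of one Electre III building block, the sketch below computes the per-criterion concordance index with indifference (q) and preference (p) thresholds; scores and thresholds are hypothetical, and a full Electre III ranking additionally requires discordance, credibility, and distillation steps not shown here.

    # Illustrative sketch of a single Electre III ingredient: the per-criterion
    # concordance index for "a outranks b" on a maximizing criterion, with
    # indifference (q) and preference (p) thresholds. Values are hypothetical.

    def concordance(g_a, g_b, q, p):
        """Degree to which 'a outranks b' is supported on one maximizing criterion."""
        if g_a >= g_b - q:
            return 1.0
        if g_a <= g_b - p:
            return 0.0
        return (g_a + p - g_b) / (p - q)      # linear interpolation between thresholds

    # Example: biogas yield criterion (to maximize), q = 5, p = 15
    print(concordance(80.0, 90.0, q=5.0, p=15.0))   # partial support: 0.5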
The Inclusion of In-Plane Stresses in Delamination Criteria
NASA Technical Reports Server (NTRS)
Vizzini, Anthony J.; Fenske, Matthew T.
1998-01-01
A study of delamination is performed, including strength-of-materials and fracture mechanics approaches, with emphasis placed on methods of delamination prediction. Evidence is presented which supports the inclusion of the in-plane stresses, in addition to the interlaminar stress terms, in delamination criteria. The delamination can be modeled as a resin-rich region between ply sets. The entire six-component stress state in this resin layer is calculated through a finite element analysis and input into a new Modified Von Mises Delamination Criterion (MVMDC). This criterion builds on previous criteria by including all six stress components. The MVMDC shows improved correlation with experimental data.
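For orientation, the sketch below computes the classical von Mises equivalent stress from all six stress components of the resin layer; it is not the paper's Modified Von Mises Delamination Criterion itself, only the standard six-component quantity such a criterion builds on, with illustrative stress values.

    # Hedged sketch: classical von Mises equivalent stress from all six stress
    # components (three normal, three shear). Stress values are illustrative.
    import math

    def von_mises(sx, sy, sz, txy, tyz, tzx):
        return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                         + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

    # Example stress state in the resin layer (MPa)
    print(round(von_mises(40.0, 10.0, 5.0, 12.0, 8.0, 3.0), 2))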
NASA Technical Reports Server (NTRS)
Ferrenberg, A.; Hunt, K.; Duesberg, J.
1985-01-01
The primary objective was to obtain atomization and mixing performance data for a variety of typical liquid oxygen/hydrocarbon injector element designs. Such data are required to establish injector design criteria and to provide critical inputs to liquid rocket engine combustor performance and stability analyses, as well as to computational codes and methods. Deficiencies and problems with the atomization test equipment were identified, and action was initiated to resolve them. Results of the gas/liquid mixing tests indicated that an assessment of test methods was required. A series of 71 liquid/liquid tests was performed.
Evaluation of Various Depainting Processes on Mechanical Properties of 2024-T3 Aluminum Substrate
NASA Technical Reports Server (NTRS)
McGill, P.
2001-01-01
Alternate alkaline and neutral chemical paint strippers have been identified that, with respect to corrosion requirements, perform as well as or better than a methylene chloride baseline. These chemicals also, in general, meet corrosion acceptance criteria as specified in SAE MA 4872. Alternate acid chemical paint strippers have been identified that, with respect to corrosion requirements, perform as well as or better than a methylene chloride baseline. However, these chemicals do not generally meet corrosion acceptance criteria as specified in SAE MA 4872, especially in the areas of non-clad material performance and hydrogen embrittlement. Media blast methods reviewed in the study do not, in general, adversely affect fatigue performance or crack detectability of 2024-T3 substrate. Sodium bicarbonate stripping exhibited a tendency towards inhibiting crack detectability. These generalizations are based on a limited sample size and additional testing should be performed to characterize the response of specific substrates to specific processes.
NASA Astrophysics Data System (ADS)
Chen, X.; Kumar, M.; Basso, S.; Marani, M.
2017-12-01
Storage-discharge (S-Q) relations are widely used to derive watershed properties and predict streamflow responses. These relations are often obtained using different recession analysis methods, which vary in their recession period identification criteria and their Q vs. -dQ/dt fitting scheme. Although previous studies have indicated that different recession analysis methods can result in significantly different S-Q relations and subsequently derived hydrological variables, this observation has often been overlooked and S-Q relations have been used in as-is form. This study evaluated the effectiveness of four recession analysis methods in obtaining the characteristic S-Q relation and reconstructing the streamflow. Results indicate that while some methods generally performed better than others, none of them consistently outperformed the others. Even the best-performing method could not yield accurate reconstructed streamflow time series and their PDFs in some watersheds, implying either that the derived S-Q relations might not be reliable or that S-Q relations cannot be used for hydrological simulations. Notably, the accuracy of the methods is influenced by the extent of scatter in the ln(-dQ/dt) vs. ln(Q) plot. In addition, the derived S-Q relation was very sensitive to the criteria used for identifying recession periods. This result raises a warning sign against indiscriminate application of recession analysis methods and derived S-Q relations for watershed characterization or hydrologic simulations. A thorough evaluation of the representativeness of the derived S-Q relation should be performed before it is used for hydrologic analysis.
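A hedged sketch of one common recession-analysis step is given below: fitting -dQ/dt = a*Q^b by least squares in log-log space. The recession-period identification criteria, whose influence the study highlights, are not implemented; the input is assumed to be a single recession event sampled daily.

    # Hedged sketch: fit -dQ/dt = a * Q^b by ordinary least squares in
    # ln(-dQ/dt) vs ln(Q) space for one recession event at a daily time step.
    import numpy as np

    def fit_power_law_recession(q):
        q = np.asarray(q, float)
        dqdt = np.diff(q)                         # daily dQ/dt
        qm = 0.5 * (q[1:] + q[:-1])               # midpoint discharge
        keep = dqdt < 0                           # recession points only
        b, ln_a = np.polyfit(np.log(qm[keep]), np.log(-dqdt[keep]), 1)
        return np.exp(ln_a), b                    # a, b in -dQ/dt = a Q^b

    q = 10.0 * np.exp(-0.05 * np.arange(30))      # synthetic exponential recession
    a, b = fit_power_law_recession(q)
    print(round(a, 4), round(b, 2))               # b near 1 for linear-reservoir behaviour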
Determination of Dornic Acidity as a Method to Select Donor Milk in a Milk Bank
Garcia-Lara, Nadia Raquel; Escuder-Vieco, Diana; Chaves-Sánchez, Fernando; De la Cruz-Bertolo, Javier; Pallas-Alonso, Carmen Rosa
2013-01-01
Background: Dornic acidity may be an indirect measurement of milk's bacteria content and its quality. There are no uniform criteria among different human milk banks on milk acceptance. The main aim of this study is to report the correlation between Dornic acidity and bacterial growth in donor milk in order to validate the Dornic acidity value as an adequate method to select milk prior to its pasteurization. Materials and Methods: From 105 pools, 4-mL samples of human milk were collected. Dornic acidity measurement and cultures on blood agar and MacConkey agar were performed. Based on Dornic acidity degrees, we classified milk into three quality categories: top quality (acidity <4°D), intermediate (acidity between 4°D and 7°D), and milk unsuitable to be consumed (acidity ≥8°D). Spearman's correlation coefficient was used for the statistical analysis. Results: Seventy percent of the samples had Dornic acidity under 4°D, and 88% had a value under 8°D. A weak positive correlation was observed between the bacterial growth in milk and Dornic acidity. The overall discrimination performance of Dornic acidity was higher for predicting growth of Gram-negative organisms. A Dornic acidity cutoff of ≥4°D had a sensitivity of 100% for detecting all samples with Gram-negative bacterial growth above 10^5 colony-forming units/mL. Conclusions: The correlation between Dornic acidity and bacterial growth in donor milk is weak but positive. The measurement of Dornic acidity could be considered a simple and economical method to select milk to pasteurize in a human milk bank based on quality and safety criteria. PMID:23373435
Apollo/Skylab suit program management systems study. Volume 2: Cost analysis
NASA Technical Reports Server (NTRS)
1974-01-01
The business management methods employed in the performance of the Apollo-Skylab Suit Program are studied. The data accumulated over the span of the contract, as well as the methods used to accumulate the data, are examined. Management methods associated with the monitoring and control of resources applied towards the performance of the contract are also studied, and recommendations are made. The primary objective is the compilation, analysis, and presentation of historical cost performance criteria. Cost data are depicted for all phases of the Apollo-Skylab program in common, meaningful terms, whereby the data may be applicable to future suit program planning efforts.
Comparison of outlier identification methods in hospital surgical quality improvement programs.
Bilimoria, Karl Y; Cohen, Mark E; Merkow, Ryan P; Wang, Xue; Bentrem, David J; Ingraham, Angela M; Richards, Karen; Hall, Bruce L; Ko, Clifford Y
2010-10-01
Surgeons and hospitals are being increasingly assessed by third parties regarding surgical quality and outcomes, and much of this information is reported publicly. Our objective was to compare various methods used to classify hospitals as outliers in established surgical quality assessment programs by applying each approach to a single data set. Using American College of Surgeons National Surgical Quality Improvement Program data (7/2008-6/2009), hospital risk-adjusted 30-day morbidity and mortality were assessed for general surgery at 231 hospitals (cases = 217,630) and for colorectal surgery at 109 hospitals (cases = 17,251). The number of outliers (poor performers) identified using different methods and criteria was compared. The overall morbidity was 10.3% for general surgery and 25.3% for colorectal surgery. The mortality was 1.6% for general surgery and 4.0% for colorectal surgery. Programs used different methods (logistic regression, hierarchical modeling, partitioning) and criteria (P < 0.01, P < 0.05, P < 0.10) to identify outliers. Depending on the outlier identification methods and criteria employed, when each approach was applied to this single dataset, the number of outliers ranged from 7 to 57 hospitals for general surgery morbidity, 1 to 57 hospitals for general surgery mortality, 4 to 27 hospitals for colorectal morbidity, and 0 to 27 hospitals for colorectal mortality. There was considerable variation in the number of outliers identified using different detection approaches. Quality programs seem to be utilizing outlier identification methods contrary to what might be expected; thus, they should justify their methodology based on the intent of the program (i.e., quality improvement vs. reimbursement). Surgeons and hospitals should be aware of the variability in methods used to assess their performance, as these outlier designations will likely have referral and reimbursement consequences.
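As a simplified illustration of how the outlier criterion changes the count of flagged hospitals (not the methodology of any particular program), the sketch below flags hospitals whose observed event counts exceed risk-adjusted expectations using a normal approximation, at three P-value criteria; the hospital counts are fabricated.

    # Simplified illustration: flag hospitals whose observed morbidity exceeds the
    # risk-adjusted expectation, using a one-sided normal approximation to the
    # binomial, at different P-value criteria. Counts are fabricated.
    from math import sqrt, erf

    def one_sided_p(observed, expected, n):
        """Approximate P(observed or more events | expected event rate)."""
        p = expected / n
        z = (observed - expected) / sqrt(n * p * (1 - p))
        return 0.5 * (1 - erf(z / sqrt(2)))

    hospitals = {"A": (130, 100, 1000), "B": (118, 110, 1000), "C": (26, 20, 150)}
    for crit in (0.01, 0.05, 0.10):
        outliers = [h for h, (obs, exp, n) in hospitals.items()
                    if one_sided_p(obs, exp, n) < crit]
        print(crit, outliers)   # the flagged set grows as the criterion is relaxed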
Preusser, Matthias; Berghoff, Anna S.; Manzl, Claudia; Filipits, Martin; Weinhäusel, Andreas; Pulverer, Walter; Dieckmann, Karin; Widhalm, Georg; Wöhrer, Adelheid; Knosp, Engelbert; Marosi, Christine; Hainfellner, Johannes A.
2014-01-01
Testing of the MGMT promoter methylation status in glioblastoma is relevant for clinical decision making and research applications. Two recent and independent phase III therapy trials confirmed a prognostic and predictive value of the MGMT promoter methylation status in elderly glioblastoma patients. Several methods for MGMT promoter methylation testing have been proposed but seem to be of limited test reliability. Therefore, and also for feasibility reasons, translation of MGMT methylation testing into routine use has so far been protracted. Pyrosequencing after prior DNA bisulfite modification has emerged as a reliable, accurate, fast and easy-to-use method for MGMT promoter methylation testing in tumor tissues (including formalin-fixed and paraffin-embedded samples). We performed an intra- and inter-laboratory ring trial that demonstrated a high analytical performance of this technique. Thus, pyrosequencing-based assessment of MGMT promoter methylation status in glioblastoma meets the criteria of high analytical test performance and can be recommended for clinical application, provided that strict quality control is performed. Our article summarizes clinical indications, practical instructions and open issues for MGMT promoter methylation testing in glioblastoma using pyrosequencing. PMID:24359605
Minimization of annotation work: diagnosis of mammographic masses via active learning
NASA Astrophysics Data System (ADS)
Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu
2018-06-01
The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of labeling costs on the premise of guaranteed performance. Our proposed method differs from existing active learning methods designed for the general problem, as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate this as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the Digital Database for Screening Mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression tasks fell to 33.8 and 19.8 percent of their original costs, respectively. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to other state-of-the-art active learning algorithms. By taking the particularities of mammographic images into account, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.
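For context, the sketch below shows one generic active-learning ingredient, uncertainty sampling by maximum entropy; the paper's method additionally exploits paired views and ordinal-regression query criteria with rank aggregation, which are not reproduced here.

    # Hedged sketch of generic uncertainty sampling: query the unlabeled case whose
    # predicted class probabilities have maximum entropy. Probabilities are toy data.
    import numpy as np

    def query_most_uncertain(probas):
        """probas: (n_samples, n_classes) predicted probabilities; index to label next."""
        p = np.clip(np.asarray(probas, float), 1e-12, 1.0)
        entropy = -(p * np.log(p)).sum(axis=1)
        return int(np.argmax(entropy))

    pool = np.array([[0.95, 0.05],     # confident benign
                     [0.55, 0.45],     # uncertain -> worth annotating
                     [0.10, 0.90]])    # confident malignant
    print(query_most_uncertain(pool))  # 1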
Using a Mixed Model to Evaluate Job Satisfaction in High-Tech Industries
Tsai, Sang-Bing; Huang, Chih-Yao; Wang, Cheng-Kuang; Chen, Quan; Pan, Jingzhou; Wang, Ge; Wang, Jingan; Chin, Ta-Chia; Chang, Li-Chung
2016-01-01
R&D professionals are the impetus behind technological innovation, and their competitiveness and capability drive the growth of a company. However, high-tech industries have a chronic shortage of such indispensable professionals. Accordingly, reducing R&D personnel turnover has become a major human resource management challenge facing innovative companies. This study combined importance–performance analysis (IPA) with the decision-making trial and evaluation laboratory (DEMATEL) method to propose an IPA–DEMATEL model. Establishing this model involved three steps. First, an IPA was conducted to measure the importance of and satisfaction gained from job satisfaction criteria. Second, the DEMATEL method was used to determine the causal relationships of and interactive influence among the criteria. Third, a criteria model was constructed to evaluate job satisfaction of high-tech R&D personnel. On the basis of the findings, managerial suggestions are proposed. PMID:27139697
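A hedged sketch of the standard DEMATEL computation used within such an IPA-DEMATEL model is shown below: the direct-influence matrix is normalized, the total-influence matrix T = N(I - N)^(-1) is formed, and row/column sums give prominence (D + R) and net cause/effect (D - R). The 3x3 matrix is illustrative, not the study's survey data.

    # Hedged sketch of the DEMATEL total-influence computation with a toy matrix.
    import numpy as np

    A = np.array([[0.0, 3.0, 2.0],        # expert-rated direct influence (0-4 scale)
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])
    N = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalization
    T = N @ np.linalg.inv(np.eye(len(A)) - N)               # total-influence matrix
    D, R = T.sum(axis=1), T.sum(axis=0)
    print(np.round(D + R, 3))   # prominence of each criterion
    print(np.round(D - R, 3))   # positive = net cause, negative = net effect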
2012-01-01
Background: Studies investigating the outcome of conservative scoliosis treatment differ widely with respect to the inclusion criteria used. This study was performed to investigate the possibility of finding useful inclusion criteria for future prospective studies on physiotherapy (PT). Materials and methods: A PubMed search for outcome papers on PT was performed in order to identify the study designs and inclusion criteria used. Results: Real outcome papers (start of treatment in immature samples with end results after the end of growth; controlled studies in adults with scoliosis with a follow-up of more than 5 years) were not found. Some papers investigated mid-term effects of exercises; most were retrospective, few were prospective, and many included patient samples with questionable treatment indications. Conclusion: There is no outcome paper on PT in scoliosis with a patient sample at risk of progression, either in adults or in adolescents followed from premenarchial status until skeletal maturity. However, papers on bracing are more frequently found, and bracing can be regarded as evidence-based in the conservative management and rehabilitation of idiopathic scoliosis in adolescents. PMID:22277541
Liu, Yupeng; Chen, Yifei; Tzeng, Gwo-Hshiung
2017-09-01
As a new application technology of the Internet of Things (IoT), intelligent medical treatment has attracted the attention of both nations and industries through its promotion of medical informatisation, modernisation, and intelligentisation. Faced with a wide variety of intelligent medical terminals, consumers may be affected by various factors when making purchase decisions. The objective was to examine and evaluate the key influential factors (and their interrelationships) of consumer adoption behavior in order to improve and promote intelligent medical terminals toward achieving the set aspiration level in each dimension and criterion. A hybrid modified Multiple Attribute Decision-Making (MADM) model was used for this study, based on three components: (1) the Decision-Making Trial and Evaluation Laboratory (DEMATEL) technique, to build an influential network relationship map (INRM) at both the 'dimensions' and 'criteria' levels; (2) the DEMATEL-based analytic network process (DANP) method, to determine the interrelationships and influential weights among the criteria and identify the source-influential factors; and (3) the modified Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method, to evaluate and reduce the performance gaps so as to meet consumers' needs for continuous improvement and sustainable product development. First, a consensus on the influential factors affecting consumers' adoption of intelligent medical terminals was collected from experts with practical experience. Next, the interrelationships among dimensions/criteria and the DANP influential weights were determined on the basis of the DEMATEL technique. Finally, two intelligent medicine bottles (AdhereTech, alternative A1; and Audio/Visual Alerting Pillbox, alternative A2) were reviewed as terminal devices to verify the accuracy of the MADM model and to evaluate their performance on each criterion, systematically improving the total gaps according to the modified VIKOR method based on the INRM. In this paper, the criteria and dimensions used in the evaluation framework are validated. The systematic evaluation index system is constructed on the basis of five dimensions and ten corresponding criteria. The influential weights of the criteria range from 0.037 to 0.152, indicating the ranking of criteria importance. The evaluation framework was validated synthetically and scientifically. The INRM (influential network relation map), obtained from expert opinion through the DEMATEL technique, shows complex interrelationships among factors. At the dimension level, the environmental dimension influences the other dimensions the most, whereas the security dimension is most influenced by the others; thus, improvement of the environmental dimension should take priority over the security dimension. The newly constructed approach was further validated by the results of the empirical case, where performance-gap improvement strategies were analyzed for decision-makers. The modified VIKOR method was shown to be suitable for solving real-world problems in intelligent medical terminal improvement processes. In this case, A1 performs better than A2; however, promotion mix, brand factor, and market environment are shortcomings of both A1 and A2. In addition, A2 should be improved in the wireless network technology and objective contact criteria, which show a high degree of gap.
Based on the evaluation index system and the integrated model proposed here, decision-makers in enterprises can identify gaps when promoting intelligent medical terminals, from which they can get valuable advice to improve consumer adoption. Additionally, an INRM and the influential weights of DANP can be combined using the modified VIKOR method as integrated weightings to determine how to reduce gaps and provide the best improvement strategies for reaching set aspiration levels. Copyright © 2017 Elsevier B.V. All rights reserved.
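As an illustration of the gap-oriented ranking step, the sketch below runs a basic VIKOR-style calculation for two alternatives labeled A1 and A2; the paper uses a modified VIKOR with aspiration/worst levels and DANP influential weights, so the scores, weights, and best/worst values here are purely hypothetical.

    # Hedged sketch of a basic VIKOR-style gap calculation with made-up data.
    import numpy as np

    scores = np.array([[7.0, 6.0, 8.0],        # A1 performance on each criterion
                       [6.0, 7.0, 5.0]])       # A2
    weights = np.array([0.5, 0.3, 0.2])        # e.g. DANP influential weights
    f_best, f_worst = scores.max(axis=0), scores.min(axis=0)   # benefit criteria

    gap = (f_best - scores) / (f_best - f_worst)               # normalized gaps
    S = (weights * gap).sum(axis=1)            # group utility (average weighted gap)
    R = (weights * gap).max(axis=1)            # maximal individual regret
    v = 0.5
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    print(np.round(S, 3), np.round(R, 3), np.round(Q, 3))      # lower is better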
A fuzzy MCDM approach for evaluating school performance based on linguistic information
NASA Astrophysics Data System (ADS)
Musani, Suhaina; Jemain, Abdul Aziz
2013-11-01
Decision making is the process of finding the best option among the feasible alternatives. This process should consider a variety of criteria, but this study focuses only on academic achievement. The data used are the percentages of candidates who obtained the Malaysian Certificate of Education (SPM) in Melaka, based on school academic achievement in each subject. The 57 secondary schools in Melaka listed by the Ministry of Education were involved in this study. School ranking can therefore be performed using MCDM (Multi Criteria Decision Making) methods. The objective of this study is to develop a rational method for evaluating school performance based on linguistic information. Since the information on the level of academic achievement is provided in a linguistic manner, it may be incomplete or uncertain. To overcome this, the information can be represented as fuzzy numbers, since fuzzy sets capture the uncertainty in human perceptions. In this research, VIKOR (Multi Criteria Optimization and Compromise Solution) has been used as an MCDM tool for the school ranking process in a fuzzy environment. Results showed that fuzzy set theory can address the limitations of MCDM when uncertainty exists in the data.
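A small hedged sketch of one common preprocessing step for such fuzzy rankings is given below: linguistic achievement levels are represented as triangular fuzzy numbers and defuzzified by the centroid. The linguistic scale is an assumption, not the scale used in the study.

    # Hedged sketch: linguistic grades as triangular fuzzy numbers plus centroid
    # defuzzification, a typical step before a fuzzy VIKOR ranking.
    linguistic_scale = {            # (low, mode, high) triangular fuzzy numbers
        "very poor": (0.0, 0.0, 0.25),
        "poor":      (0.0, 0.25, 0.5),
        "fair":      (0.25, 0.5, 0.75),
        "good":      (0.5, 0.75, 1.0),
        "excellent": (0.75, 1.0, 1.0),
    }

    def centroid(tfn):
        low, mode, high = tfn
        return (low + mode + high) / 3.0    # centroid of a triangular fuzzy number

    school_grades = ["good", "fair", "excellent"]          # one school, three subjects
    print([round(centroid(linguistic_scale[g]), 3) for g in school_grades])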
Optimizing Eco-Efficiency Across the Procurement Portfolio.
Pelton, Rylie E O; Li, Mo; Smith, Timothy M; Lyon, Thomas P
2016-06-07
Manufacturing organizations' environmental impacts are often attributable to processes in the firm's upstream supply chain. Environmentally preferable procurement (EPP) and the establishment of environmental purchasing criteria can potentially reduce these indirect impacts. Life-cycle assessment (LCA) can help identify the purchasing criteria that are most effective in reducing environmental impacts. However, the high costs of LCA and the problems associated with the comparability of results have limited efforts to integrate procurement performance with quantitative organizational environmental performance targets. Moreover, environmental purchasing criteria, when implemented, are often established on a product-by-product basis without consideration of other products in the procurement portfolio. We develop an approach that utilizes streamlined LCA methods, together with linear programming, to determine optimal portfolios of product impact-reduction opportunities under budget constraints. The approach is illustrated through a simulated breakfast cereal manufacturing firm procuring grain, containerboard boxes, plastic packaging, electricity, and industrial cleaning solutions. Results suggest that extending EPP decisions and resources to the portfolio level, recently made feasible through the methods illustrated herein, can provide substantially greater CO2e and water-depletion reductions per dollar spend than a product-by-product approach, creating opportunities for procurement organizations to participate in firm-wide environmental impact reduction targets.
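The sketch below illustrates the portfolio idea with a toy linear program: choose fractions of candidate impact-reduction actions to maximize CO2e reduction under a budget constraint. Action names, costs, and reductions are fabricated, and a fuller model would include water depletion and other constraints.

    # Hedged sketch: budget-constrained selection of impact-reduction actions via LP.
    from scipy.optimize import linprog

    costs = [40.0, 25.0, 60.0, 15.0]          # $k per action (grain, boxes, plastic, energy)
    co2e = [120.0, 60.0, 150.0, 30.0]         # tCO2e reduction per fully funded action
    budget = 80.0

    # Maximize sum(co2e_i * x_i)  ->  minimize -co2e . x, with 0 <= x_i <= 1
    res = linprog(c=[-r for r in co2e],
                  A_ub=[costs], b_ub=[budget],
                  bounds=[(0.0, 1.0)] * len(costs),
                  method="highs")
    print([round(x, 2) for x in res.x], round(-res.fun, 1))   # funded fractions, total tCO2e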
Parmar, Jayesh R.; Purnell, Miriam; Lang, Lynn A.
2016-01-01
Objective. To determine the ability of the University of Maryland Eastern Shore School of Pharmacy's admissions criteria to predict students' academic performance in a 3-year pharmacy program and to analyze transferability to African-American students. Methods. Statistical analyses were conducted on retrospective data for 174 students. Didactic and experiential scores were used as measures of academic performance. Results. Pharmacy College Admission Test (PCAT), grade point average (GPA), interview, and observational scores combined with previous pharmacy experience and biochemistry coursework predicted the students' academic performance except for second-year (P2) experiential performance. For African-American students, didactic performance positively correlated with the PCAT writing subtests, while experiential performance positively correlated with previous pharmacy experience and observational score. For non-African-American students, didactic performance positively correlated with the PCAT multiple-choice subtests, and experiential performance with interview score. The prerequisite GPA positively correlated with didactic performance in both student subgroups. Conclusion. Both PCAT and GPA were predictors of didactic performance, especially in non-African-Americans. Pharmacy experience and observational scores were predictors of experiential performance, especially in African-Americans. PMID:26941432
Mühlbacher, Axel C; Kaczynski, Anika
2016-02-01
Healthcare decision making is usually characterized by a low degree of transparency. The demand for transparent decision processes can be fulfilled only when assessment, appraisal and decisions about health technologies are performed under a systematic construct of benefit assessment. The benefit of an intervention is often multidimensional and, thus, must be represented by several decision criteria. Complex decision problems require an assessment and appraisal of various criteria; therefore, a decision process that systematically identifies the best available alternative and enables an optimal and transparent decision is needed. For that reason, decision criteria must be weighted and goal achievement must be scored for all alternatives. Methods of multi-criteria decision analysis (MCDA) are available to analyse and appraise multiple clinical endpoints and structure complex decision problems in healthcare decision making. By means of MCDA, value judgments, priorities and preferences of patients, insurees and experts can be integrated systematically and transparently into the decision-making process. This article describes the MCDA framework and identifies potential areas where MCDA can be of use (e.g. approval, guidelines and reimbursement/pricing of health technologies). A literature search was performed to identify current research in healthcare. The results showed that healthcare decision making is addressing the problem of multiple decision criteria and is focusing on the future development and use of techniques to weight and score different decision criteria. This article emphasizes the use and future benefit of MCDA.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation under the International Organization for Standardization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The different guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Moreover, the required performance characteristic tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by ETAAS. In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
The need for performance criteria in evaluating the durability of wood products
Stan Lebow; Bessie Woodward; Patricia Lebow; Carol Clausen
2010-01-01
Data generated from wood-product durability evaluations can be difficult to interpret. Standard methods used to evaluate the potential long-term durability of wood products often provide little guidance on interpretation of test results. Decisions on acceptable performance for standardization and code compliance are based on the judgment of reviewers or committees....
Code of Federal Regulations, 2010 CFR
2010-10-01
..., etc.) shall be designed against acting as passageways for fire and smoke and representative... structural flooring assembly to perform as a barrier against under-vehicle fires. The fire resistance period... Flammability and Smoke Emission Characteristics of Materials Used in Passenger Cars and Locomotive Cabs B...
Schmitt, Jochen; Lange, Toni; Günther, Klaus-Peter; Kopkow, Christian; Rataj, Elisabeth; Apfelbacher, Christian; Aringer, Martin; Böhle, Eckhardt; Bork, Hartmut; Dreinhöfer, Karsten; Friederich, Niklaus; Frosch, Karl-Heinz; Gravius, Sascha; Gromnica-Ihle, Erika; Heller, Karl-Dieter; Kirschner, Stephan; Kladny, Bernd; Kohlhof, Hendrik; Kremer, Michael; Leuchten, Nicolai; Lippmann, Maike; Malzahn, Jürgen; Meyer, Heiko; Sabatowski, Rainer; Scharf, Hanns-Peter; Stoeve, Johannes; Wagner, Richard; Lützner, Jörg
2017-10-01
Background and Objectives: Knee osteoarthritis (OA) is a significant public health burden. Rates of total knee arthroplasty (TKA) in OA vary substantially between geographical regions, most likely due to the lack of standardised indication criteria. We set out to define indication criteria for the German healthcare system for TKA in patients with knee OA, on the basis of best evidence and transparent multi-stakeholder consensus. Methods: We undertook a complex mixed methods study, including an iterative process of systematic appraisal of existing evidence, Delphi consensus methods and stakeholder conferences. We established a consensus panel representing key German national societies of healthcare providers (orthopaedic surgeons, rheumatologists, pain physicians, psychologists, physiotherapists), payers, and patient representatives. A priori defined consensus criteria were at least 70% agreement and less than 20% disagreement among the consensus panel. Agreement was sought for (1) core indication criteria, defined as criteria that must be met to consider TKA in a normal patient with knee OA, (2) additional (not obligatory) indication criteria, (3) absolute contraindication criteria that generally prohibit TKA, and (4) risk factors that do not prohibit TKA, but usually do not lead to a recommendation for TKA. Results: The following 5 core indication criteria were agreed within the panel: 1. intermittent (several times per week) or constant knee pain for at least 3-6 months; 2. radiological confirmation of structural knee damage (osteoarthritis, osteonecrosis); 3. inadequate response to conservative treatment, including pharmacological and non-pharmacological treatment, for at least 3-6 months; 4. adverse impact of knee disease on the patient's quality of life for at least 3-6 months; 5. patient-reported suffering/impairment due to knee disease. Additional indication criteria, contraindication criteria, and risk factors for adverse outcome were also agreed by a large majority within the multi-perspective stakeholder panel. Conclusion: The defined indication criteria constitute a prerequisite for appropriate provision of TKA in patients with knee OA in Germany. In eligible patients, shared decision making should eventually determine if TKA is performed or not. The next important steps are the implementation of the defined indication criteria and the prospective investigation of predictors of success or failure of TKA in the context of routine care provision in Germany. Georg Thieme Verlag KG Stuttgart · New York.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are, first, to develop accurate methods for estimating the parameters of the model at a fixed order and, then, to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating the model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. The performance is reported at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidence in the estimated parameters improves as the SNR of the fitted response increases. Considering model selection, results show that information criteria are suitable statistical criteria for selecting the number of exponentials.
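A hedged sketch of the model family and an AIC-style comparison is given below: a mono-exponential VO2 on-kinetics model with offset and time delay is fitted by ordinary least squares and scored from the residual sum of squares; the study's simulated-annealing estimator and its exact information criteria are not reproduced.

    # Hedged sketch: mono-exponential on-kinetics model, least-squares fit, and an
    # AIC-style score from the residual sum of squares. Parameter values are toy data.
    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp(t, baseline, amplitude, delay, tau):
        """VO2(t) = baseline + amplitude * (1 - exp(-(t - delay)/tau)) for t >= delay."""
        return baseline + amplitude * (1.0 - np.exp(-(t - delay) / tau)) * (t >= delay)

    def aic(rss, n, k):
        return n * np.log(rss / n) + 2 * k

    t = np.arange(0.0, 300.0, 5.0)                           # seconds
    y = mono_exp(t, 0.8, 1.5, 15.0, 30.0) + np.random.normal(0.0, 0.05, t.size)

    popt, _ = curve_fit(mono_exp, t, y, p0=[0.7, 1.0, 10.0, 25.0])
    rss = np.sum((y - mono_exp(t, *popt)) ** 2)
    print(np.round(popt, 2), round(aic(rss, t.size, k=len(popt)), 1))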
23 CFR 636.205 - Can past performance be used as an evaluation criteria?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Can past performance be used as an evaluation criteria... past performance be used as an evaluation criteria? (a) Yes, past performance information is one... used as an evaluation criteria in either phase-one or phase-two solicitations. If you elect to use past...
ACL Return to Sport Guidelines and Criteria.
Davies, George J; McCarty, Eric; Provencher, Matthew; Manske, Robert C
2017-09-01
Because of the epidemiological incidence of anterior cruciate ligament (ACL) injuries, the high reinjury rates that occur when returning to sports, the limited number of patients who return to the same premorbid level of competition, the high incidence of osteoarthritis at 5-10-year follow-ups, and the effects on the long-term health of the knee and the quality of life of the patient, individualizing the return to sports after ACL reconstruction (ACL-R) is critical. However, one of the challenging but unsolved dilemmas is what criteria and clinical decision making should be used to return an athlete to sports following ACL-R. This article describes an example of a functional testing algorithm (FTA) as one method of clinical decision making, based on quantitative and qualitative testing and assessment, used to make informed decisions to return an athlete to sport safely and without compromised performance. The methods were a review of the best current evidence to support an FTA. In order to evaluate all the complicated domains of clinical decision making for individualizing the return to sports after ACL-R, numerous assessments need to be performed, including biopsychosocial concepts, impairment testing, strength and power testing, functional testing, and patient-reported outcomes (PROs). The optimum criteria for individualizing the return to sports after ACL-R remain elusive. However, since this decision needs to be made on a regular basis with the safety and performance of the patient involved, this FTA provides one method of making the decision quantitatively and qualitatively. Admittedly, there is no predictive validity of this system, but it does provide practical guidelines to facilitate the clinical decision-making process for return to sports. The clinical decision to return an athlete to competition has significant implications, ranging from the safety of the athlete to performance factors and actual litigation issues. Using a multifactorial FTA such as the one described provides quantitative and qualitative criteria for making an informed decision in the best interests of the athlete.
Empirical evaluation of interest-level criteria
NASA Astrophysics Data System (ADS)
Sahar, Sigal; Mansour, Yishay
1999-02-01
Efficient association rule mining algorithms already exist; however, as the size of databases increases, the number of patterns mined by the algorithms increases to such an extent that their manual evaluation becomes impractical. Automatic evaluation methods are therefore required in order to sift through the initial list of rules that the data mining algorithm outputs. These evaluation methods, or criteria, rank the association rules mined from the dataset. We empirically examined several such statistical criteria: new criteria as well as previously known ones. The empirical evaluation was conducted using several databases, including a large real-life dataset acquired from an order-by-phone grocery store, a dataset composed from web proxy logs, and several datasets from the UCI repository. We were interested in discovering whether the rankings performed by the various criteria are similar or easily distinguishable. Where significant differences exist, our evaluation detected three patterns of behavior among the eight criteria we examined. There is an obvious dilemma in determining how many association rules to choose (in accordance with the support and confidence parameters). The tradeoff is between having stringent parameters and, therefore, few rules, or lenient parameters and, thus, a multitude of rules. In many cases, our empirical evaluation revealed that most of the rules found with comparably strict parameters also ranked highly according to the interestingness criteria when lax parameters (producing significantly more association rules) were used. Finally, we discuss the association rules that ranked highest, explain why these results are sound, and show how they direct future research.
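For reference, the sketch below computes three classical interest-level measures for a rule A -> B from transaction data: support, confidence, and lift; the paper evaluates several further statistical criteria not shown here, and the transactions are toy data.

    # Hedged sketch: support, confidence, and lift for an association rule A -> B.
    transactions = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
                    {"bread", "milk", "butter"}, {"milk"}]

    def measures(antecedent, consequent, transactions):
        n = len(transactions)
        n_a = sum(antecedent <= t for t in transactions)       # subset test
        n_b = sum(consequent <= t for t in transactions)
        n_ab = sum((antecedent | consequent) <= t for t in transactions)
        support, confidence = n_ab / n, n_ab / n_a
        lift = confidence / (n_b / n)
        return support, confidence, lift

    print(measures({"bread"}, {"milk"}, transactions))   # (0.4, 0.667, 0.833)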
Rudan, Igor; Gibson, Jennifer L.; Ameratunga, Shanthi; El Arifeen, Shams; Bhutta, Zulfiqar A.; Black, Maureen; Black, Robert E.; Brown, Kenneth H.; Campbell, Harry; Carneiro, Ilona; Chan, Kit Yee; Chandramohan, Daniel; Chopra, Mickey; Cousens, Simon; Darmstadt, Gary L.; Gardner, Julie Meeks; Hess, Sonja Y.; Hyder, Adnan A.; Kapiriri, Lydia; Kosek, Margaret; Lanata, Claudio F.; Lansang, Mary Ann; Lawn, Joy; Tomlinson, Mark; Tsai, Alexander C.; Webster, Jayne
2008-01-01
This article provides detailed guidelines for the implementation of a systematic method for setting priorities in health research investments that was recently developed by the Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method comprises international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; and (xv) feedback and revision. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global. PMID:19090596
Code of Federal Regulations, 2014 CFR
2014-01-01
..., performance criteria, inspection requirements, marking requirements, testing equipment, test procedures and... purchase, installation, and use of the product being standardized. (b) Requirements for Department of... organization to such an extent that it would contain similar requirements and test methods for identical types...
Combined loading criterial influence on structural performance
NASA Technical Reports Server (NTRS)
Kuchta, B. J.; Sealey, D. M.; Howell, L. J.
1972-01-01
An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.
The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method
NASA Technical Reports Server (NTRS)
Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.
1975-01-01
The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
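A hedged sketch of the hypothesis-testing idea behind MMAC is given below: posterior probabilities of a bank of candidate models are updated from Gaussian residual likelihoods, after which the control would be a probability-weighted blend; the numbers are illustrative and the F-8C filter bank itself is not reproduced.

    # Hedged sketch: Bayesian update of model-hypothesis probabilities from the
    # residuals of a bank of filters, each with its predicted residual covariance.
    import numpy as np

    def update_probabilities(priors, residuals, covariances):
        """One Bayes update: p_i proportional to p_i * N(residual_i; 0, S_i)."""
        likes = []
        for r, S in zip(residuals, covariances):
            S = np.atleast_2d(S)
            r = np.atleast_1d(r)
            expo = -0.5 * r @ np.linalg.solve(S, r)
            likes.append(np.exp(expo) / np.sqrt(np.linalg.det(2 * np.pi * S)))
        post = np.asarray(priors) * np.asarray(likes)
        return post / post.sum()

    priors = [1 / 3, 1 / 3, 1 / 3]                 # three candidate flight-condition models
    residuals = [np.array([0.2]), np.array([1.5]), np.array([3.0])]
    covariances = [np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])]
    print(np.round(update_probabilities(priors, residuals, covariances), 3))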
ERIC Educational Resources Information Center
Lin, Jie
2006-01-01
The Bookmark standard-setting procedure was developed to address the perceived problems with the most popular method for setting cut-scores: the Angoff procedure (Angoff, 1971). The purposes of this article are to review the Bookmark procedure and evaluate it in terms of Berk's (1986) criteria for evaluating cut-score setting methods. The…
Determining the Number of Factors in P-Technique Factor Analysis
ERIC Educational Resources Information Center
Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael
2017-01-01
Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…
2012-12-01
evaluate predictive performance following methods described in Malinowski et al. (1997). Acceptance criteria and control limits will be based on... Malinowski, H., P. Marroum, V.R. Uppoor, et al. 1997. Draft guidance for industry: extended release solid oral dosage forms. In: Young D...
Towards an operational definition of pharmacy clinical competency
NASA Astrophysics Data System (ADS)
Douglas, Charles Allen
The scope of pharmacy practice and the training of future pharmacists have undergone a strategic shift over the last few decades. The pharmacy profession now recognizes greater pharmacist involvement in patient care activities. Towards this strategic objective, pharmacy schools are training future pharmacists to meet these new clinical demands. Pharmacy students have clerkships called Advanced Pharmacy Practice Experiences (APPEs), and these clerkships account for 30% of the professional curriculum. APPEs provide the only opportunity for students to refine clinical skills under the guidance of an experienced pharmacist. Nationwide, schools of pharmacy need to evaluate whether students have successfully completed APPEs and are ready to treat patients. Schools are left to their own devices to develop assessment programs that demonstrate to the public and regulatory agencies that students are clinically competent prior to graduation. There is no widely accepted method to evaluate whether these assessment programs actually discriminate between competent and non-competent students. The central purpose of this study is to demonstrate a rigorous method to evaluate the validity and reliability of APPE assessment programs. The method introduced in this study is applicable to a wide variety of assessment programs. To illustrate this method, the study evaluated new performance criteria with a novel rating scale. The study had two main phases. In the first phase, a Delphi panel was created to bring together expert opinions. Pharmacy schools nominated exceptional preceptors to join the Delphi panel. Delphi is a method for achieving agreement on complex issues among experts. The principal researcher recruited preceptors representing a variety of practice settings and geographical regions. The Delphi panel evaluated and refined the new performance criteria. In the second phase, the study produced a novel set of video vignettes that portrayed student performances based on recommendations of an expert panel. Pharmacy preceptors assessed the performances with the new performance criteria. Estimates of reliability and accuracy from preceptors' assessments can be used to establish benchmarks for future comparisons. Findings from the first phase suggested that preceptors hold a unique perspective in which APPE assessments are grounded in their relevance to clinical activities. The second phase analyzed assessment results from pharmacy preceptors who watched the video simulations. Reliability was higher for non-randomized than for randomized video simulations. Accuracy results showed that preceptors more readily identified high and low student performances than average ones. These results indicate the need for pharmacy preceptor training in performance assessment. The study illustrated a rigorous method to evaluate the validity and reliability of APPE assessment instruments.
Probabilistic methods for rotordynamics analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
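For readers unfamiliar with the eigenvalue-based instability test mentioned above, the following is a minimal sketch (not the authors' program): the second-order system M x'' + C x' + K x = 0 is cast in first-order state form and declared unstable if any eigenvalue of the state matrix has a positive real part. The matrices and the helper name is_unstable are illustrative assumptions.

```python
# Minimal sketch of an eigenvalue-based instability criterion for
# M x'' + C x' + K x = 0 (unstable if any state-matrix eigenvalue has Re > 0).
import numpy as np

def is_unstable(M, C, K):
    """Return True if the second-order system M x'' + C x' + K x = 0 is unstable."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    # First-order form z' = A z with state z = [x, x']
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    return np.any(np.linalg.eigvals(A).real > 0.0)

# Illustrative single mode with negative damping (hence unstable)
M = np.array([[1.0]]); C = np.array([[-0.1]]); K = np.array([[4.0]])
print(is_unstable(M, C, K))  # True
```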
In pursuit of goodness in bioethics: analysis of an exemplary article.
Hofmann, Bjørn; Magelssen, Morten
2018-06-15
What is good bioethics? Addressing this question is key for reinforcing and developing the field. In particular, a discussion of potential quality criteria can heighten awareness and contribute to the quality of bioethics publications. Accordingly, the objective of this article is threefold: first, we want to identify a set of criteria for quality in bioethics. Second, we want to illustrate the added value of a novel method: in-depth analysis of a single article with the aim of deriving quality criteria. The third and ultimate goal is to stimulate a broad and vivid debate on goodness in bioethics. An initial literature search reveals a range of diverse quality criteria. In order to expand on the realm of such quality criteria, we perform an in-depth analysis of an article that is acclaimed for being exemplary. The analysis results in eleven specific quality criteria for good bioethics in three categories: argumentative, empirical, and dialectic. Although we do not claim that the identified criteria are universal or absolute, we argue that they are fruitful for fueling a continuous constitutive debate on what is "good bioethics." Identifying, debating, refining, and applying such criteria is an important part of defining and improving bioethics.
Relaxing decision criteria does not improve recognition memory in amnesic patients.
Reber, P J; Squire, L R
1999-05-01
An important question about the organization of memory is whether information available in non-declarative memory can contribute to performance on tasks of declarative memory. Dorfman, Kihlstrom, Cork, and Misiaszek (1995) described a circumstance in which the phenomenon of priming might benefit recognition memory performance. They reported that patients receiving electroconvulsive therapy improved their recognition performance when they were encouraged to relax their criteria for endorsing test items as familiar. It was suggested that priming improved recognition by making information available about the familiarity of test items. In three experiments, we sought unsuccessfully to reproduce this phenomenon in amnesic patients. In Experiment 3, we reproduced the methods and procedure used by Dorfman et al. but still found no evidence for improved recognition memory following the manipulation of decision criteria. Although negative findings have their own limitations, our findings suggest that the phenomenon reported by Dorfman et al. does not generalize well. Our results agree with several recent findings that suggest that priming is independent of recognition memory and does not contribute to recognition memory scores.
Multi-criteria evaluation of sources for self-help domestic water supply
NASA Astrophysics Data System (ADS)
Nnaji, C. C.; Banigo, A.
2018-03-01
Two multi-criteria decision analysis methods were employed to evaluate six water sources. The analytical hierarchical process (AHP) ranked borehole highest with a rank of 0.321, followed by water board with a rank of 0.284. The other sources ranked far below these two as follows: water tanker (0.139), rainwater harvesting (0.117), shallow well (0.114) and stream (0.130). The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranked water board highest with a rank of 0.865, followed by borehole with a value of 0.778. Quality and risk of contamination were found to be the most influential criteria while seasonality was the least.
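As an illustration of how a TOPSIS ranking like the one above can be computed, here is a minimal sketch; the decision matrix, weights, and criterion directions are assumed values for demonstration, not the study's data.

```python
# Minimal TOPSIS sketch: closeness to the ideal solution, higher is better.
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria; w: weights; benefit: True where larger is better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector-normalize each criterion
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical scores (e.g. quality, cost, reliability) for three sources
X = np.array([[8.0, 2.0, 7.0],
              [9.0, 5.0, 8.0],
              [5.0, 3.0, 4.0]])
w = np.array([0.5, 0.2, 0.3])
print(topsis(X, w, benefit=np.array([True, False, True])))
```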
Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek
2016-01-15
In this study we perform ranking of analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces by the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios - metrological, economic and environmental - by applying different weights to the decision-making criteria. All three scenarios indicate a capillary electrophoresis-based procedure as the most preferable. Apart from that, the details of the ranking results differ among the three scenarios. A second run of rankings was done for scenarios that include only metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic one, while there is no correlation with the metrological one. This implies that green analytical chemistry can be brought into laboratories without analytical performance costs, and that it is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.
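The PROMETHEE ranking used above can be sketched as follows; the procedure scores, weights, and the usual (step) preference function are illustrative assumptions, not the published scenario data.

```python
# Minimal PROMETHEE II sketch with the usual (step) preference function.
import numpy as np

def promethee_ii(X, w, maximize):
    """Return net outranking flows; X is alternatives x criteria."""
    n = X.shape[0]
    phi_plus = np.zeros(n)
    phi_minus = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = np.where(maximize, X[a] - X[b], X[b] - X[a])
            pref = (w * (d > 0)).sum()          # P = 1 if strictly better, else 0
            phi_plus[a] += pref / (n - 1)
            phi_minus[b] += pref / (n - 1)
    return phi_plus - phi_minus                 # higher net flow = preferred

# Hypothetical procedures scored on recovery, cost and solvent consumption
X = np.array([[0.90, 40.0, 2.0],
              [0.85, 25.0, 1.0],
              [0.95, 60.0, 3.0]])
w = np.array([0.5, 0.3, 0.2])
print(promethee_ii(X, w, maximize=np.array([True, False, False])))
```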
The post-evaluation of green residential building in Ningxia
NASA Astrophysics Data System (ADS)
Wu, Yunna; Wang, Zhen
2017-06-01
Green residential buildings are attracting the attention of more and more people. However, the development of green residential buildings has been limited by single-standard requirements and a lack of multi-objective performance assessment. At the same time, the evaluation criteria system for green residential buildings is not comprehensive enough. First, resident questionnaire surveys were analyzed using SPSS software; the analysis found that experts and residents judge the green elements inconsistently, so owners' satisfaction is included in the post-evaluation criteria system for green residential buildings in the Ningxia area, which covers five aspects: preliminary construction work, the construction process, economic benefits, social benefits, and owners' satisfaction, combined with expert interviews. Second, in the post-evaluation many expert judgment matrices fail to meet the consistency requirement, so judgment matrix consistency is adjusted using a MATLAB program. The weights of the criteria and sub-criteria, as well as the expert weights, are then determined using a group AHP method. Finally, the grey clustering method is used to establish the post-evaluation model, and a real case, the Sai-shang project, is evaluated. The result obtained using the improved criteria system and method agrees closely with the actual outcome.
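The judgment-matrix consistency check underlying the adjustment step above can be illustrated with a short sketch (in Python rather than the paper's MATLAB program); the example matrix and the common 0.10 acceptance limit are assumptions, not taken from the study.

```python
# Minimal sketch of the AHP consistency ratio CR = CI / RI for a pairwise
# judgment matrix; matrices with CR >= 0.10 are usually flagged for adjustment.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    CI = (lam_max - n) / (n - 1)
    return CI / RI[n]

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
cr = consistency_ratio(A)
print(cr, "acceptable" if cr < 0.10 else "needs adjustment")
```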
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performances of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ∊, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ∊, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but it was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and body districts. © The Author(s) 2014.
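The windowed stopping rule described above can be sketched as a small helper; the metric history and the threshold value below are illustrative assumptions.

```python
# Minimal sketch of the windowed stopping rule: stop when the minimum of the
# optimization metric over the last three iterations no longer improves on the
# minimum over the fourth-to-sixth most recent iterations by more than eps.
def should_stop(history, eps):
    """history: list of metric values (e.g. MSE or harmonic energy), oldest first."""
    if len(history) < 6:
        return False
    recent = min(history[-3:])       # most recent three iterations
    older = min(history[-6:-3])      # fourth to sixth most recent iterations
    return (older - recent) <= eps   # improvement has stalled

mse_history = [410.0, 300.0, 255.0, 240.0, 239.6, 239.5, 239.4]
print(should_stop(mse_history, eps=1.0))  # True: little recent improvement
```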
Advanced technology component derating
NASA Astrophysics Data System (ADS)
Jennings, Timothy A.
1992-02-01
A technical study performed to determine the derating criteria of advanced technology components is summarized. The study covered existing criteria from AFSC Pamphlet 800-27 and the development of new criteria based on data, literature searches, and the use of advanced technology prediction methods developed in RADC-TR-90-72. The devices that were investigated were as follows: VHSIC, ASIC, MIMIC, Microprocessor, PROM, Power Transistors, RF Pulse Transistors, RF Multi-Transistor Packages, Photo Diodes, Photo Transistors, Opto-Electronic Couplers, Injection Laser Diodes, LED, Hybrid Deposited Film Resistors, Chip Resistors, and Capacitors and SAW devices. The results of the study are additional derating criteria that extend the range of AFSC Pamphlet 800-27. These data will be transitioned from the report to AFSC Pamphlet 800-27 for use by government and contractor personnel in derating electronics systems yielding increased safety margins and improved system reliability.
Effects of Selected Task Performance Criteria at Initiating Adaptive Task Reallocations
NASA Technical Reports Server (NTRS)
Montgomery, Demaris A.
2001-01-01
In the current report various performance assessment methods used to initiate mode transfers between manual control and automation for adaptive task reallocation were tested. Participants monitored two secondary tasks for critical events while actively controlling a process in a fictional system. One of the secondary monitoring tasks could be automated whenever operators' performance was below acceptable levels. Automation of the secondary task and transfer of the secondary task back to manual control were either human- or machine-initiated. Human-initiated transfers were based on the operator's assessment of the current task demands while machine-initiated transfers were based on the operators' performance. Different performance assessment methods were tested in two separate experiments.
Poitevin, Eric; Nicolas, Marine; Graveleau, Laetitia; Richoz, Janique; Andrey, Daniel; Monard, Florence
2009-01-01
A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-atomic emission spectroscopy in order to improve and update AOAC Official Method 984.27. The improvements involved optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed through wet digestion of food samples by microwave technology with either closed or open vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method was proved through a successful internal RT using experienced food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD and HorRat values) regarding SLV and RT. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to ISO 17025 Norm and AOAC acceptability criteria, and is proposed as an improved version of AOAC Official Method 984.27 for fortified food products, including infant formula.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannidis, A.; Papageorgiou, A., E-mail: apapa@auth.g; Perkoulidis, G.
In Greece more than 14,000 tonnes of infectious hospital waste are produced yearly; a significant part of it is still mismanaged. Only one off-site licensed incineration facility for hospital wastes is in operation, with the remainder of the market covered by various hydroclave and autoclave units, whereas numerous problems are still generally encountered regarding waste segregation, collection, transportation and management, as well as often excessive entailed costs. Everyday practices still include dumping the majority of solid hospital waste into household disposal sites and landfills after sterilization, largely without any preceding recycling and separation steps. Discussed in the present paper are the implemented and future treatment practices of infectious hospital wastes in Central Macedonia; produced quantities are reviewed, actual treatment costs are addressed critically, and the overall situation in Greece is discussed. Moreover, thermal treatment processes that could be applied for the treatment of infectious hospital wastes in the region are assessed via the multi-criteria decision method Analytic Hierarchy Process. Furthermore, a sensitivity analysis was performed, and the analysis demonstrated that a centralized autoclave or hydroclave plant near Thessaloniki is the best performing option, depending however on the selection and weighting of criteria in the multi-criteria process. Moreover, the study found that a common treatment option for all infectious hospital wastes produced in the Region of Central Macedonia could offer cost and environmental benefits. In general, the multi-criteria decision method, as well as the conclusions and remarks of this study, can be used as a basis for future planning and anticipation of the needs for investments in the area of medical waste management.
40 CFR 265.112 - Closure plan; amendment of plan.
Code of Federal Regulations, 2010 CFR
2010-07-01
... residues and contaminated containment system components, equipment, structures, and soils during partial... contaminated soils, methods for sampling and testing surrounding soils, and criteria for determining the extent of decontamination necessary to satisfy the closure performance standard; and (5) A detailed...
40 CFR 264.112 - Closure plan; amendment of plan.
Code of Federal Regulations, 2010 CFR
2010-07-01
... residues and contaminated containment system components, equipment, structures, and soils during partial... contaminated soils, methods for sampling and testing surrounding soils, and criteria for determining the extent of decontamination required to satisfy the closure performance standard; and (5) A detailed...
Survey of aircraft electrical power systems
NASA Technical Reports Server (NTRS)
Lee, C. H.; Brandner, J. J.
1972-01-01
Areas investigated include: (1) load analysis; (2) power distribution, conversion techniques and generation; (3) design criteria and performance capabilities of hydraulic and pneumatic systems; (4) system control and protection methods; (5) component and heat transfer systems cooling; and (6) electrical system reliability.
Engjom, Trond; Pham, Khahn Do-Chong; Erchinger, Friedemann; Haldorsen, Ingfrid Salvesen; Gilja, Odd Helge; Dimcevski, Georg; Havre, Roald Flesland
2018-03-26
We aimed to evaluate the agreement of single criteria and dedicated scores from transabdominal ultrasound of the pancreas (US) compared to standards by endoscopic ultrasound (EUS) and computed tomography (CT). In this observational cohort study performed in a tertiary care center, US and EUS were performed in 110 patients referred for suspected CP. Based on the Mayo score, 52 patients were diagnosed with CP. The sonographic findings obtained by both methods were registered. The number of criteria was counted and scored according to the Rosemont score. Agreement between the number of detected US and EUS criteria was substantial (ICC = 0.74 [0.61 - 0.83]). Adding Rosemont weighting improved the agreement (ICC = 0.88 [0.81 - 0.92]). Regarding individual criteria, the agreement was substantial for the detection of calcifications (κ = 0.86) and moderate for cysts and an irregular or dilated pancreatic duct (κ = 0.42 - 0.58). Agreement for the other criteria was poorer (κ ≤ 0.40). The diagnostic performance indices [95 % CI] of US for diagnosing CP (using the Mayo score as reference standard) were, for the unweighted score: sensitivity 0.65 [0.51 - 0.78], specificity 0.97 [0.87 - 1.00]; and for the Rosemont score: sensitivity 0.75 [0.61 - 0.86], specificity 0.95 [0.83 - 0.99]. The agreement between US and EUS for the unweighted and weighted scores was substantial. For the features calcifications, cysts and main pancreatic duct (MPD) changes, agreement was moderate to substantial. For the other detected US criteria, the agreement with EUS was too poor to be clinically relevant. © Georg Thieme Verlag KG Stuttgart · New York.
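The kappa values reported above measure chance-corrected agreement between two methods; a minimal sketch for the binary case is given below, with hypothetical US/EUS ratings rather than the study's data.

```python
# Minimal sketch of Cohen's kappa for two binary raters
# (e.g. presence/absence of calcifications on US vs EUS).
def cohens_kappa(a, b):
    """a, b: equal-length lists of 0/1 ratings from two methods."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_a1 = sum(a) / n
    p_b1 = sum(b) / n
    p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

us  = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
eus = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(us, eus), 2))  # 0.8 for these hypothetical ratings
```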
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382787
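The improvement thresholds described above can be applied with a trivial classification helper; the function name and example score are illustrative only.

```python
# Minimal sketch of the Total Improvement Score thresholds
# (>=30 minimal, >=45 moderate, >=70 major improvement) on a 0-100 scale.
def classify_improvement(total_improvement_score):
    if total_improvement_score >= 70:
        return "major improvement"
    if total_improvement_score >= 45:
        return "moderate improvement"
    if total_improvement_score >= 30:
        return "minimal improvement"
    return "no qualifying improvement"

print(classify_improvement(52))  # moderate improvement
```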
Validity and Reliability of Dermoscopic Criteria Used to Differentiate Nevi From Melanoma
Carrera, Cristina; Marchetti, Michael A.; Dusza, StephenW.; Argenziano, Giuseppe; Braun, Ralph P.; Halpern, Allan C.; Jaimes, Natalia; Kittler, Harald J.; Malvehy, Josep; Menzies, Scott W.; Pellacani, Giovanni; Puig, Susana; Rabinovitz, Harold S.; Scope, Alon; Soyer, H. Peter; Stolz, Wilhelm; Hofmann-Wellenhof, Rainer; Zalaudek, Iris; Marghoob, Ashfaq A.
2017-01-01
IMPORTANCE The comparative diagnostic performance of dermoscopic algorithms and their individual criteria are not well studied. OBJECTIVES To analyze the discriminatory power and reliability of dermoscopic criteria used in melanoma detection and compare the diagnostic accuracy of existing algorithms. DESIGN, SETTING, AND PARTICIPANTS This was a retrospective, observational study of 477 lesions (119 melanomas [24.9%] and 358 nevi [75.1%]), which were divided into 12 image sets that consisted of 39 or 40 images per set. A link on the International Dermoscopy Society website from January 1, 2011, through December 31, 2011, directed participants to the study website. Data analysis was performed from June 1, 2013, through May 31, 2015. Participants included physicians, residents, and medical students, and there were no specialty-type or experience-level restrictions. Participants were randomly assigned to evaluate 1 of the 12 image sets. MAIN OUTCOMES AND MEASURES Associations with melanoma and intraclass correlation coefficients (ICCs) were evaluated for the presence of dermoscopic criteria. Diagnostic accuracy measures were estimated for the following algorithms: the ABCD rule, the Menzies method, the 7-point checklist, the 3-point checklist, chaos and clues, and CASH (color, architecture, symmetry, and homogeneity). RESULTS A total of 240 participants registered, and 103 (42.9%) evaluated all images. The 110 participants (45.8%) who evaluated fewer than 20 lesions were excluded, resulting in data from 130 participants (54.2%), 121 (93.1%) of whom were regular dermoscopy users. Criteria associated with melanoma included marked architectural disorder (odds ratio [OR], 6.6; 95% CI, 5.6–7.8), pattern asymmetry (OR, 4.9; 95% CI, 4.1–5.8), nonorganized pattern (OR, 3.3; 95% CI, 2.9–3.7), border score of 6 (OR, 3.3; 95% CI, 2.5–4.3), and contour asymmetry (OR, 3.2; 95% CI, 2.7–3.7) (P < .001 for all). Most dermoscopic criteria had poor to fair interobserver agreement. Criteria that reached moderate levels of agreement included comma vessels (ICC, 0.44; 95% CI, 0.40–0.49), absence of vessels (ICC, 0.46; 95% CI, 0.42–0.51), dark brown color (ICC, 0.40; 95% CI, 0.35–0.44), and architectural disorder (ICC, 0.43; 95% CI, 0.39–0.48). The Menzies method had the highest sensitivity for melanoma diagnosis (95.1%) but the lowest specificity (24.8%) compared with any other method (P < .001). The ABCD rule had the highest specificity (59.4%). All methods had similar areas under the receiver operating characteristic curves. CONCLUSIONS AND RELEVANCE Important dermoscopic criteria for melanoma recognition were revalidated by participants with varied experience. Six algorithms tested had similar but modest levels of diagnostic accuracy, and the interobserver agreement of most individual criteria was poor. PMID:27074267
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston's anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston's H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution's treatment plan must agree with measurement within 7% (TLD doses) and ≥85% of pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
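The gamma analysis referred to above combines a dose-difference tolerance with a distance-to-agreement tolerance; a simplified, one-dimensional, globally normalized sketch is given below, with illustrative dose profiles and criteria rather than IROC data.

```python
# Simplified 1-D gamma pass-rate sketch: for each measured point, take the
# minimum combined dose-difference/distance metric over all reference points
# and count the fraction with gamma <= 1.
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol, dist_tol_mm):
    x = np.arange(len(ref)) * spacing_mm
    norm = np.max(ref)                                # global dose normalization
    gammas = []
    for i, d_m in enumerate(meas):
        dd = (ref - d_m) / (dose_tol * norm)          # dose-difference term
        dx = (x - x[i]) / dist_tol_mm                 # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

ref = np.array([1.00, 0.98, 0.95, 0.80, 0.50, 0.20, 0.05])
meas = ref * 1.05                                     # 5% systematic offset
print(gamma_pass_rate(ref, meas, spacing_mm=2.0, dose_tol=0.07, dist_tol_mm=4.0))
```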
Judging Surgical Research: How Should We Evaluate Performance and Measure Value?
Souba, Wiley W.; Wilmore, Douglas W.
2000-01-01
Objective To establish criteria to evaluate performance in surgical research, and to suggest strategies to optimize research in the future. Summary Background Data Research is an integral component of the academic mission, focusing on important clinical problems, accounting for surgical advances, and providing training and mentoring for young surgeons. With constraints on healthcare resources, there is increasing pressure to generate clinical revenues at the expense of the time and effort devoted to surgical research. An approach that would assess the value of research would allow prioritization of projects. Further, alignment of high-priority research projects with clinical goals would optimize research gains and maximize the clinical enterprise. Methods The authors reviewed performance criteria applied to industrial research and modified these criteria to apply to surgical research. They reviewed several programs that align research objectives with clinical goals. Results Performance criteria were categorized along several dimensions: internal measures (quality, productivity, innovation, learning, and development), customer satisfaction, market share, and financial indices (cost and profitability). A “report card” was proposed to allow the assessment of research in an individual department or division. Conclusions The department’s business strategy can no longer be divorced from its research strategy. Alignment between research and clinical goals will maximize the department’s objectives but will create the need to modify existing hierarchical structures and reward systems. Such alignment appears to be the best way to ensure the success of surgical research in the future. PMID:10862192
Do PICU patients meet technical criteria for performing indirect calorimetry?
Beggs, Megan R; Garcia Guerra, Gonzalo; Larsen, Bodil M K
2016-10-01
Indirect calorimetry (IC) is considered gold standard for assessing energy needs of critically ill children as predictive equations and clinical status indicators are often unreliable. Accurate assessment of energy requirements in this vulnerable population is essential given the high risk of over or underfeeding and the consequences thereof. The proportion of patients and patient days in pediatric intensive care (PICU) for which energy expenditure (EE) can be measured using IC is currently unknown. In the current study, we aimed to quantify the daily proportion of consecutive PICU patients who met technical criteria to perform indirect calorimetry and describe the technical contraindications when criteria were not met. Prospective, observational, single-centre study conducted in a cardiac and general PICU. All consecutive patients admitted for at least 96 h were included in the study. Variables collected for each patient included age at admission, admission diagnosis, and if technical criteria for indirect calorimetry were met. Technical criteria variables were collected within the same 2 h each morning and include: provision of supplemental oxygen, ventilator settings, endotracheal tube (ETT) leak, diagnosis of chest tube air leak, provision of external gas support (i.e. nitric oxide), and provision of extracorporeal membrane oxygenation (ECMO). 288 patients were included for a total of 3590 patient days between June 2014 and February 2015. The main reasons for admission were: surgery (cardiac and non-cardiac), respiratory distress, trauma, oncology and medicine/other. The median (interquartile range) patient age was 0.7 (0.3-4.6) years. The median length of PICU stay was 7 (5-14) days. Only 34% (95% CI, 32.4-35.5%) of patient days met technical criteria for IC. For patients less than 6 months of age, technical criteria were met on significantly fewer patient days (29%, p < 0.01). Moreover, 27% of patients did not meet technical criteria for IC on any day during their PICU stay. Most frequent reasons for why IC could not be performed included supplemental oxygen, ECMO, and ETT leak. In the current study, technical criteria to perform IC in the PICU were not met for 27% of patients and were not met on 66% of patient days. Moreover, criteria were met on only 29% of days for infants 6 months and younger where children 24 months of age and older still only met criteria on 40% of patient days. This data represents a major gap in the feasibility of current recommendations for assessing energy requirements of this population. Future studies are needed to improve methods of predicting and measuring energy requirements in critically ill children who do not meet current criteria for indirect calorimetry. Copyright © 2016 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.
1999-01-01
A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data except in structural applications where interlaminar stresses are important which may cause failure mechanisms such as debonding or delaminations.
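A maximum-strain failure check of the kind listed above can be sketched for a single lamina; the strain state and allowables below are illustrative assumptions, not values from the analysis.

```python
# Minimal sketch of a maximum-strain failure criterion for one lamina:
# failure is flagged if any strain component exceeds its allowable.
def max_strain_failed(eps, allowables):
    """eps: (e1, e2, gamma12); allowables: tensile/compressive/shear strain limits."""
    e1, e2, g12 = eps
    return (e1 > allowables["e1t"] or e1 < -allowables["e1c"] or
            e2 > allowables["e2t"] or e2 < -allowables["e2c"] or
            abs(g12) > allowables["g12"])

allow = {"e1t": 0.0105, "e1c": 0.0085, "e2t": 0.0050, "e2c": 0.0150, "g12": 0.0180}
print(max_strain_failed((0.0042, 0.0061, 0.0030), allow))  # True: e2 exceeds e2t
```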
NASA Technical Reports Server (NTRS)
1989-01-01
An assessment is made of quantitative methods and measures for identifying trends in launch commit criteria (LCC) performance. A statistical performance trending analysis pilot study was processed and compared to STS-26 mission data. This study used four selected shuttle measurement types (solid rocket booster, external tank, space shuttle main engine, and range safety switch safe and arm device) from the five missions prior to mission 51-L. After obtaining raw data coordinates, each set of measurements was processed to obtain statistical confidence bounds and mean data profiles for each of the selected measurement types. STS-26 measurements were compared to the statistical database profiles to verify the statistical capability of assessing occurrences of data trend anomalies and abnormal time-varying operational conditions associated with data amplitude and phase shifts.
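The trending idea described above, building a mean profile with confidence bounds from prior missions and flagging out-of-bound samples in a new mission, can be sketched as follows with synthetic data.

```python
# Minimal sketch: mean profile and +/- k*sigma bounds from prior missions,
# then flag new-mission samples falling outside the bounds.
import numpy as np

def trend_bounds(prior_missions, k=2.0):
    """prior_missions: array (n_missions, n_time); returns mean, lower, upper."""
    mean = prior_missions.mean(axis=0)
    sd = prior_missions.std(axis=0, ddof=1)
    return mean, mean - k * sd, mean + k * sd

def flag_anomalies(new_profile, lower, upper):
    return np.where((new_profile < lower) | (new_profile > upper))[0]

rng = np.random.default_rng(0)
prior = 100.0 + rng.normal(0.0, 1.0, size=(5, 20))   # five prior missions
new = 100.0 + rng.normal(0.0, 1.0, size=20)
new[12] += 8.0                                       # injected anomaly
mean, lo, hi = trend_bounds(prior)
print(flag_anomalies(new, lo, hi))                   # expected to include index 12
```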
A Tool for the Automated Design and Evaluation of Habitat Interior Layouts
NASA Technical Reports Server (NTRS)
Simon, Matthew A.; Wilhite, Alan W.
2013-01-01
The objective of space habitat design is to minimize mass and system size while providing adequate space for all necessary equipment and a functional layout that supports crew health and productivity. Unfortunately, development and evaluation of interior layouts is often ignored during conceptual design because of the subjectivity and long times required using current evaluation methods (e.g., human-in-the-loop mockup tests and in-depth CAD evaluations). Early, more objective assessment could prevent expensive design changes that may increase vehicle mass and compromise functionality. This paper describes a new interior design evaluation method to enable early, structured consideration of habitat interior layouts. This interior layout evaluation method features a comprehensive list of quantifiable habitat layout evaluation criteria, automatic methods to measure these criteria from a geometry model, and application of systems engineering tools and numerical methods to construct a multi-objective value function measuring the overall habitat layout performance. In addition to a detailed description of this method, a C++/OpenGL software tool which has been developed to implement this method is also discussed. This tool leverages geometry modeling coupled with collision detection techniques to identify favorable layouts subject to multiple constraints and objectives (e.g., minimize mass, maximize contiguous habitable volume, maximize task performance, and minimize crew safety risks). Finally, a few habitat layout evaluation examples are described to demonstrate the effectiveness of this method and tool to influence habitat design.
Optimum design of space storable gas/liquid coaxial injectors.
NASA Technical Reports Server (NTRS)
Burick, R. J.
1972-01-01
Review of the results of a program of single-element, cold-flow/hot-fire experiments performed for the purpose of establishing design criteria for a high-performance gas/liquid (FLOX/CH4) coaxial injector. The approach and the techniques employed resulted in the direct design of an injector that met or exceeded the performance and chamber compatibility goals of the program without any need for the traditional 'cut-and-try' development methods.
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
TACCDAS Testbed Human Factors Evaluation Methodology,
1980-03-01
TEST METHOD: development of performance criteria; test participant identification; control of... Major milestones involved in the evaluation process leading up to the evaluation of the complete testbed in the field are identified. Test methods and... inevitably will be different in several ways from the intended system as foreseen by the system designers. The system users provide insights into these
Passive sampling of gas-phase air toxics and criteria pollutants has become an attractive monitoring method in human exposure studies due to the relatively low sampling cost and ease of use. This study evaluates the performance of Model 3300 Ogawa(TM) Passive NO2 Samplers and 3...
Lobato, Ramiro D; Lagares, Alfonso; Villena, Victoria; García Seoane, Jorge; Jiménez-Roldán, Luis; Munarriz, Pablo M; Castaño-Leon, Ana M; Alén, José F
2015-01-01
The design of an appropriate method for the selection of medical graduates for residency posts is extremely important, not only for the efficiency of the method itself (accurate identification of most competent candidates), but also for its influence on the study and teaching methodologies operating in medical schools. Currently, there is a great variation in the criteria used in different countries and there is no definitively appropriate method. The use of isolated or combined criteria, such as the marks obtained by students in medical schools, their performance in tests of theoretical knowledge and evaluations of clinical competence, or personal interviews, have a limited value for identifying those candidates who will perform better during the residency and later on during independent practice. To analyse the variability in the methodologies used for the selection of residents employed in different countries, in particular those used in the United Kingdom and USA, where external agencies and medical schools make systematic analyses of curriculum development. The advantages and disadvantages of national or transnational licensing examinations on the process of convergence and harmonization of medical degrees and residency programmes through Europe are discussed. The present analysis is used to design a new and more efficient multi-criteria methodology for resident selection in Spain, which will be published in the next issue of this journal. Since the multi-criteria methods used in UK and USA appear to be most consistent, these have been employed for designing the new methodology that could be applied in Spain. Although many experts in medical education reject national examinations for awarding medical degrees or ranking candidates for residency posts, it seems that, when appropriately designed, they can be used to verify the level of competence of graduating students without necessarily distorting curriculum implementation or improvement. Copyright © 2014 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
36 CFR 1194.31 - Functional performance criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Functional performance... Performance Criteria § 1194.31 Functional performance criteria. (a) At least one mode of operation and... audio and enlarged print output working together or independently, or support for assistive technology...
Chisti, Mohammod Jobayer; Salam, Mohammed Abdus; Shahid, Abu S. M. S. B.; Shahunja, K. M.; Das, Sumon Kumar; Faruque, Abu Syed Golam; Bardhan, Pradip Kumar; Ahmed, Tahmeed
2017-01-01
Evidence on the diagnosis of tuberculosis (TB) following the World Health Organization (WHO) criteria in children with severe acute malnutrition (SAM) is lacking. We sought to evaluate the WHO criteria for the diagnosis of TB in such children. In this prospective study, we enrolled SAM children aged <5 years with radiological pneumonia. We collected induced sputum and gastric lavage for smear microscopy, mycobacterial culture, and Xpert MTB/RIF. Using the last 2 methods as the gold standard, we determined the sensitivity, specificity, and positive and negative predictive values of the WHO criteria (n = 388). However, Xpert MTB/RIF was performed on the last 214 children. Compared to mycobacterial culture–confirmed TB, the sensitivity and specificity (95% confidence interval) of the WHO criteria were 40% (14% to 73%) and 84% (80% to 87%), respectively. Compared to culture- and/or Xpert MTB/RIF-confirmed TB, the values were 22% (9% to 43%) and 83% (79% to 87%), respectively. Thus, the good specificity of the WHO criteria may help minimize overtreatment with anti-TB therapy in SAM children, especially in resource-limited settings. PMID:28229100
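The diagnostic indices reported above follow directly from a 2x2 table of criteria results against the reference standard; the sketch below uses illustrative counts chosen only to be roughly consistent with the reported 40% sensitivity and 84% specificity, not the study's actual table.

```python
# Minimal sketch of sensitivity, specificity, PPV and NPV from a 2x2 table.
def diagnostic_indices(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Hypothetical counts: 10 reference-positive and 378 reference-negative children
sens, spec, ppv, npv = diagnostic_indices(tp=4, fp=60, fn=6, tn=318)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```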
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria, multi-stakeholder problems are less rigorously formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while avoiding water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
Development of a hazard-based method for evaluating the fire safety of passenger trains
DOT National Transportation Integrated Search
1999-01-01
The fire safety of U.S. passenger rail trains currently is addressed through small-scale flammability and smoke emission tests and performance criteria promulgated by the Federal Railroad Administration (FRA). The FRA approach relies heavily on test ...
Nonclinical dose formulation analysis method validation and sample analysis.
Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D
2010-12-01
Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters for bioanalysis and formulation analysis validations that overlap include: recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence in bioanalytical and drug product validations typically center around the acceptance criteria used. As the dose formulation samples are not true "unknowns", the concept of quality control samples that cover the entire range of the standard curve serving as the indication for the confidence in the data generated from the "unknown" study samples may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.
Sustainability performance evaluation: Literature review and future directions.
Büyüközkan, Gülçin; Karabulut, Yağmur
2018-07-01
Current global economic activities are increasingly being perceived as unsustainable. Despite the high number of publications, sustainability science remains highly dispersed over diverse approaches and topics. This article aims to provide a structured overview of sustainability performance evaluation related publications and to document the current state of literature, categorize publications, analyze and link trends, as well as highlight gaps and provide research recommendations. 128 articles between 2007 and 2018 are identified. The results suggest that sustainability performance evaluation models shall be more balanced, suitable criteria and their interrelations shall be well defined and subjectivity of qualitative criteria inherent to sustainability indicators shall be considered. To address this subjectivity, group decision-making techniques and other analytical methods that can deal with uncertainty, conflicting indicators, and linguistic evaluations can be used in future works. By presenting research gaps, this review stimulates researchers to establish practically applicable sustainability performance evaluation frameworks to help assess and compare the degree of sustainability, leading to more sustainable business practices. The review is unique in defining corporate sustainability performance evaluation for the first time, exploring the gap between sustainability accounting and sustainability assessment, and coming up with a structured overview of innovative research recommendations about integrating analytical assessment methods into conceptual sustainability frameworks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Assessing the performance of regional landslide early warning models: the EDuMaP method
NASA Astrophysics Data System (ADS)
Calvello, M.; Piciullo, L.
2015-10-01
The paper proposes the evaluation of the technical performance of a regional landslide early warning system by means of an original approach, called EDuMaP method, comprising three successive steps: identification and analysis of the Events (E), i.e. landslide events and warning events derived from available landslides and warnings databases; definition and computation of a Duration Matrix (DuMa), whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model Performance (P) by means of performance criteria and indicators applied to the duration matrix. During the first step, the analyst takes into account the features of the warning model by means of ten input parameters, which are used to identify and classify landslide and warning events according to their spatial and temporal characteristics. In the second step, the analyst computes a time-based duration matrix having a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The proposed method is based on a framework clearly distinguishing between local and regional landslide early warning systems as well as among correlation laws, warning models and warning systems. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warnings data from the municipal early warning system operating in Rio de Janeiro (Brazil).
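The duration matrix at the core of the EDuMaP method can be sketched as a simple accumulation over a classified time series; the class definitions, time step, and data below are assumptions for illustration, not the Rio de Janeiro records.

```python
# Minimal sketch of a duration matrix: accumulate time spent in each
# (warning class, landslide class) combination.
import numpy as np

def duration_matrix(warning_classes, landslide_classes, n_warn, n_land, dt_hours=1.0):
    """Both inputs: per-time-step class indices (0-based) of equal length."""
    D = np.zeros((n_warn, n_land))
    for w, l in zip(warning_classes, landslide_classes):
        D[w, l] += dt_hours
    return D

# Hypothetical: 3 warning classes (none, moderate, high), 2 landslide classes
warnings_ts   = [0, 0, 1, 2, 2, 1, 0, 0]
landslides_ts = [0, 0, 0, 1, 1, 0, 0, 0]
print(duration_matrix(warnings_ts, landslides_ts, n_warn=3, n_land=2))
```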
Assessment of shrimp farming impact on groundwater quality using analytical hierarchy process
NASA Astrophysics Data System (ADS)
Anggie, Bernadietta; Subiyanto, Arief, Ulfah Mediaty; Djuniadi
2018-03-01
The expansion of shrimp farming affects groundwater quality conditions. Conventional assessment of the impact of shrimp farming on groundwater quality has limited accuracy. This paper presents the implementation of the Analytical Hierarchy Process (AHP) method for assessing the impact of shrimp farming on groundwater quality. The data used are shrimp farming impact data from one region in Indonesia, covering 2006-2016. Eight criteria, divided into 49 sub-criteria, were used in this study. AHP weighting was performed to determine the importance of the criteria and sub-criteria. The final priority classes of shrimp farming impact were obtained from the calculated criteria and sub-criteria weights. Validation was done by comparing the priority classes of shrimp farming impact with water quality conditions. The results show that 50% of the total area fell in the moderate priority class, 37% in the low priority class and 13% in the high priority class. The validation showed that the impact assessment agrees closely with groundwater quality conditions. This study shows that AHP-based assessment achieves higher accuracy for shrimp farming impact and can be used as a basis for fisheries planning to deal with the impacts generated.
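The AHP weighting step described above derives criterion weights from pairwise comparison matrices; a minimal sketch using the principal eigenvector is shown below, with a small illustrative matrix standing in for the study's eight criteria.

```python
# Minimal sketch of AHP criterion weights from a pairwise comparison matrix
# via the principal (Perron) eigenvector, normalized to sum to one.
import numpy as np

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    principal = vecs[:, np.argmax(vals.real)].real
    return principal / principal.sum()          # normalized priority vector

A = np.array([[1.0, 4.0, 7.0],
              [1/4, 1.0, 3.0],
              [1/7, 1/3, 1.0]])
print(ahp_weights(A))   # largest weight on the first (dominant) criterion
```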
Determination of Dornic acidity as a method to select donor milk in a milk bank.
Vázquez-Román, Sara; Garcia-Lara, Nadia Raquel; Escuder-Vieco, Diana; Chaves-Sánchez, Fernando; De la Cruz-Bertolo, Javier; Pallas-Alonso, Carmen Rosa
2013-02-01
Dornic acidity may be an indirect measurement of milk's bacteria content and its quality. There are no uniform criteria among different human milk banks on milk acceptance. The main aim of this study is to report the correlation between Dornic acidity and bacterial growth in donor milk in order to validate the Dornic acidity value as an adequate method to select milk prior to its pasteurization. From 105 pools, 4-mL samples of human milk were collected. Dornic acidity measurements and cultures on blood agar and MacConkey agar were performed. Based on Dornic acidity degrees, we classified milk into three quality categories: top quality (acidity <4°D), intermediate (acidity between 4°D and 7°D), and milk unsuitable to be consumed (acidity ≥ 8°D). Spearman's correlation coefficient was used to perform the statistical analysis. Seventy percent of the samples had Dornic acidity under 4°D, and 88% had a value under 8°D. A weak positive correlation was observed between the bacterial growth in milk and Dornic acidity. The overall discrimination performance of Dornic acidity was higher for predicting growth of Gram-negative organisms. In milk with Dornic acidity of ≥4°D, the measurement has a sensitivity of 100% for detecting all the samples with Gram-negative bacterial growth of over 10^5 colony-forming units/mL. The correlation between Dornic acidity and bacterial growth in donor milk is weak but positive. The measurement of Dornic acidity could be considered as a simple and economical method to select milk to pasteurize in a human milk bank based on quality and safety criteria.
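The acceptance categories described above can be applied with a small helper; the thresholds follow the abstract, and values between 7°D and 8°D are treated as intermediate here on the assumption that Dornic degrees are reported as integers.

```python
# Minimal sketch of the Dornic acidity acceptance categories
# (<4 top quality, 4-7 intermediate, >=8 unsuitable for consumption).
def classify_dornic(acidity_degrees):
    if acidity_degrees < 4:
        return "top quality"
    if acidity_degrees < 8:
        return "intermediate quality"
    return "unsuitable for consumption"

for d in (2, 5, 9):
    print(d, classify_dornic(d))
```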
Assessment Criteria for Competency-Based Education: A Study in Nursing Education
ERIC Educational Resources Information Center
Fastré, Greet M. J.; van der Klink, Marcel R.; Amsing-Smit, Pauline; van Merriënboer, Jeroen J.
2014-01-01
This study examined the effects of type of assessment criteria (performance-based vs. competency-based), the relevance of assessment criteria (relevant criteria vs. all criteria), and their interaction on secondary vocational education students' performance and assessment skills. Students on three programmes in the domain of nursing and care…
Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrix, C.E.
Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economics (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter deactivated/activated, altogether 12 800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used, which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
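For reference, a minimal VIKOR sketch (the method recommended above) is given below; the decision matrix, weights, criterion directions, and the compromise parameter v = 0.5 are illustrative assumptions, not the benchmark scenario.

```python
# Minimal VIKOR sketch: compromise ranking index Q (lower is better).
import numpy as np

def vikor(X, w, benefit, v=0.5):
    """Return Q scores for an alternatives x criteria matrix X."""
    f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    norm = (f_best - X) / (f_best - f_worst)     # 0 at the best value, 1 at the worst
    S = (w * norm).sum(axis=1)                   # group utility
    R = (w * norm).max(axis=1)                   # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# Hypothetical alternatives scored on cost, throughput and robustness
X = np.array([[250.0, 16.0, 12.0],
              [200.0, 20.0,  8.0],
              [300.0, 11.0, 14.0]])
w = np.array([0.4, 0.35, 0.25])
print(vikor(X, w, benefit=np.array([False, True, True])))   # rank by ascending Q
```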
NASA Astrophysics Data System (ADS)
Chui, T. F. M.; Yang, Y.
2017-12-01
Green infrastructures (GI) have been widely used to mitigate flood risk, improve surface water quality, and restore predevelopment hydrologic regimes. Commonly used GI include bioretention systems, porous pavements, and green roofs. They are normally sized to fulfil different design criteria (e.g., providing certain storage depths, limiting peak surface flow rates) that are formulated for current climate conditions. While GI commonly have long lifespans, the sensitivity of their performance to climate change is unclear. This study first proposes a method to formulate suitable design criteria to meet different management interests (e.g., different levels of first flush reduction and peak flow reduction). Typical designs of GI are then proposed. In addition, a high-resolution stochastic design storm generator using copulas and a random cascade model is developed and calibrated using recorded rainfall time series. A few climate change scenarios are then generated by varying the duration and depth of design storms and changing the parameters of the calibrated storm generator. Finally, the performance of GI with typical designs under the randomly synthesized design storms is assessed using numerical modeling. The robustness of the designs is obtained by comparing their performance in the future scenarios to that in the current one. This study overall examines the robustness of current GI design criteria under uncertain future climate conditions, demonstrating whether current GI design criteria should be modified to account for climate change.
Could martial arts fall training be safe for persons with osteoporosis?: a feasibility study
2010-01-01
Background Osteoporosis is a well-established risk factor for fall-related hip fractures. Training fall arrest strategies, such as martial arts (MA) fall techniques, might be useful to prevent hip fractures in persons with osteoporosis, provided that the training itself is safe. This study was conducted to determine whether MA fall training would be safe for persons with osteoporosis, extrapolated from the data of young adults and using stringent safety criteria. Methods Young adults performed sideways and forward MA falls from a kneeling position on both a judo mat and a mattress, as well as from a standing position on a mattress. Hip impact forces and kinematic data were collected. For each condition, the highest hip impact force was compared with two safety criteria based on the femoral fracture load and the use of a hip protector. Results The highest hip impact force during the various fall conditions ranged between 1426 N and 3132 N. Sideways falls from a kneeling and a standing position met the safety criteria if performed on the mattress (max 1426 N and 2012 N, respectively) but not if the falls from a kneeling position were performed on the judo mat (max 2219 N). Forward falls only met the safety criteria if performed from a kneeling position on the mattress (max 2006 N). Hence, forward falls from a kneeling position on a judo mat (max 2474 N) and forward falls from a standing position on the mattress (max 3132 N) did not meet both safety criteria. Conclusions Based on the data of young adults and the safety criteria, the MA fall training was expected to be safe for persons with osteoporosis if appropriate safety measures are taken: during the training, persons with osteoporosis should wear hip protectors that could attenuate the maximum hip impact force by at least 65%, perform the fall exercises on a thick mattress, and avoid forward fall exercises from a standing position. Hence, a modified MA fall training might be useful to reduce hip fracture risk in persons with osteoporosis. PMID:20412560
Investigation of High-Angle-of-Attack Maneuver-Limiting Factors. Part 1. Analysis and Simulation
1980-12-01
useful, are not so satisfying or instructive as the more positive identification of causal factors offered by the methods developed in Reference 5...same methods be applied to additional high-performance fighter aircraft having widely differing high AOA handling characteristics to see if further...predictions and the nonlinear model results were resolved. The second task involved development of methods, criteria, and an associated pilot rating scale, for
American Thyroid Association Statement on Remote-Access Thyroid Surgery
Bernet, Victor; Fahey, Thomas J.; Kebebew, Electron; Shaha, Ashok; Stack, Brendan C.; Stang, Michael; Steward, David L.; Terris, David J.
2016-01-01
Background: Remote-access techniques have been described over the recent years as a method of removing the thyroid gland without an incision in the neck. However, there is confusion related to the number of techniques available and the ideal patient selection criteria for a given technique. The aims of this review were to develop a simple classification of these approaches, describe the optimal patient selection criteria, evaluate the outcomes objectively, and define the barriers to adoption. Methods: A review of the literature was performed to identify the described techniques. A simple classification was developed. Technical details, outcomes, and the learning curve were described. Expert opinion consensus was formulated regarding recommendations for patient selection and performance of remote-access thyroid surgery. Results: Remote-access thyroid procedures can be categorized into endoscopic or robotic breast, bilateral axillo-breast, axillary, and facelift approaches. The experience in the United States involves the latter two techniques. The limited data in the literature suggest long operative times, a steep learning curve, and higher costs with remote-access thyroid surgery compared with conventional thyroidectomy. Nevertheless, a consensus was reached that, in appropriate hands, it can be a viable option for patients with unilateral small nodules who wish to avoid a neck incision. Conclusions: Remote-access thyroidectomy has a role in a small group of patients who fit strict selection criteria. These approaches require an additional level of expertise, and therefore should be done by surgeons performing a high volume of thyroid and robotic surgery. PMID:26858014
Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.
Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O
2018-01-01
The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in the river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and also to improve its solution quality. In addition, some potentially problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated annealing-based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the subject literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems.
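For reference, the exponential acceptance rule the abstract refers to is the standard Metropolis criterion sketched below; the cheaper replacement adopted by the authors is not described in the abstract and is therefore not reproduced here.

```python
import math
import random

def metropolis_accept(delta_cost, temperature):
    """Classic simulated-annealing acceptance: always accept an improvement,
    otherwise accept with probability exp(-delta_cost / temperature)."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)

# e.g., a worsening move of 5 cost units at temperature 10 is accepted ~61% of the time
print(sum(metropolis_accept(5.0, 10.0) for _ in range(10_000)) / 10_000)
```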
NASA Astrophysics Data System (ADS)
Rahmanita, E.; Widyaningrum, V. T.; Kustiyahningsih, Y.; Purnama, J.
2018-04-01
SMEs have a very important role in the development of the economy in Indonesia. SMEs assist the government in creating new jobs and can support household income. The large number of SMEs in Madura and the number of measurement indicators in SME mapping call for a structured method. This research uses the Fuzzy Analytic Network Process (FANP) method for SME performance measurement. The FANP method can handle data that contain uncertainty, and a consistency index is available for checking the judgements used in decisions. Performance measurement in this study is based on the perspectives of the Balanced Scorecard. The research approach integrates the internal business process perspective, the learning and growth perspective, and the fuzzy Analytic Network Process (FANP). The result of this research is a framework of priority weights for the SME assessment indicators.
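As a small, simplified fragment of the fuzzy weighting idea behind FANP (not the full supermatrix computation, and with invented judgements), triangular fuzzy ratings can be defuzzified and normalized into priority weights:

```python
import numpy as np

def defuzzify_centroid(l, m, u):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    return (l + m + u) / 3.0

# Invented fuzzy ratings of three assessment indicators against one criterion.
fuzzy_ratings = [(2, 3, 4), (1, 2, 3), (4, 5, 6)]
crisp = np.array([defuzzify_centroid(*t) for t in fuzzy_ratings])
weights = crisp / crisp.sum()    # normalized priority weights
print(np.round(weights, 3))
```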
Dodd, Andrew; Osterhoff, Georg; Guy, Pierre; Lefaivre, Kelly A
2016-06-01
To report methods of measurement of radiographic displacement and radiographic outcomes in acetabular fractures described in the literature. A systematic review of the English literature was performed using EMBASE and Medline in August 2014. Inclusion criteria were studies of operatively treated acetabular fractures in adults with acute (<6 weeks) open reduction and internal fixation that reported radiographic outcomes. Exclusion criteria included case series with <10 patients, fractures managed >6 weeks from injury, acute total hip arthroplasty, periprosthetic fractures, time frame of radiographic outcomes not stated, missing radiographic outcome data, and non-English language articles. Basic information collected included journal, author, year published, number of fractures, and fracture types. Specific data collected included radiographic outcome data, method of measuring radiographic displacement, and methods of interpreting or categorizing radiographic outcomes. The number of reproducible radiographic measurement techniques (2/64) and previously described radiographic interpretation methods (4) were recorded. One radiographic reduction grading criterion (Matta) was used nearly universally in articles that used previously described criteria. Overall, 70% of articles using this criterion documented anatomic reductions. The current standard of measuring radiographic displacement in publications dealing with acetabulum fractures almost universally lacks basic description, making further scientific rigor, such as testing reproducibility, impossible. Further work is necessary to standardize radiographic measurement techniques, test their reproducibility, and qualify their validity or determine which measurements are important to clinical outcomes. Diagnostic Level IV. See Instructions for Authors for a complete description of levels of evidence.
Comparative analysis on the selection of number of clusters in community detection
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro; Kabashima, Yoshiyuki
2018-02-01
We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing of all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit or underfit become apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
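The fragment below illustrates just one of the assessment criteria listed above (modularity), using networkx on a stand-in graph: candidate partitions with increasing numbers of clusters are generated and the partition maximizing modularity is kept. As the abstract notes, such criteria can over- or underfit, so this is an illustration rather than a recommendation.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                                    # stand-in network
candidates = itertools.islice(community.girvan_newman(G), 8)  # partitions with 2..9 clusters
best = max(candidates, key=lambda part: community.modularity(G, part))
print(len(best), "clusters, modularity =", round(community.modularity(G, best), 3))
```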
Pardo, Scott; Simmons, David A
2016-09-01
The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%. © 2016 Diabetes Technology Society.
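For orientation, the two measures being related can be sketched as follows on invented paired readings; the ±15 mg/dL and ±15% limits reflect the ISO 15197:2013 system-accuracy bounds for references below and above 100 mg/dL (the full standard also imposes a 95% pass requirement and a consensus-error-grid condition not shown here).

```python
import numpy as np

meter = np.array([102.0, 145.0, 63.0, 250.0, 90.0])       # meter readings, mg/dL
reference = np.array([100.0, 150.0, 70.0, 240.0, 95.0])   # laboratory reference

mard = np.mean(np.abs(meter - reference) / reference) * 100          # percent
within_iso = np.where(reference < 100,
                      np.abs(meter - reference) <= 15,               # +/-15 mg/dL
                      np.abs(meter - reference) / reference <= 0.15) # +/-15 %
print(f"MARD = {mard:.1f}%, within ISO limits: {within_iso.mean():.0%}")
```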
49 CFR 240.127 - Criteria for examining skill performance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Criteria for examining skill performance. 240.127... Elements of the Certification Process § 240.127 Criteria for examining skill performance. (a) Each railroad... have procedures for examining the performance skills of a person being evaluated for qualification as a...
49 CFR 240.127 - Criteria for examining skill performance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Criteria for examining skill performance. 240.127... Elements of the Certification Process § 240.127 Criteria for examining skill performance. (a) Each railroad... have procedures for examining the performance skills of a person being evaluated for qualification as a...
Billeci, Lucia; Varanini, Maurizio
2017-01-01
The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest in monitoring fetal health. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and of a quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two different criteria independently—the ICA-based and the QIO-based methods—which were previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB). Moreover, the performance of the algorithm was tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%, QIO-based: 97.77%, ICA-based: 97.61%). Significant differences were obtained in particular in the conditions when uterine contractions and maternal and fetal ectopic beats occurred. On the real data, all three methods obtained very high performances, with the QIO-based method proving slightly better than the other two (ICAQIO-based: 99.38%, QIO-based: 99.76%, ICA-based: 99.37%). The findings from this study suggest that the proposed method could potentially be applied as a novel algorithm for accurate extraction of fECG, especially in critical recording conditions. PMID:28509860
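A generic sketch of the independent-source-separation stage that the ICA-based criterion relies on is given below, using scikit-learn's FastICA on simulated mixtures; the quality-index optimization and the exact channel handling used by the authors are not reproduced, and all signal parameters are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.002)                            # 10 s at 500 Hz
maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))        # ~72 bpm square-ish source
fetal = 0.2 * np.sign(np.sin(2 * np.pi * 2.3 * t))     # ~138 bpm, weaker source
noise = 0.05 * rng.standard_normal(len(t))
sources = np.c_[maternal, fetal, noise]

mixing = rng.standard_normal((8, 3))                   # 8 simulated abdominal channels
recordings = sources @ mixing.T

ica = FastICA(n_components=3, random_state=0)
estimated_sources = ica.fit_transform(recordings)
# A quality index (e.g., regularity of detected QRS intervals) would then be used
# to select the component most likely to contain the fetal ECG.
print(estimated_sources.shape)
```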
Uebbing, Lukas; Klumpp, Lukas; Webster, Gregory K; Löbenberg, Raimar
2017-01-01
Drug product performance testing is an important part of quality-by-design approaches, but this process often lacks an underlying mechanistic understanding of the complex interactions between the disintegration and dissolution processes involved. Whereas a recent draft guideline by the US Food and Drug Administration (FDA) has allowed the replacement of dissolution testing with disintegration testing, the criteria mentioned are not globally accepted. This study provides scientific justification for using disintegration testing rather than dissolution testing as a quality control method for certain immediate release (IR) formulations. A mechanistic approach, which goes beyond the current FDA criteria, is presented. Dissolution testing via United States Pharmacopeial Convention Apparatus II at various paddle speeds was performed for immediate and extended release formulations of metronidazole. Dissolution profile fitting via DDSolver and dissolution profile predictions via DDDPlus™ were performed. The results showed that Fickian diffusion and drug particle properties (DPP) were responsible for the dissolution of the IR tablets, and that formulation factors (e.g., coning) impacted dissolution only at lower rotation speeds. Dissolution was completely formulation controlled when extended release tablets were tested, and DPP were not important. To demonstrate that disintegration is the most important dosage form attribute when dissolution is DPP controlled, disintegration, intrinsic dissolution and dissolution testing were performed in conventional and disintegration-impacting media (DIM). Tablet disintegration was affected by DIM, and model fitting to the Korsmeyer–Peppas equation showed a growing effect of the formulation in DIM. DDDPlus was able to predict tablet dissolution and the intrinsic dissolution profiles in conventional media and DIM. The study showed that disintegration has to occur before DPP-dependent dissolution can happen. The study suggests that disintegration can be used as a performance test for rapidly disintegrating tablets beyond the FDA criteria. The scientific criterion and justification is that dissolution has to be DPP dependent, originating from active pharmaceutical ingredient characteristics, and formulation factors have to be negligible. PMID:28442890
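A minimal sketch of the Korsmeyer–Peppas fit mentioned above, using SciPy on an invented dissolution profile: the release model F(t) = k·t^n is standard, but the data, the initial guesses, and the interpretation of the exponent below are assumptions, and the fit is normally restricted to the early part of the release curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)          # fraction released at time t

t_min = np.array([5, 10, 15, 20, 30, 45], dtype=float)          # minutes
released = np.array([0.22, 0.38, 0.49, 0.58, 0.72, 0.85])       # fraction dissolved

(k, n), _ = curve_fit(korsmeyer_peppas, t_min, released, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")      # the exponent n is conventionally used to
                                        # classify the release mechanism (Fickian vs. anomalous)
```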
Poitevin, Eric
2012-01-01
A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-optical emission spectrometry in order to modernize AOAC Official Method 984.27. The improvements involved extension of the scope to all food matrixes (including infant formula), optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed through wet digestion of food samples by microwave technology with either closed- or open-vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method was proven through a successful RT using experienced independent food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD, and HorRat values) regarding SLVs and RTs. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to ISO 17025 Norm and AOAC acceptability criteria, and is proposed as an extended updated version of AOAC Official Method 984.27 for fortified food products, including infant formula.
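One of the precision checks cited (the HorRat value) can be computed as sketched below: the observed reproducibility RSD is divided by the RSD predicted by the Horwitz equation, PRSD_R(%) = 2·C^(-0.1505), with C the analyte concentration expressed as a mass fraction. The example concentration and RSD are invented.

```python
def horrat(observed_rsd_percent, concentration_mass_fraction):
    predicted_rsd = 2.0 * concentration_mass_fraction ** (-0.1505)
    return observed_rsd_percent / predicted_rsd

# e.g., an element at 10 mg/kg (mass fraction 1e-5) with an observed RSD_R of 8 %
print(round(horrat(8.0, 1e-5), 2))   # values of roughly 0.5-2 are usually deemed acceptable
```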
Dobashi, Akira; Goda, Kenichi; Yoshimura, Noboru; Ohya, Tomohiko R; Kato, Masayuki; Sumiyama, Kazuki; Matsushima, Masato; Hirooka, Shinichi; Ikegami, Masahiro; Tajiri, Hisao
2016-01-01
AIM To simplify the diagnostic criteria for superficial esophageal squamous cell carcinoma (SESCC) on Narrow Band Imaging combined with magnifying endoscopy (NBI-ME). METHODS This study was based on the post-hoc analysis of a randomized controlled trial. We performed NBI-ME for 147 patients with present or a history of squamous cell carcinoma in the head and neck, or esophagus between January 2009 and June 2011. Two expert endoscopists detected 89 lesions that were suspicious for SESCC lesions, which had been prospectively evaluated for the following 6 NBI-ME findings in real time: “intervascular background coloration”; “proliferation of intrapapillary capillary loops (IPCL)”; and “dilation”, “tortuosity”, “change in caliber”, and “various shapes (VS)” of IPCLs (i.e., Inoue’s tetrad criteria). The histologic examination of specimens was defined as the gold standard for diagnosis. A stepwise logistic regression analysis was used to identify candidates for the simplified criteria from among the 6 NBI-ME findings for diagnosing SESCCs. We evaluated diagnostic performance of the simplified criteria compared with that of Inoue’s criteria. RESULTS Fifty-four lesions (65%) were histologically diagnosed as SESCCs and the others as low-grade intraepithelial neoplasia or inflammation. In the univariate analysis, proliferation, tortuosity, change in caliber, and VS were significantly associated with SESCC (P < 0.01). The combination of VS and proliferation was statistically extracted from the 6 NBI-ME findings by using the stepwise logistic regression model. We defined the combination of VS and proliferation as simplified dyad criteria for SESCC. The areas under the curve of the simplified dyad criteria and Inoue’s tetrad criteria were 0.70 and 0.73, respectively. No significant difference was shown between them. The sensitivity, specificity, and accuracy of diagnosis for SESCC were 77.8%, 57.1%, 69.7% and 51.9%, 80.0%, 62.9% for the simplified dyad criteria and Inoue’s tetrad criteria, respectively. CONCLUSION The combination of proliferation and VS may serve as simplified criteria for the diagnosis of SESCC using NBI-ME. PMID:27895406
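As a quick check of the reported figures for the simplified dyad criteria (54 SESCC and 35 non-SESCC lesions), the counts below are back-calculated from the stated sensitivity and specificity and reproduce the published accuracy:

```python
tp, fn = 42, 12       # 42/54 = 77.8 % sensitivity
tn, fp = 20, 15       # 20/35 = 57.1 % specificity

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, accuracy {accuracy:.1%}")
```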
Stebler, N; Schuepbach-Regula, G; Braam, P; Falzon, L C
2015-09-01
Zoonotic diseases have a significant impact on public health globally. To prevent or reduce future zoonotic outbreaks, there is a constant need to invest in research and surveillance programs while updating risk management strategies. However, given the limited resources available, disease prioritization based on the need for their control and surveillance is important. This study was performed to identify and weight disease criteria for the prioritization of zoonotic diseases in Switzerland using a semi-quantitative research method based on expert opinion. Twenty-eight criteria relevant for disease control and surveillance, classified under five domains, were selected following a thorough literature review, and these were evaluated and weighted by seven experts from the Swiss Federal Veterinary Office using a modified Delphi panel. The median scores assigned to each criterion were then used to rank 16 notifiable and/or emerging zoonoses in Switzerland. The experts weighted the majority of the criteria similarly, and the top three criteria were severity of disease in humans, incidence and prevalence of the disease in humans, and treatment in humans. Based on these weightings, the three highest-ranked diseases were Avian Influenza, Bovine Spongiform Encephalopathy, and Bovine Tuberculosis. Overall, this study provided a preliminary list of criteria relevant for disease prioritization in Switzerland. These were further evaluated in a companion study which involved a quantitative prioritization method and multiple stakeholders. Copyright © 2015 Elsevier B.V. All rights reserved.
Biodegradation of oil refinery wastes under OPA and CERCLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, W.W.; Banipal, B.S.; Myers, J.M.
1995-12-31
Land treatment of oil refinery wastes has been used as a disposal method for decades. More recently, numerous laboratory studies have been performed attempting to quantify degradation rates of the more toxic polycyclic aromatic hydrocarbon compounds (PAHs). This paper discusses the results of full-scale aerobic biodegradation operations using land treatment at the Macmillan Ring-Free Oil refining facility. The tiered feasibility approach of evaluating biodegradation as a treatment method to achieve site-specific cleanup criteria, including pilot biodegradation operations, is discussed in an earlier paper. Analytical results of the biodegradation indicate that degradation rates observed in the laboratory can be met and exceeded under field conditions and that site-specific cleanup criteria can be attained within the proposed project time. Also presented are degradation rates and half-lives for PAHs for which cleanup criteria have been established. PAH degradation rates and half-life values are determined and compared with laboratory degradation rates and half-life values obtained with similar oil refinery wastes by other investigators (API 1987).
Performance of a Small Gas Generator Using Liquid Hydrogen and Liquid Oxygen
NASA Technical Reports Server (NTRS)
Acker, Loren W.; Fenn, David B.; Dietrich, Marshall W.
1961-01-01
The performance and operating problems of a small hot-gas generator burning liquid hydrogen with liquid oxygen are presented. Two methods of ignition are discussed. Injector and combustion chamber design details based on rocket design criteria are also given. A carefully fabricated showerhead injector of simple design provided a gas generator that yielded combustion efficiencies of 93 and 96 percent.
Software Cost Measuring and Reporting. One of the Software Acquisition Engineering Guidebook Series.
1979-01-02
through the peripherals. However, his interaction is usually minimal since, by definition, the automatic test performs its intended functions properly...and performance criteria). Since TS...Software estimating is still heavily dependent on experienced judgement. However, quantitative methods...apply to systems of totally different content. The Quantitative guideline may...can be distributed to specialists who are most familiar with the work. One
A semi-learning algorithm for noise rejection: an fNIRS study on ADHD children
NASA Astrophysics Data System (ADS)
Sutoko, Stephanie; Funane, Tsukasa; Katura, Takusige; Sato, Hiroki; Kiguchi, Masashi; Maki, Atsushi; Monden, Yukifumi; Nagashima, Masako; Yamagata, Takanori; Dan, Ippeita
2017-02-01
In pediatric studies, the quality of functional near-infrared spectroscopy (fNIRS) signals is often reduced by motion artifacts. These artifacts can mislead analyses of brain function and cause false discoveries. While noise correction methods and their performance have been investigated, these methods require several parameter assumptions that apparently result in noise overfitting. In contrast, the rejection of noisy signals serves as a preferable method because it maintains the originality of the signal waveform. Here, we describe a semi-learning algorithm to detect and eliminate noisy signals. The algorithm dynamically adjusts noise detection according to predetermined noise criteria, which are spikes, unusual activation values (averaged signal amplitudes within the brain activation period), and high activation variances (among trials). The criteria were organized sequentially in the algorithm, and signals were assessed against each criterion in order. By initially setting an acceptable rejection rate, particular criteria causing excessive data rejection are neglected, whereas others with tolerable rejection rates practically eliminate the noise. fNIRS data measured during the attention response paradigm (oddball task) in children with attention deficit/hyperactivity disorder (ADHD) were utilized to evaluate and optimize the algorithm's performance. This algorithm successfully substituted for the visual noise identification performed in previous studies and consistently found significantly lower activation of the right prefrontal and parietal cortices in ADHD patients than in typically developing children. Thus, we conclude that the semi-learning algorithm confers more objective and standardized judgment for noise rejection and presents a promising alternative to visual noise rejection.
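A schematic rendering of the sequential logic described above is given below: each criterion is applied in order, and a criterion that would exceed the acceptable rejection rate is neglected. The thresholds and test functions are placeholders, not the study's actual parameters.

```python
import numpy as np

def reject_noisy(signals, criteria, acceptable_rate=0.2):
    """Return a boolean mask of signals to keep after sequential screening."""
    keep = np.ones(len(signals), dtype=bool)
    for criterion in criteria:                       # ordered noise criteria
        flags = np.array([criterion(s) for s in signals])
        if flags[keep].mean() <= acceptable_rate:    # tolerable rejection: apply it
            keep &= ~flags
        # otherwise the criterion is considered too aggressive and is skipped
    return keep

def has_spike(signal):
    return np.max(np.abs(np.diff(signal))) > 0.5     # placeholder spike test

def high_variance(signal):
    return np.var(signal) > 0.1                      # placeholder variance test

signals = [np.random.default_rng(i).normal(0, 0.1, 200) for i in range(20)]
print(int(reject_noisy(signals, [has_spike, high_variance]).sum()), "signals kept")
```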
THE DERIVATION, ANALYSIS, AND CLASSIFICATION OF INSTRUCTIONAL OBJECTIVES.
ERIC Educational Resources Information Center
AMMERMAN, HARRY L.; MELCHING, WILLIAM H.
This report examines the methods, terms, and criteria associated with the determination of student performance objectives. Selected educational and training research literature was reviewed to identify procedures currently used in determining instructional objectives. A survey of eight Army service schools was conducted to determine procedures…
The Impact of Granule Density on Tabletting and Pharmaceutical Product Performance.
van den Ban, Sander; Goodwin, Daniel J
2017-05-01
The impact of granule densification in high-shear wet granulation on tabletting and product performance was investigated at pharmaceutical production scale. Product performance criteria need to be balanced with the need to deliver manufacturability criteria to assure robust industrial-scale tablet manufacturing processes. A Quality by Design approach was used to determine in-process control specifications for tabletting, propose a design space for disintegration and dissolution, and understand the permitted operating limits and required controls for an industrial tabletting process. Granules of varying density (filling density) were made by varying the amount of water added, the spray rate, and the wet massing time in a design-of-experiments (DoE) approach. Granules were compressed into tablets over a range of thicknesses to obtain tablets of varying breaking force. Disintegration and dissolution performance was evaluated for the tablets made. The impact of granule filling density on tabletting was rationalised in terms of compressibility, tabletability, and compactability. Tabletting and product performance criteria provided competing requirements for porosity. An increase in granule filling density impacted tabletability and compactability and limited the ability to achieve tablets of adequate mechanical strength. An increase in tablet solid fraction (decreased porosity) impacted disintegration and dissolution. An attribute-based design space for disintegration and dissolution was specified to achieve both product performance and manufacturability. The method of granulation and the resulting granule filling density are key design considerations for achieving both the product performance and the manufacturability required for modern industrial-scale pharmaceutical product manufacture and distribution.
Investigation of the Multiple Method Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and is utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
Gerstl, Lucia; Schoppe, Nikola; Albers, Lucia; Ertl-Wagner, Birgit; Alperin, Noam; Ehrt, Oliver; Pomschar, Andreas; Landgraf, Mirjam N; Heinen, Florian
2017-11-01
Idiopathic intracranial hypertension (IIH) in children is a rare condition of unknown etiology and various clinical presentations. The primary aim of this study was to evaluate if our pediatric IIH study group fulfilled the revised diagnostic criteria for IIH published in 2013, particularly with regard to clinical presentation and threshold value of an elevated lumbar puncture opening pressure. Additionally we investigated the potential utilization of MR-based and fundoscopic methods of estimating intracranial pressure for improved diagnosis. Clinical data were collected retrospectively from twelve pediatric patients diagnosed with IIH between 2008 and 2012 and revised diagnostic criteria were applied. Comparison with non-invasive methods for measuring intracranial pressure, MRI-based measurement (MR-ICP) and venous ophthalmodynamometry was performed. Only four of the twelve children (33%) fulfilled the revised diagnostic criteria for a definite diagnosis of IIH. Regarding noninvasive methods, MR-ICP (n = 6) showed a significantly higher mean of intracranial pressure compared to a healthy age- and sex-matched control group (p = 0.0043). Venous ophthalmodynamometry (n = 4) showed comparable results to invasive lumbar puncture. The revised diagnostic criteria for IIH may be too strict especially in children without papilledema. MR-ICP and venous ophthalmodynamometry are promising complementary procedures for monitoring disease progression and response to treatment. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
Human Health Water Quality Criteria and Methods for Toxics
Documents pertaining to Human Health Water Quality Criteria and Methods for Toxics. Includes the 2015 Update for Water Quality Criteria, the 2002 National Recommended Human Health Criteria, and the 2000 EPA Methodology.
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles
2017-08-01
Most previous assessments of hydrologic model performance are fragmented, based on small numbers of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees and investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
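The flashiness measure defined in the abstract (the ratio of absolute day-to-day fluctuations to the annual total) can be computed directly, as sketched below on an invented daily series:

```python
import numpy as np

def flashiness(daily_values):
    daily_values = np.asarray(daily_values, dtype=float)
    return np.abs(np.diff(daily_values)).sum() / daily_values.sum()

daily_streamflow = np.array([1.2, 1.1, 3.5, 2.0, 1.4, 1.3, 6.0, 2.5])  # stand-in m3/s
print(round(flashiness(daily_streamflow), 3))
```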
24 CFR 214.303 - Performance criteria.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Performance criteria. 214.303 Section 214.303 Housing and Urban Development Regulations Relating to Housing and Urban Development... HOUSING COUNSELING PROGRAM Program Administration § 214.303 Performance criteria. To maintain HUD-approved...
24 CFR 214.303 - Performance criteria.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Performance criteria. 214.303 Section 214.303 Housing and Urban Development Regulations Relating to Housing and Urban Development... HOUSING COUNSELING PROGRAM Program Administration § 214.303 Performance criteria. To maintain HUD-approved...
78 FR 7820 - Notice of Intelligent Mail Indicia Performance Criteria
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... FURTHER INFORMATION CONTACT: Marlo Kay Ivey, Business Programs Specialist, Payment Technology, U.S. Postal... Performance Criteria and Security Architecture for Open Information Based Indicia (IBI) Postage Evidencing Systems and the Performance Criteria and Security Architecture for Closed Information Based Indicia (IBI...
Code of Federal Regulations, 2010 CFR
2010-10-01
... September 9, 2002, shall meet the test performance criteria for flammability and smoke emission..., refurbishment, or overhaul of the car or cab, shall meet the test performance criteria for flammability and... of tests of material conducted in accordance with the standards and performance criteria for...
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
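For readers unfamiliar with Johnson's criteria, the probability model commonly paired with them is the target transfer probability function sketched below; the N50 cycle counts are the classic nominal values and serve here only to illustrate the heuristic the paper reformulates, not the paper's own quantification of information content.

```python
def ttpf(n_cycles, n50):
    """Target transfer probability function: P = (N/N50)^E / (1 + (N/N50)^E),
    with E = 2.7 + 0.7 * (N/N50)."""
    ratio = n_cycles / n50
    exponent = 2.7 + 0.7 * ratio
    return ratio ** exponent / (1.0 + ratio ** exponent)

n50_cycles = {"detection": 1.0, "recognition": 4.0, "identification": 6.4}
for task, n50 in n50_cycles.items():
    print(task, round(ttpf(8.0, n50), 2))    # probability with 8 resolvable cycles
```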
Lenters-Westra, Erna; English, Emma
2017-08-28
For a reference laboratory for HbA1c, it is essential to have accurate and precise HbA1c methods covering a range of measurement principles. We report an evaluation of the Abbott Enzymatic (Architect c4000), Roche Gen.3 HbA1c (Cobas c513) and Tosoh G11 methods using different quality targets. The effects of hemoglobin variants and other potential interferences, as well as the performance in comparison to both the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and the National Glycohemoglobin Standardization Program (NGSP) reference systems, were assessed using certified evaluation protocols. Each of the evaluated HbA1c methods had CVs <3% in SI units and <2% in NGSP units at 46 mmol/mol (6.4%) and 72 mmol/mol (8.7%) and passed the NGSP criteria when compared with six secondary reference measurement procedures (SRMPs). Sigma was 8.6 for the Abbott Enzymatic method, 3.3 for the Roche Cobas c513 and 6.9 for the Tosoh G11. No clinically significant interference was detected for the common Hb variants for any of the three methods. All three methods performed well and are suitable for clinical application in the analysis of HbA1c. Partly based on the results of this study, the Abbott Enzymatic method on the Architect c4000 and the Roche Gen.3 HbA1c on the Cobas c513 are now official, certified IFCC and NGSP SRMPs in the IFCC and NGSP networks. Sigma metrics quality criteria, presented in a graph, distinguish between good and excellent performance.
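The sigma values quoted above are conventionally computed as sigma = (TEa − |bias|) / CV, with all terms in percent; the allowable-total-error and bias figures in the snippet below are placeholders, not the study's.

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    return (tea_percent - abs(bias_percent)) / cv_percent

# e.g., an allowable total error of 6 %, a bias of 1 % and a CV of 0.7 %
print(round(sigma_metric(6.0, 1.0, 0.7), 1))   # ~7 sigma
```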
Web-based application on employee performance assessment using exponential comparison method
NASA Astrophysics Data System (ADS)
Maryana, S.; Kurnia, E.; Ruyani, A.
2017-02-01
Employee performance assessment, also called a performance review or performance evaluation, is an effort to assess employees' achievements with the aim of increasing the productivity of employees and companies. This application supports the assessment of employee performance using five criteria: Presence, Quality of Work, Quantity of Work, Discipline, and Teamwork. The system uses the Exponential Comparison Method with Eckenrode weighting. Calculation results are presented as graphs showing the assessment of each employee. The system was written using Notepad++ with a MySQL database. Testing showed that the application corresponds with the design and runs properly; the tests conducted were structural testing, functional testing, validation, sensitivity analysis, and SUMI testing.
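The scoring rule of the Exponential Comparison Method, as it is commonly stated, sums each criterion rating raised to the power of the criterion weight; the ratings and weights below are invented, and the Eckenrode weighting step that produces the weights is not shown.

```python
criteria_weights = {"presence": 3, "quality_of_work": 4, "quantity_of_work": 3,
                    "discipline": 2, "teamwork": 2}

employees = {
    "A": {"presence": 4, "quality_of_work": 3, "quantity_of_work": 4,
          "discipline": 5, "teamwork": 3},
    "B": {"presence": 3, "quality_of_work": 5, "quantity_of_work": 3,
          "discipline": 4, "teamwork": 4},
}

def total_value(ratings):
    """Exponential Comparison Method score: sum of rating ** weight over criteria."""
    return sum(ratings[c] ** w for c, w in criteria_weights.items())

for name in sorted(employees, key=lambda n: total_value(employees[n]), reverse=True):
    print(name, total_value(employees[name]))
```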
The generation of monoclonal antibodies and their use in rapid diagnostic tests
USDA-ARS?s Scientific Manuscript database
Antibodies are the most important component of an immunoassay. In these proceedings we outline novel methods used to generate and select monoclonal antibodies that meet performance criteria for use in rapid lateral flow and microfluidic immunoassay tests for the detection of agricultural pathogens ...
E-Commerce New Venture Performance: How Funding Impacts Culture.
ERIC Educational Resources Information Center
Hamilton, R. H.
2001-01-01
Explores the three primary methods of funding for e-commerce startups and the impact that funding criteria have had on the resulting organizational cultures. Highlights include self-funded firms; venture capital funding; corporate funding; and a table that compares the three types, including examples. (LRW)
49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness
Code of Federal Regulations, 2011 CFR
2011-10-01
... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...
49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness
Code of Federal Regulations, 2013 CFR
2013-10-01
... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...
49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness
Code of Federal Regulations, 2012 CFR
2012-10-01
... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...
49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness
Code of Federal Regulations, 2014 CFR
2014-10-01
... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...
30 CFR 7.44 - Technical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nonmetallic materials shall meet the acceptable performance criteria for the impact test in § 7.46... material under part 18 of this chapter; and (ii) Meet the acceptable performance criteria for the...) Battery box and cover insulating material shall meet the acceptable performance criteria for the acid...
30 CFR 7.44 - Technical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nonmetallic materials shall meet the acceptable performance criteria for the impact test in § 7.46... material under part 18 of this chapter; and (ii) Meet the acceptable performance criteria for the...) Battery box and cover insulating material shall meet the acceptable performance criteria for the acid...
Bottai, Matteo; Tjärnlund, Anna; Santoni, Giola; Werth, Victoria P; Pilkington, Clarissa; de Visser, Marianne; Alfredsson, Lars; Amato, Anthony A; Barohn, Richard J; Liang, Matthew H; Aggarwal, Rohit; Arnardottir, Snjolaug; Chinoy, Hector; Cooper, Robert G; Danko, Katalin; Dimachkie, Mazen M; Feldman, Brian M; García-De La Torre, Ignacio; Gordon, Patrick; Hayashi, Taichi; Katz, James D; Kohsaka, Hitoshi; Lachenbruch, Peter A; Lang, Bianca A; Li, Yuhui; Oddis, Chester V; Olesinka, Marzena; Reed, Ann M; Rutkowska-Sak, Lidia; Sanner, Helga; Selva-O’Callaghan, Albert; Wook Song, Yeong; Ytterberg, Steven R; Miller, Frederick W; Rider, Lisa G; Lundberg, Ingrid E; Amoruso, Maria
2017-01-01
Objective To describe the methodology used to develop new classification criteria for adult and juvenile idiopathic inflammatory myopathies (IIMs) and their major subgroups. Methods An international, multidisciplinary group of myositis experts produced a set of 93 potentially relevant variables to be tested for inclusion in the criteria. Rheumatology, dermatology, neurology and paediatric clinics worldwide collected data on 976 IIM cases (74% adults, 26% children) and 624 non-IIM comparator cases with mimicking conditions (82% adults, 18% children). The participating clinicians classified each case as IIM or non-IIM. Generally, the classification of any given patient was based on a few variables, leaving the remaining variables unmeasured. We investigated the strength of the association between all variables and between these and the disease status as determined by the physician. We considered three approaches: (1) a probability-score approach, (2) a sum-of-items approach and (3) a classification-tree approach. Results The approaches yielded several candidate models that were scrutinised with respect to statistical performance and clinical relevance. The probability-score approach showed superior statistical performance and clinical practicability and was therefore preferred over the others. We developed a classification tree for subclassification of patients with IIM. A calculator for electronic devices, such as computers and smartphones, facilitates the use of the European League Against Rheumatism/American College of Rheumatology (EULAR/ACR) classification criteria. Conclusions The new EULAR/ACR classification criteria provide a patient’s probability of having IIM for use in clinical and research settings. The probability is based on a score obtained by summing the weights associated with a set of criteria items. PMID:29177080
Weiss, H R
2012-01-01
There is wide variation in the inclusion criteria found in studies investigating the outcome of conservative scoliosis treatment. While the application of the SRS criteria for studies on bracing seems useful, there are no inclusion criteria for the investigation of physiotherapy alone. This study was performed to investigate whether useful inclusion criteria can be defined for future prospective studies on physiotherapy (PT). A PubMed and (incomplete) hand search for outcome papers on PT was performed in order to identify the study designs and inclusion criteria used. Real outcome papers (start of treatment in immature samples / end results after the end of growth) were not found. Some papers investigated mid-term effects of exercises; most were retrospective, few prospective, and many included patient samples with questionable treatment indications. No paper was found in which patients at risk of progression were followed from premenarchial status until skeletal maturity under physiotherapy treatment alone. Claims that physiotherapy should be regarded as an evidence-based method of treatment are therefore not justified scientifically. An agreement of the scientific community on common inclusion criteria for future studies on PT is necessary. We would suggest the following: (1) girls only, (2) age 10 to 13 with the first signs of maturation (Tanner II), (3) Risser 0-2, (4) risk for progression 40-60% according to Lonstein and Carlson. There is no outcome paper on PT in scoliosis with a patient sample at risk of progression followed from premenarchial status until skeletal maturity. Therefore, only bracing can be regarded as evidence based in the management of scoliosis patients during growth.
Screening for increased cardiometabolic risk in primary care: a systematic review
den Engelsen, Corine; Koekkoek, Paula S; Godefrooij, Merijn B; Spigt, Mark G; Rutten, Guy E
2014-01-01
Background Many programmes to detect and prevent cardiovascular disease (CVD) have been performed, but the optimal strategy is not yet clear. Aim To present a systematic review of cardiometabolic screening programmes performed among apparently healthy people (not yet known to have CVD, diabetes, or cardiometabolic risk factors) and mixed populations (apparently healthy people and people diagnosed with risk factor or disease) to define the optimal screening strategy. Design and setting Systematic review of studies performed in primary care in Western countries. Method MEDLINE, Embase, and CINAHL databases were searched for studies screening for increased cardiometabolic risk. Exclusion criteria were studies designed to assess prevalence of risk factors without follow-up or treatment; without involving a GP; when fewer than two risk factors were considered as the primary outcome; and studies constrained to ethnic minorities. Results The search strategy yielded 11 445 hits; 26 met the inclusion criteria. Five studies (1995–2012) were conducted in apparently healthy populations: three used a stepwise method. Response rates varied from 24% to 79%. Twenty-one studies (1967–2012) were performed in mixed populations; one used a stepwise method. Response rates varied from 50% to 75%. Prevalence rates could not be compared because of heterogeneity of used thresholds and eligible populations. Observed time trends were a shift from mixed to apparently healthy populations, increasing use of risk scores, and increasing use of stepwise screening methods. Conclusion The optimal screening strategy in primary care is likely stepwise, in apparently healthy people, with the use of risk scores. Increasing public awareness and actively involving GPs might facilitate screening efficiency and uptake. PMID:25267047
NASA Technical Reports Server (NTRS)
Repa, B. S.; Zucker, R. S.; Wierwille, W. W.
1977-01-01
The influence of vehicle transient response characteristics on driver-vehicle performance in discrete maneuvers as measured by integral performance criteria was investigated. A group of eight ordinary drivers was presented with a series of eight vehicle transfer function configurations in a driving simulator. Performance in two discrete maneuvers was analyzed by means of integral performance criteria. Results are presented.
De Francesco, Davide; Leech, Robert; Sabin, Caroline A.; Winston, Alan
2018-01-01
Objective The reported prevalence of cognitive impairment remains similar to that reported in the pre-antiretroviral therapy era. This may be partially artefactual due to the methods used to diagnose impairment. In this study, we evaluated the diagnostic performance of the HIV-associated neurocognitive disorder (Frascati criteria) and global deficit score (GDS) methods in comparison to a new, multivariate method of diagnosis. Methods Using a simulated ‘normative’ dataset informed by real-world cognitive data from the observational Pharmacokinetic and Clinical Observations in PeoPle Over fiftY (POPPY) cohort study, we evaluated the apparent prevalence of cognitive impairment using the Frascati and GDS definitions, as well as a novel multivariate method based on the Mahalanobis distance. We then quantified the diagnostic properties (including positive and negative predictive values and accuracy) of each method, using bootstrapping with 10,000 replicates, with a separate ‘test’ dataset to which a pre-defined proportion of ‘impaired’ individuals had been added. Results The simulated normative dataset demonstrated that up to ~26% of a normative control population would be diagnosed with cognitive impairment with the Frascati criteria and ~20% with the GDS. In contrast, the multivariate Mahalanobis distance method identified impairment in ~5%. Using the test dataset, diagnostic accuracy [95% confidence intervals] and positive predictive value (PPV) was best for the multivariate method vs. Frascati and GDS (accuracy: 92.8% [90.3–95.2%] vs. 76.1% [72.1–80.0%] and 80.6% [76.6–84.5%] respectively; PPV: 61.2% [48.3–72.2%] vs. 29.4% [22.2–36.8%] and 33.9% [25.6–42.3%] respectively). Increasing the a priori false positive rate for the multivariate Mahalanobis distance method from 5% to 15% resulted in an increase in sensitivity from 77.4% (64.5–89.4%) to 92.2% (83.3–100%) at a cost of specificity from 94.5% (92.8–95.2%) to 85.0% (81.2–88.5%). Conclusion Our simulations suggest that the commonly used diagnostic criteria of HIV-associated cognitive impairment label a significant proportion of a normative reference population as cognitively impaired, which will likely lead to a substantial over-estimate of the true proportion in a study population, due to their lower than expected specificity. These findings have important implications for clinical research regarding cognitive health in people living with HIV. More accurate methods of diagnosis should be implemented, with multivariate techniques offering a promising solution. PMID:29641619
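A minimal sketch of the multivariate idea evaluated above: a test-battery profile is flagged as impaired when its Mahalanobis distance from the normative mean exceeds a chi-square cut-off chosen for a pre-defined false-positive rate. The normative data here are simulated standard-normal z-scores, not the POPPY data.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
normative = rng.standard_normal((500, 6))        # simulated z-scores on 6 tests
mu = normative.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normative, rowvar=False))

def impaired(profile, alpha=0.05):
    diff = profile - mu
    d_squared = diff @ cov_inv @ diff            # squared Mahalanobis distance
    return d_squared > chi2.ppf(1 - alpha, df=len(profile))

print(impaired(np.array([-2.5, -1.8, -2.2, -0.5, -1.9, -2.0])))
```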
NASA Astrophysics Data System (ADS)
Erfaisalsyah, M. H.; Mansur, A.; Khasanah, A. U.
2017-11-01
For a company engaged in the textile field, selecting the suppliers of raw materials for production is an important part of supply chain management that can affect the company's business processes. This study aims to identify the best suppliers of yarn raw material for PC. PKBI based on several criteria. An integration of the Analytical Hierarchy Process (AHP) and the Standardized Unitless Rating (SUR) is used to assess the performance of the suppliers: AHP provides the relative weight of each criterion, while SUR yields the performance score used to rank the suppliers. The resulting supplier ranking can be used to identify the strengths and weaknesses of each supplier with respect to the performance criteria, and the final result indicates which suppliers should improve their performance in order to build long-term cooperation with the company.
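The AHP weighting step described above can be illustrated with a small sketch. The 3x3 pairwise comparison matrix and the criterion names are hypothetical, not the paper's actual judgements; only the eigenvector weighting and Saaty's consistency check are standard AHP machinery.

```python
# A minimal AHP sketch, assuming a hypothetical 3x3 pairwise comparison matrix
# over supplier criteria (e.g., price, quality, delivery).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized criterion weights

# Consistency ratio (Saaty): CI / RI, with RI = 0.58 for a 3x3 matrix
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```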
NASA Astrophysics Data System (ADS)
Bijl, Piet
2016-10-01
When acquiring a new imaging system for which operational task performance is a critical factor for success, it is necessary to specify minimum acceptance requirements that must be met, using a sensor performance model and/or performance tests. Currently, a variety of models and tests of different origins exist (defense, security, road safety, optometry), and they all make different predictions. This study reviews a number of frequently used methods and shows the effects that small changes in procedure or threshold criteria can have on the outcome of a test. For example, a system may meet the acceptance requirements but not satisfy the needs of the operational task, or the choice of test may determine the rank order of candidate sensors. The goal of the paper is to make people aware of the pitfalls associated with the acquisition process, by i) illustrating potential tricks to have a system accepted that is actually not suited for the operational task, and ii) providing tips to avoid this unwanted situation.
47 CFR 76.611 - Cable television basic signal leakage performance criteria.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television basic signal leakage... television basic signal leakage performance criteria. (a) No cable television system shall commence or... one of the following cable television basic signal leakage performance criteria: (1) prior to carriage...
49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Performance Criteria for Locomotive Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix...
32 CFR 101.6 - Criteria for satisfactory performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 1 2010-07-01 2010-07-01 false Criteria for satisfactory performance. 101.6..., MILITARY AND CIVILIAN PARTICIPATION IN RESERVE TRAINING PROGRAMS § 101.6 Criteria for satisfactory...) Shall require members to: (1) Meet the standards of satisfactory performance of training duty set forth...
Cai, Yefeng; Wu, Ming; Yang, Jun
2014-02-01
This paper describes a method for focusing the reproduced sound in the bright zone without disturbing other people in the dark zone in personal audio systems. The proposed method combines the least-squares and acoustic contrast criteria. A constrained parameter is introduced to tune the balance between two performance indices, namely, the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the presented method. Results show that compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of response in the bright zone by sacrificing the level of acoustic contrast.
Multi-projector auto-calibration and placement optimization for non-planar surfaces
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong
2015-10-01
Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.
A Strategy to Identify Critical Appraisal Criteria for Primary Mixed-Method Studies
Sale, Joanna E. M.; Brazil, Kevin
2015-01-01
The practice of mixed-methods research has increased considerably over the last 10 years. While these studies have been criticized for violating quantitative and qualitative paradigmatic assumptions, the methodological quality of mixed-method studies has not been addressed. The purpose of this paper is to identify criteria to critically appraise the quality of mixed-method studies in the health literature. Criteria for critically appraising quantitative and qualitative studies were generated from a review of the literature. These criteria were organized according to a cross-paradigm framework. We recommend that these criteria be applied to a sample of mixed-method studies which are judged to be exemplary. With the consultation of critical appraisal experts and experienced qualitative, quantitative, and mixed-method researchers, further efforts are required to revise and prioritize the criteria according to importance. PMID:26526412
Eskandari, Mahnaz; Homaee, Mehdi; Mahmodi, Shahla
2012-08-01
Landfill site selection is a complicated multi-criteria land use planning problem that should convince all related stakeholders with different insights. This paper addresses an integrating approach for landfill siting based on conflicting opinions among environmental, economical, and socio-cultural experts. In order to reach an optimized siting decision, the issue was investigated from different viewpoints. In the first step, based on opinion sampling and questionnaire results from 35 experts familiar with local conditions, national environmental legislation, and international practices, 13 constraints and 15 factors were organized in a hierarchical structure. Factors were divided into three groups: environmental, economical, and socio-cultural. In the next step, the GIS-database was developed based on the designated criteria. In the third stage, the criteria standardization and criteria weighting were accomplished. The relative importance weights of criteria and subcriteria were estimated using the analytical hierarchy process and rank ordering methods, respectively, based on the different experts' opinions. Thereafter, using the simple additive weighting method, suitability maps for landfill siting in Marvdasht, Iran, were produced for the environmental, economical, and socio-cultural visions. The importance of each group of criteria in its own vision was assigned to be higher than that of the two other groups. In the fourth stage, the final suitability map was obtained by crossing the three resulting maps from the different visions and was reported in five suitability classes for landfill construction. This map indicated that almost 1224 ha of the study area can be considered the most suitable class for landfill siting considering all visions. In the last stage, a comprehensive field visit was performed to verify the selected site obtained from the proposed model. This field inspection confirmed the proposed integrating approach for landfill siting. Copyright © 2012 Elsevier Ltd. All rights reserved.
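The simple additive weighting step used to produce the suitability maps can be sketched in a few lines. The standardized criterion values and weights below are hypothetical, not the Marvdasht data; the point is only that suitability is a weighted linear combination of standardized layers.

```python
# A minimal simple-additive-weighting sketch for site suitability, assuming
# hypothetical standardized criterion layers scaled to 0-1.
import numpy as np

# rows = candidate cells, columns = standardized criteria (higher = more suitable)
criteria = np.array([[0.8, 0.4, 0.9],
                     [0.5, 0.9, 0.6],
                     [0.2, 0.7, 0.3]])
weights = np.array([0.5, 0.3, 0.2])   # e.g., from AHP / rank ordering; must sum to 1

suitability = criteria @ weights      # weighted linear combination per cell
print(np.round(suitability, 2))       # higher score = more suitable for siting
```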
Demirkaya, Erkan; Saglam, Celal; Turker, Turker; Koné-Paut, Isabelle; Woo, Pat; Doglio, Matteo; Amaryan, Gayane; Frenkel, Joost; Uziel, Yosef; Insalaco, Antonella; Cantarini, Luca; Hofer, Michael; Boiu, Sorina; Duzova, Ali; Modesto, Consuelo; Bryant, Annette; Rigante, Donato; Papadopoulou-Alataki, Efimia; Guillaume-Czitrom, Severine; Kuemmerle-Deschner, Jasmine; Neven, Bénédicte; Lachmann, Helen; Martini, Alberto; Ruperto, Nicolino; Gattorno, Marco; Ozen, Seza
2016-01-01
Our aims were to validate the pediatric diagnostic criteria in a large international registry and to compare them with the performance of previous criteria for the diagnosis of familial Mediterranean fever (FMF). Pediatric patients with FMF from the Eurofever registry were used for the validation of the existing criteria. The other periodic fevers served as controls: mevalonate kinase deficiency (MKD), tumor necrosis factor receptor-associated periodic syndrome (TRAPS), cryopyrin-associated periodic syndrome (CAPS), aphthous stomatitis, pharyngitis, adenitis syndrome (PFAPA), and undefined periodic fever from the same registry. The performances of Tel Hashomer, Livneh, and the Yalcinkaya-Ozen criteria were assessed. The FMF group included 339 patients. The control group consisted of 377 patients (53 TRAPS, 45 MKD, 32 CAPS, 160 PFAPA, 87 undefined periodic fevers). Patients with FMF were correctly diagnosed using the Yalcinkaya-Ozen criteria with a sensitivity rate of 87.4% and a specificity rate of 40.7%. On the other hand, Tel Hashomer and Livneh criteria displayed a sensitivity of 45.0 and 77.3%, respectively. Both of the latter criteria displayed a better specificity than the Yalcinkaya-Ozen criteria: 97.2 and 41.1% for the Tel Hashomer and Livneh criteria, respectively. The overall accuracy for the Yalcinkaya-Ozen criteria was 65 and 69.6% (using 2 and 3 criteria), respectively. Ethnicity and residence had no effect on the performance of the Yalcinkaya-Ozen criteria. The Yalcinkaya-Ozen criteria yielded a better sensitivity than the other criteria in this international cohort of patients and thus can be used as a tool for FMF diagnosis in pediatric patients from either the European or eastern Mediterranean region. However, the specificity was lower than the previously suggested adult criteria.
Music performance anxiety-part 2. a review of treatment options.
Brugués, Ariadna Ortiz
2011-09-01
Music performance anxiety (MPA) affects many individuals independent of age, gender, experience, and hours of practice. In order to prevent MPA from happening or to alleviate it when it occurs, a review of the literature about its prevention and treatment was done. Forty-four articles, meeting evidence-based medicine (EBM) criteria, were identified and analyzed. Performance repertoire should be chosen based on the musician's skill level, and it should be practiced to the point of automaticity. Because of this, the role of music teachers is essential in preventing MPA. Prevention is the most effective method against MPA. Several treatments (psychological as well as pharmacological) have been studied on subjects in order to determine the best treatment for MPA. Cognitive-behavioral therapy (CBT) seems to be the most effective, but further investigation is desired. Some musicians, in addition to CBT, also take beta-blockers; however, these drugs should only be prescribed occasionally after analyzing the situation and considering the contraindications and possible side effects. Despite these conclusions, more randomized studies with larger, homogeneous groups of subjects would be desirable (according to the EBM criteria), as well as support for the necessity of both MPA prevention and optimized methods of treatment when it does occur.
NASA Technical Reports Server (NTRS)
Koenig, John C.; Billitti, Joseph W.; Tallon, John M.
1980-01-01
Criteria are defined for auditing photovoltaic system applications and experiments. The purpose of the audit is twofold: to see whether the application is meeting its stated objectives and to measure the application's progress in terms of the National Photovoltaic Program's goals of performance, cost, reliability, safety, and socio-environmental acceptance. The information obtained from an audit will be used to assess the status of an application and to provide the Department of Energy with recommendations on the future conduct of the application. The aspects of a site audit necessary to produce a systematic method for gathering the qualitative and quantitative data needed to measure the success of an application are covered. A sequence of audit events and guidelines for obtaining the required information is presented.
Analyzing Population Genetics Data: A Comparison of the Software
USDA-ARS?s Scientific Manuscript database
Choosing a software program for analyzing population genetic data can be a challenge without prior knowledge of the methods used by each program. There are numerous web sites listing programs by type of data analyzed, type of analyses performed, or other criteria. Even with programs categorized in ...
40 CFR 300.420 - Remedial site evaluation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section is to describe the methods, procedures, and criteria the lead agency shall use to collect data, as...) Remedial preliminary assessment. (1) The lead agency shall perform a remedial PA on all sites in CERCLIS as... indicates that a removal action may be warranted, the lead agency shall initiate removal evaluation pursuant...
40 CFR 300.420 - Remedial site evaluation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section is to describe the methods, procedures, and criteria the lead agency shall use to collect data, as...) Remedial preliminary assessment. (1) The lead agency shall perform a remedial PA on all sites in CERCLIS as... indicates that a removal action may be warranted, the lead agency shall initiate removal evaluation pursuant...
Bond expectations for milled surfaces and typical tack coat materials used in Virginia.
DOT National Transportation Integrated Search
2009-01-01
The ultimate purpose of the program of research of which this study was a part is to identify a test method and acceptance criteria for bonding of HMA layers. In this study, three tasks were performed to help achieve that purpose: a laboratory compar...
DOT National Transportation Integrated Search
2015-01-01
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. : As design criteria transition from empirical to mechanistic-empirical, soil test methods and equip...
49 CFR 26.7 - What discriminatory actions are forbidden?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 26.7 Transportation Office of the Secretary of Transportation PARTICIPATION BY DISADVANTAGED... performance of any contract covered by this part on the basis of race, color, sex, or national origin. (b) In... criteria or methods of administration that have the effect of defeating or substantially impairing...
Spatial Map of Synthesized Criteria for the Redundancy Resolution of Human Arm Movements.
Li, Zhi; Milutinovic, Dejan; Rosen, Jacob
2015-11-01
The kinematic redundancy of the human arm enables the elbow position to rotate about the axis going through the shoulder and wrist, which results in infinite possible arm postures when the arm reaches to a target in a 3-D workspace. To infer the control strategy the human motor system uses to resolve redundancy in reaching movements, this paper compares five redundancy resolution criteria and evaluates their arm posture prediction performance using data on healthy human motion. Two synthesized criteria are developed to provide better real-time arm posture prediction than the five individual criteria. Of these two, the criterion synthesized using an exponential method predicts the arm posture more accurately than that using a least squares approach, and therefore is preferable for inferring the contributions of the individual criteria to motor control during reaching movements. As a methodology contribution, this paper proposes a framework to compare and evaluate redundancy resolution criteria for arm motion control. A cluster analysis which associates criterion contributions with regions of the workspace provides a guideline for designing a real-time motion control system applicable to upper-limb exoskeletons for stroke rehabilitation.
MHA admission criteria and program performance: do they predict career performance?
Porter, J; Galfano, V J
1987-01-01
The purpose of this study was to determine to what extent admission criteria predict graduate school and career performance. The study also analyzed which objective and subjective criteria served as the best predictors. MHA graduates of the University of Minnesota from 1974 to 1977 were surveyed to assess career performance. Student files served as the data base on admission criteria and program performance. Career performance was measured by four variables: total compensation, satisfaction, fiscal responsibility, and level of authority. High levels of MHA program performance were associated with women who had high undergraduate GPAs from highly selective undergraduate colleges, were undergraduate business majors, and participated in extracurricular activities. High levels of compensation were associated with relatively low undergraduate GPAs, high levels of participation in undergraduate extracurricular activities, and being single at admission to graduate school. Admission to MHA programs should be based upon both objective and subjective criteria. Emphasis should be placed upon the selection process for MHA students since admission criteria are shown to explain 30 percent of the variability in graduate program performance, and as much as 65 percent of the variance in level of position authority.
Hancerliogullari, Gulsah; Hancerliogullari, Kadir Oymen; Koksalmis, Emrah
2017-01-23
Determining the most suitable anesthesia method for circumcision surgery plays a fundamental role in pediatric surgery. This study aims to present pediatric surgeons' perspective on the relative importance of the criteria for selecting an anesthesia method for circumcision surgery by utilizing multi-criteria decision making methods. Fuzzy set theory offers a useful tool for transforming linguistic terms into numerical assessments. Since the evaluation of anesthesia methods requires linguistic terms, we utilize the fuzzy Analytic Hierarchy Process (AHP) and fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Both mathematical decision-making methods derive individual judgements for qualitative factors from a pair-wise comparison matrix. Our model uses four main criteria, eight sub-criteria, and three alternatives. To assess the relative priorities, an online questionnaire was completed by three experts, pediatric surgeons, who had experience with circumcision surgery. Discussion of the results with the experts indicates that time-related factors are the most important criteria, followed by psychology, convenience, and duration. Moreover, general anesthesia with penile block for circumcision surgery is the preferred choice of anesthesia compared to general anesthesia without penile block, which has a greater priority compared to local anesthesia under the discussed main criteria and sub-criteria. The results presented in this study highlight the need to integrate surgeons' criteria into the decision making process for selecting anesthesia methods. This is the first study in which multi-criteria decision making tools, specifically fuzzy AHP and fuzzy TOPSIS, are used to evaluate anesthesia methods for a pediatric surgical procedure.
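Because the paper uses the fuzzy variants, the sketch below shows only crisp TOPSIS as a simplified illustration of the ranking logic. The decision matrix, weights, and the assumption that all criteria are benefit-type are hypothetical, not the surgeons' actual judgements.

```python
# A minimal crisp TOPSIS sketch; the fuzzy versions extend this with fuzzy
# numbers and defuzzification.
import numpy as np

X = np.array([[7.0, 8.0, 6.0],    # rows: anesthesia alternatives (hypothetical)
              [9.0, 6.0, 7.0],    # columns: criteria, all treated as benefit-type
              [5.0, 7.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])

R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
V = R * w                                   # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # positive / negative ideal solutions

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # rank alternatives (higher = better)
print(np.round(closeness, 3))
```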
Hardware Demonstration: Radiated Emissions as a Function of Common Mode Current
NASA Technical Reports Server (NTRS)
Mc Closkey, John; Roberts, Jen
2016-01-01
This presentation describes the electromagnetic compatibility (EMC) tests performed on the Integrated Science Instrument Module (ISIM), the science payload of the James Webb Space Telescope (JWST), at NASA's Goddard Space Flight Center (GSFC) in August 2015. Because ISIM is an integrated payload, the test campaign could be treated as neither a unit-level test nor an integrated spacecraft observatory test. Non-standard test criteria are described along with non-standard test methods that had to be developed in order to evaluate them. Results are presented to demonstrate that all test criteria were met in less than the time allocated.
EMC Test Challenges for NASA's James Webb Space Telescope
NASA Technical Reports Server (NTRS)
McCloskey, John
2016-01-01
This presentation describes the electromagnetic compatibility (EMC) tests performed on the Integrated Science Instrument Module (ISIM), the science payload of the James Webb Space Telescope (JWST), at NASA's Goddard Space Flight Center (GSFC) in August 2015. Because ISIM is an integrated payload, the test campaign could be treated as neither a unit-level test nor an integrated spacecraft observatory test. Non-standard test criteria are described along with non-standard test methods that had to be developed in order to evaluate them. Results are presented to demonstrate that all test criteria were met in less than the time allocated.
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
The development of quantitative criteria used to evaluate conceptual systems for automating the functions of the FBI Identification Division is described. Specific alternative systems for automation were compared using these criteria, defined as Measures of Effectiveness (MOE), to gauge a system's performance in attempting to achieve certain goals. The MOE, essentially measurement tools developed through the combination of suitable parameters, pertain to each conceivable area of system operation. The methods and approaches used, both in selecting the parameters and in applying the resulting MOE, are described.
The Evaluation of Published Indexes, and Abstract Journals:, Criteria and Possible Procedures
Lancaster, F. W.
1971-01-01
This paper describes possible criteria by which the effectiveness of a published index may be evaluated and suggests procedures that might be used to conduct such an evaluation. The procedures were developed for the National Library of Medicine and relate specifically to the recurring bibliographies produced by MEDLARS in various specialized areas of biomedicine. The methods described should, however, be applicable to other printed indexes and abstract journals. Factors affecting the performance of a published index are also discussed and some research projects relevant to the evaluation of published indexes are reviewed. PMID:5146770
Kim, Yeseul; Kim, Gayoung; Kong, Byung Soo; Lee, Ji Eun; Oh, Yu Mi; Hyun, Jae Won; Kim, Su Hyun; Joung, AeRan; Kim, Byoung Joon; Choi, Kyungho; Kim, Ho Jin
2017-04-01
The detection of aquaporin 4-IgG (AQP4-IgG) is now a critical diagnostic criterion for neuromyelitis optica spectrum disorder (NMOSD). To evaluate the serostatus of NMOSD patients based on the 2015 new diagnostic criteria using a new in-house cell-based assay (CBA). We generated a stable cell line using internal ribosome entry site-containing bicistronic vectors, which allow the simultaneous expression of two proteins (AQP4 and green fluorescent protein) separately from the same RNA transcript. We performed in-house CBA using serum from 386 patients: 178 NMOSD patients diagnosed according to the new diagnostic criteria without AQP4-IgG, 63 high risk NMOSD patients presenting 1 of the 6 core clinical characteristics of NMOSD but not fulfilling dissemination in space, and 145 patients with other neurological diseases, including 66 with multiple sclerosis. The serostatus of 111 definite and high risk NMOSD patients were also tested using a commercial CBA kit with identical serum to evaluate the correlation between the 2 methods. All assays were performed by two independent and blinded investigators. Our in-house assay yielded a specificity of 100% and sensitivities of 80% (142 of 178) and 76% (48 of 63) when detecting definite- and high risk NMOSD patients, respectively. The comparison with the commercial CBA kit revealed a correlation for 102 of the 111 patients: no correlation was present in 7 patients who were seronegative using the commercial method but seropositive using the in-house method, and in 2 patients who were seropositive using the commercial method but seronegative using the in-house method. These results demonstrate that our in-house CBA is a highly specific and sensitive method for detecting AQP4-IgG in NMOSD patients. Copyright © 2017 Korean Neurological Association
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% of the model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
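The weight-collapse behaviour described above can be reproduced in a few lines. The sketch below computes information-criterion-based model averaging weights from hypothetical AIC values for three alternative models; it only illustrates how a modest AIC gap already concentrates nearly all weight on one model, which is the symptom the iterative two-stage method addresses.

```python
# A minimal sketch of information-criterion-based model averaging weights,
# using hypothetical AIC values (not the study's surface complexation models).
import numpy as np

aic = np.array([1000.0, 1010.0, 1025.0])
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()
print(np.round(weights, 4))   # nearly all averaging weight falls on the lowest-AIC model
```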
Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail
2018-07-01
A new robust and reliable high-performance liquid chromatography (HPLC) method with multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel marriage between a new interactive response optimizer and a HPLC method was suggested for multiple response optimizations of target responses. An interactive response optimizer was used as a decision and prediction tool for the optimal settings of target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: Acetonitrile%, buffer pH and concentration along with column temperature. Eight responses were optimized: retention times of ATN, NFD, and FLD, resolutions between ATN/NFD and NFD/FLD, and plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied in order to scan the influences of the most significant variables for the regression models. The experimental design was set to give minimum retention times, maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonization (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in short analysis time allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with tolerable design space for assay in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
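The Derringer-type combination of responses could be sketched as follows. The linear desirability functions, the target/worst limits, and the response values are hypothetical assumptions for illustration; only the geometric-mean composite mirrors the composite desirability reported above.

```python
# A minimal sketch of Derringer's desirability for mixed chromatographic goals.
import numpy as np

def d_minimize(y, target, worst):
    """Desirability for a smaller-is-better response (e.g., retention time)."""
    return float(np.clip((worst - y) / (worst - target), 0.0, 1.0))

def d_maximize(y, worst, target):
    """Desirability for a larger-is-better response (e.g., resolution, plates)."""
    return float(np.clip((y - worst) / (target - worst), 0.0, 1.0))

d = [d_minimize(4.2, target=3.0, worst=8.0),      # retention time (min), hypothetical
     d_maximize(2.6, worst=1.5, target=3.0),      # resolution, hypothetical
     d_maximize(9000, worst=4000, target=10000)]  # plate number, hypothetical

composite = float(np.prod(d)) ** (1 / len(d))     # geometric mean of desirabilities
print(round(composite, 3))
```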
Diagnosis of multiple sclerosis from EEG signals using nonlinear methods.
Torabi, Ali; Daliri, Mohammad Reza; Sabzposhan, Seyyed Hojjat
2017-12-01
EEG signals contain essential and important information about the brain and neural diseases. The main purpose of this study is to classify two groups, healthy volunteers and Multiple Sclerosis (MS) patients, using nonlinear features of EEG signals recorded while performing cognitive tasks. EEG signals were recorded while users performed two different attentional tasks: one based on detecting a desired change in color luminance and the other based on detecting a desired change in direction of motion. EEG signals were analyzed in two ways: analysis of the EEG signals without rhythm decomposition and analysis of the EEG sub-bands. After recording and preprocessing, the time delay embedding method was used for state space reconstruction; embedding parameters were determined for the original signals and their sub-bands. Nonlinear methods were then used in the feature extraction phase. To reduce the feature dimension, scalar feature selection was performed using the T-test and Bhattacharyya criteria. The data were then classified using linear support vector machines (SVM) and the k-nearest neighbor (KNN) method. The best combination of criterion and classifier was determined for each task by comparing performances. For both tasks, the best results were achieved using the T-test criterion and the SVM classifier. For the direction-based and the color-luminance-based tasks, maximum classification performances were 93.08 and 79.79%, respectively, which were reached using the optimal set of features. Our results show that the nonlinear dynamic features of EEG signals seem to be useful and effective in the diagnosis of MS.
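The feature-selection-plus-classification step could be sketched as below. Random data stand in for the nonlinear EEG features, the number of selected features is arbitrary, and cross-validation replaces the study's exact evaluation protocol; this is only an illustration of T-test ranking followed by a linear SVM.

```python
# A minimal sketch of T-test feature ranking and SVM classification,
# assuming `X` holds nonlinear EEG features (rows = trials) and `y` group labels.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 30))   # 80 trials, 30 candidate features (synthetic)
y = np.repeat([0, 1], 40)           # 0 = healthy, 1 = MS (hypothetical labels)
X[y == 1, :5] += 0.8                # make a few features informative

# Rank features by two-sample t statistic and keep the top k (scalar selection)
t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(-np.abs(t))[:5]

acc = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```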
Teaching medical students to discern ethical problems in human clinical research studies.
Roberts, Laura Weiss; Warner, Teddy D; Green Hammond, Katherine A; Brody, Janet L; Kaminsky, Alexis; Roberts, Brian B
2005-10-01
Investigators and institutional review boards are entrusted with ensuring the conduct of ethically sound human studies. Assessing ethical aspects of research protocols is a key skill in fulfilling this duty, yet no empirically validated method exists for preparing professionals to attain this skill. The authors performed a randomized controlled educational intervention, comparing a criteria-based learning method, a clinical-research- and experience-based learning method, and a control group. All 300 medical students enrolled at the University of New Mexico School of Medicine in 2001 were invited to participate. After a single half-hour educational session, a written posttest of ability to detect ethical problems in hypothetical protocol vignettes was administered. The authors analyzed responses to ten protocol vignettes that had been evaluated independently by experts. For each vignette, a global assessment of the perceived significance of ethical problems and the identification of specific ethical problems were evaluated. Eighty-three medical students (27%) volunteered: 50 (60%) were women and 55 (66%) were first- and second-year students. On global assessments, the criteria-focused group perceived ethical problems as more significant than did the other two groups (p < .02). Students in the criteria-focused group were better able than students in the control group (p < .03) to discern specific ethical problems, more closely resembling expert assessments. Unexpectedly, the group focused on clinical research participants identified fewer problems than did the control group (p < .05). The criteria-focused intervention produced enhanced ethical evaluation skills. This work supports the potential value of empirically derived methods for preparing professionals to discern ethical aspects of human studies.
Ho, Sirikit; Lukacs, Zoltan; Hoffmann, Georg F; Lindner, Martin; Wetter, Thomas
2007-07-01
In newborn screening with tandem mass spectrometry, multiple intermediary metabolites are quantified in a single analytical run for the diagnosis of fatty-acid oxidation disorders, organic acidurias, and aminoacidurias. Published diagnostic criteria for these disorders normally incorporate a primary metabolic marker combined with secondary markers, often analyte ratios, for which the markers have been chosen to reflect metabolic pathway deviations. We applied a procedure to extract new markers and diagnostic criteria for newborn screening to the data of newborns with confirmed medium-chain acyl-CoA dehydrogenase deficiency (MCADD) and a control group from the newborn screening program, Heidelberg, Germany. We validated the results with external data of the screening center in Hamburg, Germany. We extracted new markers by performing a systematic search for analyte combinations (features) with high discriminatory performance for MCADD. To select feature thresholds, we applied automated procedures to separate controls and cases on the basis of the feature values. Finally, we built classifiers from these new markers to serve as diagnostic criteria in screening for MCADD. On the basis of χ2 scores, we identified approximately 800 of >628,000 new analyte combinations with superior discriminatory performance compared with the best published combinations. Classifiers built with the new features achieved diagnostic sensitivities and specificities approaching 100%. Feature construction methods provide ways to disclose information hidden in the set of measured analytes. Other diagnostic tasks based on high-dimensional metabolic data might also profit from this approach.
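One constructed analyte combination could be scored roughly as in the sketch below: threshold a hypothetical ratio feature and compute a chi-square statistic on the resulting 2x2 table. The analytes, concentrations, and threshold are synthetic assumptions and this is not the authors' exact feature-construction search, only the flavour of scoring a candidate feature.

```python
# A minimal sketch of chi-square scoring for one constructed screening feature.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
c8 = np.r_[rng.lognormal(-2.0, 0.3, 1000), rng.lognormal(0.0, 0.4, 20)]    # controls, cases
c10 = np.r_[rng.lognormal(-2.2, 0.3, 1000), rng.lognormal(-1.8, 0.4, 20)]
is_case = np.r_[np.zeros(1000, bool), np.ones(20, bool)]

ratio = c8 / c10          # constructed feature (analyte combination), hypothetical
flagged = ratio > 2.0     # candidate decision threshold, hypothetical

table = np.array([[np.sum(flagged & is_case), np.sum(flagged & ~is_case)],
                  [np.sum(~flagged & is_case), np.sum(~flagged & ~is_case)]])
chi2_stat, p, _, _ = chi2_contingency(table)
print(f"chi-square score: {chi2_stat:.1f}, p = {p:.2g}")
```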
Howes, Oliver D; McCutcheon, Rob; Agid, Ofer; de Bartolomeis, Andrea; van Beveren, Nico J M; Birnbaum, Michael L; Bloomfield, Michael A P; Bressan, Rodrigo A; Buchanan, Robert W; Carpenter, William T; Castle, David J; Citrome, Leslie; Daskalakis, Zafiris J; Davidson, Michael; Drake, Richard J; Dursun, Serdar; Ebdrup, Bjørn H; Elkis, Helio; Falkai, Peter; Fleischacker, W Wolfgang; Gadelha, Ary; Gaughran, Fiona; Glenthøj, Birte Y; Graff-Guerrero, Ariel; Hallak, Jaime E C; Honer, William G; Kennedy, James; Kinon, Bruce J; Lawrie, Stephen M; Lee, Jimmy; Leweke, F Markus; MacCabe, James H; McNabb, Carolyn B; Meltzer, Herbert; Möller, Hans-Jürgen; Nakajima, Shinchiro; Pantelis, Christos; Reis Marques, Tiago; Remington, Gary; Rossell, Susan L; Russell, Bruce R; Siu, Cynthia O; Suzuki, Takefumi; Sommer, Iris E; Taylor, David; Thomas, Neil; Üçok, Alp; Umbricht, Daniel; Walters, James T R; Kane, John; Correll, Christoph U
2017-03-01
Research and clinical translation in schizophrenia is limited by inconsistent definitions of treatment resistance and response. To address this issue, the authors evaluated current approaches and then developed consensus criteria and guidelines. A systematic review of randomized antipsychotic clinical trials in treatment-resistant schizophrenia was performed, and definitions of treatment resistance were extracted. Subsequently, consensus operationalized criteria were developed through 1) a multiphase, mixed methods approach, 2) identification of key criteria via an online survey, and 3) meetings to achieve consensus. Of 2,808 studies identified, 42 met inclusion criteria. Of these, 21 studies (50%) did not provide operationalized criteria. In the remaining studies, criteria varied considerably, particularly regarding symptom severity, prior treatment duration, and antipsychotic dosage thresholds; only two studies (5%) utilized the same criteria. The consensus group identified minimum and optimal criteria, employing the following principles: 1) current symptoms of a minimum duration and severity determined by a standardized rating scale; 2) moderate or worse functional impairment; 3) prior treatment consisting of at least two different antipsychotic trials, each for a minimum duration and dosage; 4) systematic monitoring of adherence and meeting of minimum adherence criteria; 5) ideally at least one prospective treatment trial; and 6) criteria that clearly separate responsive from treatment-resistant patients. There is considerable variation in current approaches to defining treatment resistance in schizophrenia. The authors present consensus guidelines that operationalize criteria for determining and reporting treatment resistance, adequate treatment, and treatment response, providing a benchmark for research and clinical translation.
Natural Hazard Susceptibility Assessment for Road Planning Using Spatial Multi-Criteria Analysis
NASA Astrophysics Data System (ADS)
Karlsson, Caroline S. J.; Kalantari, Zahra; Mörtberg, Ulla; Olofsson, Bo; Lyon, Steve W.
2017-11-01
Inadequate infrastructural networks can be detrimental to society if transport between locations becomes hindered or delayed, especially due to natural hazards, which are difficult to control. Thus, determining natural hazard susceptible areas and incorporating them in the initial planning process may reduce infrastructural damage in the long run. The objective of this study was to evaluate the usefulness of expert judgments for assessing natural hazard susceptibility through a spatial multi-criteria analysis approach using hydrological, geological, and land use factors. To utilize spatial multi-criteria analysis for decision support, an analytic hierarchy process was adopted in which expert judgments were evaluated individually and in an aggregated manner. The estimates of susceptible areas were then compared with those of the weighted linear combination method using equal weights and the factor interaction method. Results showed that inundation received the highest susceptibility. Using expert judgment was shown to perform almost the same as equal weighting; the difference in susceptibility between the two for inundation was around 4%. The results also showed that downscaling could negatively affect the susceptibility assessment and be highly misleading. Susceptibility assessment through spatial multi-criteria analysis is useful for decision support in early road planning despite its limitations regarding the selection and use of decision rules and criteria. A natural hazard spatial multi-criteria analysis could be used to indicate areas where more investigations need to be undertaken from a natural hazard point of view, and to identify areas thought to have higher susceptibility along existing roads where mitigation measures could be targeted after in-situ investigations.
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs between reference values compared to the other 2 BGMS. Insulin dosing errors were lowest for the Contour Next USB than compared to the other systems. Conclusions: All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and CEG criterion. However, taking together all analyses, differences in performance of potential clinical relevance may be observed. Results showed that Contour Next USB had lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
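The MARD statistic reported above is straightforward to compute: it is the mean of the absolute differences between meter and reference readings, each expressed relative to the reference value. The readings in the sketch are hypothetical, not the study's measurements.

```python
# A minimal sketch of the mean absolute relative difference (MARD) between
# BGMS readings and a laboratory reference method.
import numpy as np

reference = np.array([90.0, 130.0, 250.0, 60.0, 180.0])   # mg/dL, reference (hexokinase)
meter = np.array([95.0, 124.0, 262.0, 57.0, 171.0])       # mg/dL, BGMS readings

mard = np.mean(np.abs(meter - reference) / reference) * 100
print(f"MARD = {mard:.1f}%")
```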
New Splitting Criteria for Decision Trees in Stationary Data Streams.
Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek; Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek; Rutkowski, Leszek; Duda, Piotr; Jaworski, Maciej
2018-06-01
The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms, which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. A general division of splitting criteria into two types is proposed. Attributes chosen based on type-I splitting criteria guarantee, with high probability, the highest expected value of the split measure. Type-II criteria ensure that the chosen attribute is the same, with high probability, as the one that would be chosen based on the whole infinite data stream. Moreover, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and the Gini index.
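The two impurity measures underlying these criteria can be illustrated with a toy split. The sketch below computes the impurity reduction of a candidate split under the misclassification error and the Gini index; the class counts are hypothetical and the statistical bounds that make the criteria stream-ready are not shown.

```python
# A minimal sketch of misclassification-error and Gini-index split evaluation.
import numpy as np

def misclassification(p):
    return 1.0 - p.max()

def gini(p):
    return 1.0 - np.sum(p ** 2)

def split_gain(parent, children, impurity):
    """Impurity reduction of a split; `parent`/`children` are class-count vectors."""
    n = parent.sum()
    child_term = sum(c.sum() / n * impurity(c / c.sum()) for c in children)
    return impurity(parent / n) - child_term

parent = np.array([40.0, 60.0])                            # toy class counts
children = [np.array([35.0, 10.0]), np.array([5.0, 50.0])]

print("gain (misclassification):", round(split_gain(parent, children, misclassification), 3))
print("gain (Gini):", round(split_gain(parent, children, gini), 3))
```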
Licskai, Christopher J; Sands, Todd W; Paolatto, Lisa; Nicoletti, Ivan; Ferrone, Madonna
2012-01-01
BACKGROUND: Primary care office spirometry can improve access to testing and concordance between clinical practice and asthma guidelines. Compliance with test quality standards is essential to implementation. OBJECTIVE: To evaluate the quality of spirometry performed onsite in a regional primary care asthma program (RAP) by health care professionals with limited training. METHODS: Asthma educators were trained to perform spirometry during two 2 h workshops and supervised during up to six patient encounters. Quality was analyzed using American Thoracic Society (ATS) 1994 and ATS/European Respiratory Society (ERS) 2003 (ATS/ERS) standards. These results were compared with two regional reference sites: a primary care group practice (Family Medical Centre [FMC], Windsor, Ontario) and a teaching hospital pulmonary function laboratory (London Health Sciences Centre [LHSC], London, Ontario). RESULTS: A total of 12,815 flow-volume loops (FVL) were evaluated: RAP – 1606 FVL in 472 patient sessions; reference sites – FMC 4013 FVL in 573 sessions; and LHSC – 7196 in 1151 sessions. RAP: There were three acceptable FVL in 392 of 472 (83%) sessions, two reproducible FVL according to ATS criteria in 428 of 469 (91%) sessions, and 395 of 469 (84%) according to ATS/ERS criteria. All quality criteria – minimum of three acceptable and two reproducible FVL according to ATS criteria in 361 of 472 (77%) sessions and according to ATS/ERS criteria in 337 of 472 (71%) sessions. RAP met ATS criteria more often than the FMC (388 of 573 [68%]); however, less often than LHSC (1050 of 1151 [91%]; P<0.001). CONCLUSIONS: Health care providers with limited training and experience operating within a simple quality program achieved ATS/ERS quality spirometry in the majority of sessions in a primary care setting. The quality performance approached pulmonary function laboratory standards. PMID:22891184
78 FR 64030 - Monitoring Criteria and Methods To Calculate Occupational Radiation Doses
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
... NUCLEAR REGULATORY COMMISSION [NRC-2013-0234] Monitoring Criteria and Methods To Calculate... regulatory guide (DG), DG-8031, ``Monitoring Criteria and Methods to Calculate Occupational Radiation Doses.'' This guide describes methods that the NRC staff considers acceptable for licensees to use to determine...
NASA Technical Reports Server (NTRS)
Mikes, F.
1984-01-01
Silane primers for use as thermal protection on external tanks were subjected to various analytic techniques to determine the most effective testing method for silane lot evaluation. The analytic methods included high performance liquid chromatography, gas chromatography, thermogravimetry (TGA), and Fourier transform infrared spectroscopy (FTIR). It is suggested that FTIR be used as the method for silane lot evaluation. Chromatograms, TGA profiles, bar graphs showing IR absorbances, and FTIR spectra are presented.
2012-03-01
0-486-41183-4. 15. Brown, Robert G. and Patrick Y. C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. Wiley, New York, 1996. ISBN...stability and performance criteria. In the 1960s, Kalman introduced the Linear Quadratic Regulator (LQR) method using an integral performance index...feedback of the state variables and was able to apply this method to time-varying and Multi-Input Multi-Output (MIMO) systems. Kalman further showed
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
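The residual-forming step for the Holt-Winters approach could be sketched with the exponential smoothing implementation in statsmodels. The synthetic daily series with a weekly cycle is an assumption standing in for the 16 authentic syndromic streams, and the MedAPE here is in-sample only.

```python
# A minimal sketch of Holt-Winters smoothing to form residuals for biosurveillance.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
days = pd.date_range("2006-01-01", periods=365, freq="D")
weekly = 10 * np.sin(2 * np.pi * np.asarray(days.dayofweek) / 7)   # day-of-week effect
counts = pd.Series(50 + weekly + rng.poisson(5, 365), index=days)  # synthetic daily counts

fit = ExponentialSmoothing(counts, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
forecasts = fit.fittedvalues          # in-sample one-step-ahead predictions
residuals = counts - forecasts        # residuals fed to the detection algorithm

medape = (residuals.abs() / counts).median() * 100
print(f"MedAPE = {medape:.1f}%")
```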
A simple and efficient method for predicting protein-protein interaction sites.
Higa, R H; Tozzi, C L
2008-09-23
Computational methods for predicting protein-protein interaction sites based on structural data are characterized by an accuracy between 70 and 80%. Some experimental studies indicate that only a fraction of the residues, forming clusters in the center of the interaction site, are energetically important for binding. In addition, the analysis of amino acid composition has shown that residues located in the center of the interaction site can be better discriminated from the residues in other parts of the protein surface. In the present study, we implement a simple method to predict interaction site residues exploiting this fact and show that it achieves a very competitive performance compared to other methods using the same dataset and criteria for performance evaluation (success rate of 82.1%).
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter activated and deactivated, altogether 12 800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
A relative performance analysis of atmospheric Laser Doppler Velocimeter methods.
NASA Technical Reports Server (NTRS)
Farmer, W. M.; Hornkohl, J. O.; Brayton, D. B.
1971-01-01
Evaluation of the effectiveness of atmospheric applications of a Laser Doppler Velocimeter (LDV) at a wavelength of about 0.5 micrometer in conjunction with dual scatter LDV illuminating techniques, or at a wavelength of 10.6 micrometer with local oscillator LDV illuminating techniques. Equations and examples are given to provide a quantitative basis for LDV system selection and performance criteria in atmospheric research. The comparative study shows that specific ranges and conditions exist where performance of one of the methods is superior to that of the other. It is also pointed out that great care must be exercised in choosing system parameters that optimize a particular LDV designed for atmospheric applications.
van der Klink, Marcel R.; van Merriënboer, Jeroen J. G.
2010-01-01
This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based assessment criteria, describing what students should do, for the task at hand. The performance-based group is compared to a competence-based assessment group in which students receive a preset list of competence-based assessment criteria, describing what students should be able to do. The test phase revealed that the performance-based group outperformed the competence-based group on test task performance. In addition, higher performance of the performance-based group was reached with lower reported mental effort during training, indicating a higher instructional efficiency for novice students. PMID:20054648
Assessment study of lichenometric methods for dating surfaces
NASA Astrophysics Data System (ADS)
Jomelli, Vincent; Grancher, Delphine; Naveau, Philippe; Cooley, Daniel; Brunstein, Daniel
2007-04-01
In this paper, we discuss the advantages and drawbacks of the most classical approaches used in lichenometry. In particular, we perform a detailed comparison among methods based on the statistical analysis of either the largest lichen diameters recorded on geomorphic features or the frequency of all lichens. To assess the performance of each method, a careful comparison design with well-defined criteria is proposed and applied to two distinct data sets. First, we study 350 tombstones. This represents an ideal test bed because tombstone dates are known and, therefore, the quality of the estimated lichen growth curve can be easily tested for the different techniques. Secondly, 37 moraines from two tropical glaciers are investigated. This analysis corresponds to our real case study. For both data sets, we apply our list of criteria that reflects precision, error measurements and their theoretical foundations when proposing estimated ages and their associated confidence intervals. From this comparison, it clearly appears that two methods, the mean of the n largest lichen diameters and the recent Bayesian method based on extreme value theory, offer the most reliable estimates of moraine and tombstones dates. Concerning the spread of the error, the latter approach provides the smallest uncertainty and it is the only one that takes advantage of the statistical nature of the observations by fitting an extreme value distribution to the largest diameters.
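The extreme-value step behind the Bayesian approach favoured above could be sketched as fitting a generalized extreme value (GEV) distribution to the largest lichen diameters measured on a dated surface. The synthetic diameters below are assumptions, and relating the fitted parameters to surface age requires the growth-curve calibration that the paper estimates from tombstones.

```python
# A minimal sketch of fitting a GEV distribution to largest lichen diameters.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# hypothetical largest diameters (mm) recorded on 40 blocks of one surface
largest = rng.gumbel(loc=35.0, scale=4.0, size=40)

shape, loc, scale = genextreme.fit(largest)
print(f"GEV fit: shape={shape:.2f}, loc={loc:.1f} mm, scale={scale:.1f} mm")
# The fitted location/scale are the quantities a calibrated growth curve would
# translate into an estimated surface age (calibration not shown here).
```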
Assessing the performance of regional landslide early warning models: the EDuMaP method
NASA Astrophysics Data System (ADS)
Calvello, M.; Piciullo, L.
2016-01-01
A schematic of the components of regional early warning systems for rainfall-induced landslides is herein proposed, based on a clear distinction between warning models and warning systems. According to this framework an early warning system comprises a warning model as well as a monitoring and warning strategy, a communication strategy and an emergency plan. The paper proposes the evaluation of regional landslide warning models by means of an original approach, called the "event, duration matrix, performance" (EDuMaP) method, comprising three successive steps: identification and analysis of the events, i.e., landslide events and warning events derived from available landslides and warnings databases; definition and computation of a duration matrix, whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model performance by means of performance criteria and indicators applied to the duration matrix. During the first step the analyst identifies and classifies the landslide and warning events, according to their spatial and temporal characteristics, by means of a number of model parameters. In the second step, the analyst computes a time-based duration matrix with a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warning data from the municipal early warning system operating in Rio de Janeiro (Brazil).
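For illustration only, the following Python sketch mimics the duration-matrix idea at the core of the EDuMaP description above, using made-up warning and landslide classes, event durations and a toy coverage indicator; the published method defines its model parameters and performance indicators in far more detail.

```python
import numpy as np

# Minimal sketch of a duration matrix in the spirit of the EDuMaP method.
# The class labels, events and indicator below are hypothetical.

warning_classes = ["no warning", "moderate", "high"]          # rows
landslide_classes = ["no landslide", "minor", "major"]        # columns

# Each record: (warning class index, landslide class index, duration in hours)
events = [
    (0, 0, 120.0),  # quiet period, nothing happened
    (1, 1,   6.0),  # moderate warning overlapping a minor landslide event
    (2, 2,   3.0),  # high warning overlapping a major landslide event
    (2, 0,  10.0),  # high warning with no landslide (potential false-alarm time)
    (0, 2,   2.0),  # major landslide with no warning (missed-alarm time)
]

d = np.zeros((len(warning_classes), len(landslide_classes)))
for w, l, hours in events:
    d[w, l] += hours

# Toy indicator: fraction of landslide time covered by the highest warning class.
landslide_time = d[:, 1:].sum()
covered = d[2, 1:].sum()
print(d)
print(f"share of landslide time under high warning: {covered / landslide_time:.2f}")
```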
Mindfulness, burnout, and effects on performance evaluations in internal medicine residents
Braun, Sarah E; Auerbach, Stephen M; Rybarczyk, Bruce; Lee, Bennett; Call, Stephanie
2017-01-01
Purpose Burnout has been documented at high levels in medical residents with negative effects on performance. Some dispositional qualities, like mindfulness, may protect against burnout. The purpose of the present study was to assess burnout prevalence among internal medicine residents at a single institution, examine the relationship between mindfulness and burnout, and provide preliminary findings on the relation between burnout and performance evaluations in internal medicine residents. Methods Residents (n = 38) completed validated measures of burnout at three time points separated by 2 months and a validated measure of dispositional mindfulness at baseline. Program director end-of-year performance evaluations were also obtained on 22 milestones used to evaluate internal medicine resident performance; notably, these milestones have not yet been validated for research purposes; therefore, the investigation here is exploratory. Results Overall, 71.1% (n = 27) of the residents met criteria for burnout during the study. Lower scores on the “acting with awareness” facet of dispositional mindfulness significantly predicted meeting burnout criteria χ2(5) = 11.88, p = 0.04. Lastly, meeting burnout criteria significantly predicted performance on three of the performance milestones, with positive effects on milestones from the “system-based practices” and “professionalism” domains and negative effects on a milestone from the “patient care” domain. Conclusion Burnout rates were high in this sample of internal medicine residents and rates were consistent with other reports of burnout during medical residency. Dispositional mindfulness was supported as a protective factor against burnout. Importantly, results from the exploratory investigation of the relationship between burnout and resident evaluations suggested that burnout may improve performance on some domains of resident evaluations while compromising performance on other domains. Implications and directions for future research are discussed. PMID:28860889
Code of Federal Regulations, 2013 CFR
2013-07-01
... scientific rationale and must contain sufficient parameters or constituents to protect the designated use... State must provide information identifying the method by which the State intends to regulate point... scientifically defensible methods; (2) Establish narrative criteria or criteria based upon biomonitoring methods...
Code of Federal Regulations, 2014 CFR
2014-07-01
... scientific rationale and must contain sufficient parameters or constituents to protect the designated use... State must provide information identifying the method by which the State intends to regulate point... scientifically defensible methods; (2) Establish narrative criteria or criteria based upon biomonitoring methods...
Code of Federal Regulations, 2011 CFR
2011-07-01
... scientific rationale and must contain sufficient parameters or constituents to protect the designated use... State must provide information identifying the method by which the State intends to regulate point... scientifically defensible methods; (2) Establish narrative criteria or criteria based upon biomonitoring methods...
Code of Federal Regulations, 2012 CFR
2012-07-01
... scientific rationale and must contain sufficient parameters or constituents to protect the designated use... State must provide information identifying the method by which the State intends to regulate point... scientifically defensible methods; (2) Establish narrative criteria or criteria based upon biomonitoring methods...
Zhang, Jiayi; Yao, Zheng; Lu, Mingquan
2016-01-01
In order to provide better navigation service for a wide range of applications, modernized global navigation satellite systems (GNSS) employ increasingly advanced and complicated techniques in the modulation and multiplexing of signals. This trend correspondingly increases the complexity of signal despreading at the receiver when matched receiving is used. Considering the numerous low-end receivers that can hardly afford such receiving complexity, it is feasible to apply receiving strategies that use simplified forms of local despreading signals, termed unmatched despreading. However, the mismatch between the local signal and the received signal causes performance loss in code tracking, which must be considered in theoretical methods for evaluating signals. In this context, we generalize the theoretical signal evaluation model for unmatched receiving. Then, a series of evaluation criteria are proposed, which are decoupled from unrelated influencing factors and concentrate on the key factors related to the signal and its receiving, thus better revealing the inherent performance of signals. The proposed evaluation criteria are used to study two GNSS signals, from which constructive guidance is derived for receiver and signal designers. PMID:27447648
Schold, Jesse D; Miller, Charles M; Henry, Mitchell L; Buccini, Laura D; Flechner, Stuart M; Goldfarb, David A; Poggio, Emilio D; Andreoni, Kenneth A
2017-06-01
Scientific Registry of Transplant Recipients report cards of US organ transplant center performance are publicly available and used for quality oversight. Low center performance (LP) evaluations are associated with changes in practice including reduced transplant rates and increased waitlist removals. In 2014, the Scientific Registry of Transplant Recipients implemented a new Bayesian methodology to evaluate performance, which was not adopted by the Centers for Medicare and Medicaid Services (CMS). In May 2016, CMS altered their performance criteria, reducing the likelihood of LP evaluations. Our aims were to evaluate incidence, survival rates, and volume of LP centers with Bayesian, historical (old-CMS) and new-CMS criteria using 6 consecutive program-specific reports (PSR), January 2013 to July 2015, among adult kidney transplant centers. Bayesian, old-CMS and new-CMS criteria identified 13.4%, 8.3%, and 6.1% LP PSRs, respectively. Over the 3-year period, 31.9% (Bayesian), 23.4% (old-CMS), and 19.8% (new-CMS) of centers had 1 or more LP evaluation. For small centers (<83 transplants/PSR), there were 4-fold additional LP evaluations (52 vs 13 PSRs) for 1-year mortality with Bayesian versus new-CMS criteria. For large centers (>183 transplants/PSR), there were 3-fold additional LP evaluations for 1-year mortality with Bayesian versus new-CMS criteria with median differences in observed and expected patient survival of -1.6% and -2.2%, respectively. A significant proportion of kidney transplant centers are identified as low performing with relatively small survival differences compared with expected. Bayesian criteria have significantly higher flagging rates and new-CMS criteria modestly reduce flagging. Critical appraisal of performance criteria is needed to assess whether quality oversight is meeting intended goals and whether further modifications could reduce risk aversion, more efficiently allocate resources, and increase transplant opportunities.
Comparison of different criteria for diagnosis of gestational diabetes mellitus
Sagili, Haritha; Kamalanathan, Sadishkumar; Sahoo, Jayaprakash; Lakshminarayanan, Subitha; Rani, Reddi; Jayalakshmi, D.; Kumar, K. T. Hari Chandra
2015-01-01
Introduction: The International Association of Diabetes in Pregnancy Study Group (IADPSG) criteria for gestational diabetes mellitus (GDM) have been adopted by most associations across the world, including the American Diabetes Association and World Health Organization (WHO). We conducted a study comparing the IADPSG and previous WHO criteria and their effects on neonatal birth weight. Methods: The study was carried out in the Obstetrics and Gynaecology Department of a tertiary care institute in South India in collaboration with the Endocrinology Department. One thousand two hundred and thirty-one antenatal cases with at least one risk factor for GDM and gestational age of more than 24 weeks were included in the study. Both criteria were compared on the basis of 75 g oral glucose tolerance test results. Results: The prevalence of GDM using the IADPSG and previous WHO criteria was 12.6% and 12.4%, respectively. The prevalence of GDM was 9.9% when both criteria had to be satisfied. The two GDM criteria groups did not differ in neonatal birth weight or macrosomia rate. However, there was a significant increase in lower segment cesarean section in the IADPSG criteria group. Elevated fasting plasma glucose alone picked up only one GDM case in the previous WHO criteria group. Conclusions: A single 2 h plasma glucose is both easy to perform and economical. A revised WHO criterion using a 2 h threshold of ≥140 mg % can be adopted as a one-step screening and diagnostic procedure for GDM in our country. PMID:26693435
Geneletti, Davide
2010-02-01
This paper presents a method based on the combination of stakeholder analysis and spatial multicriteria evaluation (SMCE) to first design possible sites for an inert landfill, and then rank them according to their suitability. The method was tested for the siting of an inert landfill in the Sarca's Plain, located in south-western Trentino, an alpine region in northern Italy. Firstly, stakeholder analysis was conducted to identify a set of criteria to be satisfied by new inert landfill sites. SMCE techniques were then applied to combine the criteria, and obtain a suitability map of the study region. Subsequently, the most suitable sites were extracted by also taking into account thresholds based on size and shape. These sites were then compared and ranked according to their visibility, accessibility and dust pollution. All these criteria were assessed through GIS modelling. Sensitivity analyses were performed on the results to assess the stability of the ranking with respect to variations in the input (criterion scores and weights). The study concluded that the three top-ranking sites are located close to each other, in the northernmost sector of the study area. A more general finding was that the use of different criteria in the different stages of the analysis made it possible to better differentiate the suitability of the potential landfill sites.
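As a loose illustration of the SMCE step described above, the following Python sketch performs a weighted linear overlay of standardized criterion rasters and thresholds the resulting suitability map; the criteria, weights and threshold are hypothetical and not those of the Trentino case study.

```python
import numpy as np

# Minimal weighted-overlay sketch in the spirit of spatial multi-criteria
# evaluation (SMCE). The criterion rasters, weights and threshold are
# hypothetical; real applications derive them from stakeholder analysis.

rng = np.random.default_rng(0)
shape = (100, 100)                       # toy raster grid
criteria = {                             # standardized scores in [0, 1]
    "distance_to_roads": rng.random(shape),
    "slope":             rng.random(shape),
    "visibility":        rng.random(shape),
}
weights = {"distance_to_roads": 0.5, "slope": 0.3, "visibility": 0.2}

suitability = sum(weights[name] * raster for name, raster in criteria.items())

# Extract candidate sites as cells above a suitability threshold.
candidates = suitability >= 0.8
print(f"candidate cells: {candidates.sum()} of {suitability.size}")
```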
Analysis of EPA's endocrine screening battery and recommendations for further review.
Schapaugh, Adam W; McFadden, Lisa G; Zorrilla, Leah M; Geter, David R; Stuchal, Leah D; Sunger, Neha; Borgert, Christopher J
2015-08-01
EPA's Endocrine Disruptor Screening Program Tier 1 battery consists of eleven assays intended to identify the potential of a chemical to interact with the estrogen, androgen, thyroid, or steroidogenesis systems. We have collected control data from a subset of test order recipients from the first round of screening. The analysis undertaken herein demonstrates that the EPA should review all testing methods prior to issuing further test orders. Given the frequency with which certain performance criteria were violated, a primary focus of that review should consider adjustments to these standards to better reflect biological variability. A second focus should be to provide detailed, assay-specific direction on when results should be discarded; no clear guidance exists on the degree to which assays need to be re-run for failing to meet performance criteria. A third focus should be to identify permissible differences in study design and execution that have a large influence on endpoint variance. Experimental guidelines could then be re-defined such that endpoint variances are reduced and performance criteria are violated less frequently. It must be emphasized that because we were restricted to a subset (approximately half) of the control data, our analyses serve only as examples to underscore the importance of a detailed, rigorous, and comprehensive evaluation of the performance of the battery. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Wibowo, Santoso; Deng, Hepu
2015-06-01
This paper presents a multi-criteria group decision making approach for effectively evaluating the performance of e-waste recycling programs under uncertainty in an organization. Intuitionistic fuzzy numbers are used for adequately representing the subjective and imprecise assessments of the decision makers in evaluating the relative importance of evaluation criteria and the performance of individual e-waste recycling programs with respect to individual criteria in a given situation. An interactive fuzzy multi-criteria decision making algorithm is developed for facilitating consensus building in a group decision making environment to ensure that all the interests of individual decision makers have been appropriately considered in evaluating alternative e-waste recycling programs with respect to their corporate sustainability performance. The developed algorithm is then incorporated into a multi-criteria decision support system for making the overall performance evaluation process effective and simple to use. Such a multi-criteria decision making system adequately provides organizations with a proactive mechanism for incorporating the concept of corporate sustainability into their regular planning decisions and business practices. An example is presented for demonstrating the applicability of the proposed approach in evaluating the performance of e-waste recycling programs in organizations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Blind predictions of protein interfaces by docking calculations in CAPRI.
Lensink, Marc F; Wodak, Shoshana J
2010-11-15
Reliable prediction of the amino acid residues involved in protein-protein interfaces can provide valuable insight into protein function, and inform mutagenesis studies, and drug design applications. A fast-growing number of methods are being proposed for predicting protein interfaces, using structural information, energetic criteria, or sequence conservation or by integrating multiple criteria and approaches. Overall however, their performance remains limited, especially when applied to nonobligate protein complexes, where the individual components are also stable on their own. Here, we evaluate interface predictions derived from protein-protein docking calculations. To this end we measure the overlap between the interfaces in models of protein complexes submitted by 76 participants in CAPRI (Critical Assessment of Predicted Interactions) and those of 46 observed interfaces in 20 CAPRI targets corresponding to nonobligate complexes. Our evaluation considers multiple models for each target interface, submitted by different participants, using a variety of docking methods. Although this results in a substantial variability in the prediction performance across participants and targets, clear trends emerge. Docking methods that perform best in our evaluation predict interfaces with average recall and precision levels of about 60%, for a small majority (60%) of the analyzed interfaces. These levels are significantly higher than those obtained for nonobligate complexes by most extant interface prediction methods. We find furthermore that a sizable fraction (24%) of the interfaces in models ranked as incorrect in the CAPRI assessment are actually correctly predicted (recall and precision ≥50%), and that these models contribute to 70% of the correct docking-based interface predictions overall. Our analysis proves that docking methods are much more successful in identifying interfaces than in predicting complexes, and suggests that these methods have an excellent potential of addressing the interface prediction challenge. © 2010 Wiley-Liss, Inc.
Malakooti, Behnam; Yang, Ziyong
2004-02-01
In many real-world problems, the range of consequences of different alternatives are considerably different. In addition, sometimes, selection of a group of alternatives (instead of only one best alternative) is necessary. Traditional decision making approaches treat the set of alternatives with the same method of analysis and selection. In this paper, we propose clustering alternatives into different groups so that different methods of analysis, selection, and implementation for each group can be applied. As an example, consider the selection of a group of functions (or tasks) to be processed by a group of processors. The set of tasks can be grouped according to their similar criteria, and hence, each cluster of tasks to be processed by a processor. The selection of the best alternative for each clustered group can be performed using existing methods; however, the process of selecting groups is different than the process of selecting alternatives within a group. We develop theories and procedures for clustering discrete multiple criteria alternatives. We also demonstrate how the set of alternatives is clustered into mutually exclusive groups based on 1) similar features among alternatives; 2) ideal (or most representative) alternatives given by the decision maker; and 3) other preferential information of the decision maker. The clustering of multiple criteria alternatives also has the following advantages. 1) It decreases the set of alternatives to be considered by the decision maker (for example, different decision makers are assigned to different groups of alternatives). 2) It decreases the number of criteria. 3) It may provide a different approach for analyzing multiple decision makers problems. Each decision maker may cluster alternatives differently, and hence, clustering of alternatives may provide a basis for negotiation. The developed approach is applicable for solving a class of telecommunication networks problems where a set of objects (such as routers, processors, or intelligent autonomous vehicles) are to be clustered into similar groups. Objects are clustered based on several criteria and the decision maker's preferences.
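A minimal sketch of the clustering idea, assuming alternatives are grouped purely by the similarity of their criteria scores with k-means; the procedures developed in the paper also use ideal alternatives and the decision maker's preferential information, which are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: group discrete alternatives by similarity of their criteria scores,
# so each cluster can then be analyzed with its own selection method.
# The score matrix and number of clusters are hypothetical.

rng = np.random.default_rng(1)
scores = rng.random((12, 4))             # 12 alternatives x 4 criteria, scaled to [0, 1]

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(scores)
for cluster in range(3):
    members = np.where(km.labels_ == cluster)[0]
    print(f"cluster {cluster}: alternatives {members.tolist()}")
```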
[Development of a consented set of criteria to evaluate post-rehabilitation support services].
Parzanka, Susanne; Himstedt, Christian; Deck, Ruth
2015-01-01
Existing rehabilitation aftercare offers in Germany are heterogeneous, and there is a lack of transparency in terms of indications and methods as well as of (nationwide) availability and financial coverage. Also, there is no systematic and transparent synopsis. To close this gap a systematic review was conducted and a web-based database created for post-rehabilitation support. To allow a consistent assessment of the included aftercare offers, a quality profile of universally valid criteria was developed. This paper aims to outline the scientific approach. The procedure adapts the RAND/UCLA method, with the participation of the advisory board of the ReNa project. Preparations for the set included systematic searches in order to find possible criteria to assess the quality of aftercare offers. These criteria first were collected without any pre-selection involved. Every item of the adjusted collection was evaluated by every single member of the advisory board considering the topics "relevance", "feasibility" and "suitability for public coverage". Interpersonal analysis was conducted by relating the median and classification into consensus and dissent. All items that were considered to be "relevant" and "feasible" in the three stages of consensus building and deemed "suitable for public coverage" were transferred into the final set of criteria (ReNa set). A total of 82 publications were selected out of the 656 findings taken into account, which delivered 3,603 criteria of possible initial relevance. After a further removal of 2,598 redundant criteria, the panel needed to assess a set of 1,005 items. Finally we performed a quality assessment of aftercare offers using a set of 35 descriptive criteria merged into 8 conceptual clusters. The consented ReNa set of 35 items delivers a first generally valid tool to describe quality of structures, standards and processes of aftercare offers. So finally, the project developed into a complete collection of profiles characterizing each post-rehabilitation support service included in the database. Copyright © 2015. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Liu, Peide
2017-07-01
Simplified neutrosophic set (SNS) is an appropriate tool used to express the incompleteness, indeterminacy and uncertainty of the evaluation objects in decision-making process. In this study, we define the concept of possibility SNS including two types of information such as the neutrosophic performance provided from the evaluation objects and its possibility degree using a value ranging from zero to one. Then by extending the existing neutrosophic information, aggregation models for SNSs that cannot be used effectively to fusion the two different information described above, we propose two novel neutrosophic aggregation operators considering possibility, which are named as a possibility-induced simplified neutrosophic weighted arithmetic averaging operator and possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a useful method based on the proposed aggregation operators for solving a multi-criteria group decision-making problem with the possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated based on entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.
PERFORMANCE CRITERIA, A SYSTEM OF COMMUNICATION FOR MOBILIZING BUILDING INDUSTRY RESOURCES.
ERIC Educational Resources Information Center
JACQUES, RICHARD G.
A PROGRAM TO TEST AND DEMONSTRATE THE EFFICACY OF PERFORMANCE CRITERIA FOR UNIVERSITY BUILDING DESIGN AND CONSTRUCTION IS UNDER WAY IN NEW YORK STATE UNDER THE AUSPICES OF THE NEW YORK STATE UNIVERSITY CONSTRUCTION FUND. THE PROGRAM IS TO RESULT IN AN EXTENSIVE LIBRARY OF PERFORMANCE CRITERIA TO AID COMMUNICATION WITH ALL SECTORS OF THE BUILDING…
Pagotto, Valéria; Silveira, Erika Aparecida
2014-01-01
The purpose of this cross-sectional study, comprising 132 community-dwelling elderly (≥ 60 years), was to identify sarcopenia prevalence in the Brazilian elderly utilizing different diagnostic criteria and to analyze agreement between criteria. Sarcopenia was assessed by nine muscle mass diagnostic criteria, by two muscle strength criteria and also by the combination of criteria. Prevalence was analyzed for each method, along with differences by gender and age group, through calculation of the prevalence ratio (PR) and 95% confidence interval (CI). The Kappa coefficient was used to analyze the level of agreement between all criteria. Sarcopenia prevalence varied between 60.6% and 8.3% with the application of muscle mass criteria, and between 54.2% and 48.8% with the application of strength criteria. The combination of muscle mass+strength resulted in a decrease of prevalence for all criteria, varying between 36.6% and 6.1%. There was an increase in prevalence according to age groups for all methods. Prevalence was higher for men according to three muscle mass criteria, and higher in women for strength criteria and by two combined mass+strength criteria. The best level of agreement was obtained for two methods that utilized dual energy X-ray absorptiometry (DXA). The prevalence of sarcopenia differs by gender, age, and definition criteria. The low agreement levels obtained between methods and the different prevalence values encountered indicate the necessity of an operational definition for the estimation of sarcopenia in different populations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
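The agreement analysis mentioned above rests on the Kappa coefficient; a minimal sketch, assuming two hypothetical binary sarcopenia classifications of the same subjects, is:

```python
from sklearn.metrics import cohen_kappa_score

# Sketch: agreement between two sarcopenia classifications (1 = sarcopenic,
# 0 = not) of the same subjects under two hypothetical diagnostic criteria.
criterion_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
criterion_b = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(criterion_a, criterion_b)
print(f"Cohen's kappa = {kappa:.2f}")   # 1 = perfect agreement, 0 = chance level
```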
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing errors, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies investigated in this study included simulation results of the process-based Soil Water Assessment Tool (SWAT) model as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
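A hedged sketch of the per-scale evaluation idea: an à trous-style decomposition with a B3-spline filter (zero padding at the boundaries) and a Nash-Sutcliffe efficiency computed on each detail series. The published MNSC/MNRMSE definitions may differ in detail, and the series below are synthetic.

```python
import numpy as np

# Hedged sketch: per-scale Nash-Sutcliffe efficiency (NSE) using a simple
# a-trous-style decomposition with a B3-spline filter and zero padding at
# the boundaries.

def a_trous(x, levels):
    """Return the detail series d_1..d_levels and the final smooth component."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = x.astype(float)
    details = []
    for j in range(levels):
        # dilate the filter by inserting 2**j - 1 zeros between taps
        h_j = np.zeros((len(h) - 1) * 2**j + 1)
        h_j[:: 2**j] = h
        c_next = np.convolve(c, h_j, mode="same")
        details.append(c - c_next)
        c = c_next
    return details, c

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
observed = np.sin(np.linspace(0, 20, 512)) + 0.3 * rng.standard_normal(512)
simulated = observed + 0.2 * rng.standard_normal(512)   # hypothetical model output

obs_d, _ = a_trous(observed, levels=4)
sim_d, _ = a_trous(simulated, levels=4)
for j, (od, sd) in enumerate(zip(obs_d, sim_d), start=1):
    print(f"scale 2^{j}: NSE = {nse(od, sd):.3f}")
```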
Terrestrial photovoltaic cell process testing
NASA Technical Reports Server (NTRS)
Burger, D. R.
1985-01-01
The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
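A simplified sketch of the error-curve idea using a 1-D global gamma comparison: a known dose-scaling error is induced and the passing rate is tracked for two criteria as the error grows. The profiles, criteria and error model below are illustrative, not the ArcCHECK data of the study.

```python
import numpy as np

# Hedged sketch: induce a simple MU-like dose-scaling error, compute a 1-D
# global gamma passing rate for several criteria, and observe how quickly the
# passing rate falls with error magnitude.

def gamma_pass_rate(ref, meas, x, dose_crit, dta_mm, threshold=0.10):
    norm = ref.max()
    keep = ref >= threshold * norm                 # low-dose threshold
    passing = []
    for i in np.where(keep)[0]:
        dose_term = (meas - ref[i]) / (dose_crit * norm)
        dist_term = (x - x[i]) / dta_mm
        gamma = np.sqrt(dose_term**2 + dist_term**2).min()
        passing.append(gamma <= 1.0)
    return 100.0 * np.mean(passing)

x = np.linspace(-50, 50, 201)                      # mm
reference = np.exp(-(x / 20.0) ** 2)               # toy dose profile

for error in [0.00, 0.03, 0.05, 0.10]:             # induced dose-scaling errors
    measured = reference * (1.0 + error)
    for crit in [(0.03, 3.0), (0.02, 2.0)]:        # (%diff, DTA in mm)
        rate = gamma_pass_rate(reference, measured, x, *crit, threshold=0.10)
        print(f"error {error:4.0%}  {crit[0]:.0%}/{crit[1]:.0f}mm: {rate:5.1f}% passing")
```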
Ivan Perez, S; Bernal, Valeria; Gonzalez, Paula N
2006-01-01
Over the last decade, geometric morphometric methods have been applied increasingly to the study of human form. When too few landmarks are available, outlines can be digitized as series of discrete points. The individual points must be slid along a tangential direction so as to remove tangential variation, because contours should be homologous from subject to subject whereas their individual points need not. This variation can be removed by minimizing either bending energy (BE) or Procrustes distance (D) with respect to a mean reference form. Because these two criteria make different assumptions, it becomes necessary to study how these differences modify the results obtained. We performed bootstrapped-based Goodall's F-test, Foote's measurement, principal component (PC) and discriminant function analyses on human molars and craniometric data to compare the results obtained by the two criteria. Results show that: (1) F-scores and P-values were similar for both criteria; (2) results of Foote's measurement show that both criteria yield different estimates of within- and between-sample variation; (3) there is low correlation between the first PC axes obtained by D and BE; (4) the percentage of correct classification is similar for BE and D, but the ordination of groups along discriminant scores differs between them. The differences between criteria can alter the results when morphological variation in the sample is small, as in the analysis of modern human populations. PMID:16761977
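For the Procrustes-distance criterion discussed above, a minimal superimposition sketch using SciPy on two hypothetical 2-D landmark configurations (the bending-energy criterion and the semilandmark sliding procedure itself are not shown):

```python
import numpy as np
from scipy.spatial import procrustes

# Sketch: Procrustes superimposition of two hypothetical 2-D landmark
# configurations; `disparity` is the sum of squared differences after
# translation, scaling and rotation have been removed.

ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 1.5]])
target = ref * 1.2 + 0.05 * np.random.default_rng(2).standard_normal(ref.shape)

mtx1, mtx2, disparity = procrustes(ref, target)
print(f"Procrustes disparity: {disparity:.4f}")
```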
Boosting instance prototypes to detect local dermoscopic features.
Situ, Ning; Yuan, Xiaojing; Zouridakis, George
2010-01-01
Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
The effectiveness of strategies to change organisational culture to improve healthcare performance.
Parmelli, Elena; Flodgren, Gerd; Schaafsma, Mary Ellen; Baillie, Nick; Beyer, Fiona R; Eccles, Martin P
2011-01-19
Organisational culture is an anthropological metaphor used to inform research and consultancy and to explain organisational environments. Great emphasis has been placed in recent years on the need to change organisational culture in order to pursue effective improvement of healthcare performance. However, the precise nature of organisational culture in healthcare policy often remains underspecified and the desirability and feasibility of the strategies to be adopted have been called into question. To determine the effectiveness of strategies to change organisational culture in order to improve healthcare performance. To examine the effectiveness of these strategies according to different patterns of organisational culture. We searched the following electronic databases for primary studies: The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, Sociological Abstracts, Web of Knowledge, PsycINFO, Business and Management, EThOS, Index to Theses, Intute, HMIC, SIGLE, and Scopus until October 2009. The Database of Abstracts of Reviews of Effectiveness (DARE) was searched for related reviews. We also searched the reference lists of all papers and relevant reviews identified, and we contacted experts in the field for advice on further potential studies. We considered randomised controlled trials (RCTs) or well designed quasi-experimental studies, controlled clinical trials (CCTs), controlled before and after studies (CBAs) and interrupted time series analyses (ITS) meeting the quality criteria used by the Cochrane Effective Practice and Organisation of Care Group (EPOC). Studies should be set in any type of healthcare organisation in which strategies to change organisational culture in order to improve healthcare performance were applied. Our main outcomes were objective measures of professional performance and patient outcome. At least two review authors independently applied the inclusion and exclusion criteria to scan titles and abstracts and then to screen the full reports of selected citations. At each stage results were compared and discrepancies resolved through discussion. The search strategy yielded 4239 records. After the full text assessment, no studies met the quality criteria used by the EPOC Group and evaluated the effectiveness of strategies to change organisational culture to improve healthcare performance. It is not possible to draw any conclusions about the effectiveness of strategies to change organisational culture because we found no studies that fulfilled the methodological criteria for this review. Research efforts should focus on strengthening the evidence about the effectiveness of methods to change organisational culture to improve health care performance.
ERIC Educational Resources Information Center
Zhou, P.; Ang, B. W.
2009-01-01
Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…
40 CFR Appendix A to Subpart Wwww... - Test Method for Determining Vapor Suppressant Effectiveness
Code of Federal Regulations, 2014 CFR
2014-07-01
...-alone test for emissions determination. This test is designed to evaluate the performance of film... production. This comparative test quantifies the loss of volatiles from a fiberglass reinforced laminate...-suppressed resins. 11.5Data Acceptance Criteria: 11.5.1A test set is designed as twelve individual test runs...
40 CFR Appendix A to Subpart Wwww... - Test Method for Determining Vapor Suppressant Effectiveness
Code of Federal Regulations, 2013 CFR
2013-07-01
...-alone test for emissions determination. This test is designed to evaluate the performance of film... production. This comparative test quantifies the loss of volatiles from a fiberglass reinforced laminate...-suppressed resins. 11.5Data Acceptance Criteria: 11.5.1A test set is designed as twelve individual test runs...
40 CFR Appendix A to Subpart Wwww... - Test Method for Determining Vapor Suppressant Effectiveness
Code of Federal Regulations, 2011 CFR
2011-07-01
... test for emissions determination. This test is designed to evaluate the performance of film forming... production. This comparative test quantifies the loss of volatiles from a fiberglass reinforced laminate...-suppressed resins. 11.5Data Acceptance Criteria: 11.5.1A test set is designed as twelve individual test runs...
40 CFR Appendix A to Subpart Wwww... - Test Method for Determining Vapor Suppressant Effectiveness
Code of Federal Regulations, 2010 CFR
2010-07-01
... test for emissions determination. This test is designed to evaluate the performance of film forming... production. This comparative test quantifies the loss of volatiles from a fiberglass reinforced laminate...-suppressed resins. 11.5Data Acceptance Criteria: 11.5.1A test set is designed as twelve individual test runs...
40 CFR Appendix A to Subpart Wwww... - Test Method for Determining Vapor Suppressant Effectiveness
Code of Federal Regulations, 2012 CFR
2012-07-01
...-alone test for emissions determination. This test is designed to evaluate the performance of film... production. This comparative test quantifies the loss of volatiles from a fiberglass reinforced laminate...-suppressed resins. 11.5Data Acceptance Criteria: 11.5.1A test set is designed as twelve individual test runs...
10 CFR 963.16 - Postclosure suitability evaluation method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... radionuclide concentrations in the case where there is no human intrusion into the repository. DOE will model... where there is a human intrusion as specified by 10 CFR 63.322. DOE will model the performance of the... criteria in § 963.17. If required by applicable NRC regulations regarding a human intrusion standard, § 63...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2010 CFR
2010-10-01
... results of the application of safety design principles as noted in Appendix C to this part. The MTTHE is... fault/failure analysis must be based on the assessment of the design and implementation of all safety... associated device drivers, as well as historical performance data, analytical methods and experimental safety...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2013 CFR
2013-10-01
... results of the application of safety design principles as noted in Appendix C to this part. The MTTHE is... fault/failure analysis must be based on the assessment of the design and implementation of all safety... associated device drivers, as well as historical performance data, analytical methods and experimental safety...
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are the ML methods. 12 simulation experiments are performed, while each of them uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics, that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
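A hedged sketch of the experimental idea, assuming a single simulated AR(1) series of 310 values split into a 300-value fitting set and a 10-value testing set, with a naive last-value forecast compared against a random forest on lagged values; the 12 experiments, 20 methods and 18 metrics of the study are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch: simulate an AR(1) series, keep the last 10 points as a
# testing set, and compare a naive "last value" forecast with a random forest
# trained on lagged values, iterated one step at a time.

rng = np.random.default_rng(0)
n, phi = 310, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

fit, test = x[:300], x[300:]

lags = 5
X = np.array([fit[i - lags:i] for i in range(lags, len(fit))])
y = fit[lags:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

history = list(fit[-lags:])
rf_forecast = []
for _ in range(len(test)):
    pred = rf.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    rf_forecast.append(pred)
    history.append(pred)

naive_forecast = np.full(len(test), fit[-1])

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

print(f"RMSE naive: {rmse(test, naive_forecast):.3f}")
print(f"RMSE RF:    {rmse(test, rf_forecast):.3f}")
```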
Bowe, Sarah N; Laury, Adrienne M; Gray, Stacey T
2017-06-01
Objective This systematic review aims to evaluate which applicant characteristics available to an otolaryngology selection committee are associated with future performance in residency or practice. Data Sources PubMed, Scopus, ERIC, Health Business, Psychology and Behavioral Sciences Collection, and SocINDEX. Review Methods Study eligibility was performed by 2 independent investigators in accordance with the PRISMA protocol (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Data obtained from each article included research questions, study design, predictors, outcomes, statistical analysis, and results/findings. Study bias was assessed with the Quality in Prognosis Studies tool. Results The initial search identified 439 abstracts. Six articles fulfilled all inclusion and exclusion criteria. All studies were retrospective cohort studies (level 4). Overall, the studies yielded relatively few criteria that correlated with residency success, with generally conflicting results. Most studies were found to have a high risk of bias. Conclusion Previous resident selection research has lacked a theoretical background, thus predisposing this work to inconsistent results and high risk of bias. The included studies provide historical insight into the predictors and criteria (eg, outcomes) previously deemed pertinent by the otolaryngology field. Additional research is needed, possibly integrating aspects of personnel selection, to engage in an evidence-based approach to identify highly qualified candidates who will succeed as future otolaryngologists.
ERIC Educational Resources Information Center
Blom, Diana; Encarnacao, John
2012-01-01
The study investigates criteria chosen by music students for peer and self assessment of both the rehearsal process and performance outcome of their rock groups. The student-chosen criteria and their explanations of these criteria were analysed in relation to Birkett's skills taxonomy of "soft" and "hard" skills. In the rehearsal process, students…
Cervical vertebral maturation as a biologic indicator of skeletal maturity.
Santiago, Rodrigo César; de Miranda Costa, Luiz Felipe; Vitral, Robert Willer Farinazzo; Fraga, Marcelo Reis; Bolognese, Ana Maria; Maia, Lucianne Cople
2012-11-01
To identify and review the literature regarding the reliability of cervical vertebrae maturation (CVM) staging to predict the pubertal spurt. The selection criteria included cross-sectional and longitudinal descriptive studies in humans that evaluated qualitatively or quantitatively the accuracy and reproducibility of the CVM method on lateral cephalometric radiographs, as well as the correlation with a standard method established by hand-wrist radiographs. The searches retrieved 343 unique citations. Twenty-three studies met the inclusion criteria. Six articles had moderate to high scores, while 17 of 23 had low scores. Analysis also showed a moderate to high statistically significant correlation between CVM and hand-wrist maturation methods. There was a moderate to high reproducibility of the CVM method, and only one specific study investigated the accuracy of the CVM index in detecting peak pubertal growth. This systematic review has shown that the studies on CVM method for radiographic assessment of skeletal maturation stages suffer from serious methodological failures. Better-designed studies with adequate accuracy, reproducibility, and correlation analysis, including studies with appropriate sensitivity-specificity analysis, should be performed.
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations can be extended to two famous penalized likelihood methods as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2log BF . In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by Ando and Tsay criterion where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above mentioned criteria are global summary measures of model performance, but more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets by four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). Then we illustrate the results on their performance evaluated by Bayes Factor, predictive information criteria and retrospective predictive analysis.
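Two of the classical explanation-oriented criteria mentioned above have simple closed forms; the sketch below evaluates them for a hypothetical model fit.

```python
import numpy as np

# AIC = 2k - 2 log L and BIC = k log n - 2 log L, where k is the number of
# free parameters, n the number of observations and log L the maximized
# log-likelihood. The numbers below are hypothetical.

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * np.log(n) - 2 * log_likelihood

log_l, k, n = -512.3, 4, 200      # hypothetical values for one candidate model
print(f"AIC = {aic(log_l, k):.1f}, BIC = {bic(log_l, k, n):.1f}")
```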
Detection of Operator Performance Breakdown as an Automation Triggering Mechanism
NASA Technical Reports Server (NTRS)
Yoo, Hyo-Sang; Lee, Paul U.; Landry, Steven J.
2015-01-01
Performance breakdown (PB) has been anecdotally described as a state where the human operator "loses control of context" and "cannot maintain required task performance." Preventing such a decline in performance is critical to assure the safety and reliability of human-integrated systems, and therefore PB could be useful as a point at which automation can be applied to support human performance. However, PB has never been scientifically defined or empirically demonstrated. Moreover, there is no validated objective way of detecting such a state or the transition to that state. The purpose of this work is: 1) to empirically demonstrate a PB state, and 2) to develop an objective way of detecting such a state. This paper defines PB and proposes an objective method for its detection. A human-in-the-loop study was conducted: 1) to demonstrate PB by increasing workload until the subject reported being in a state of PB, and 2) to identify possible parameters of a detection method for objectively identifying the subjectively-reported PB point, and 3) to determine if the parameters are idiosyncratic to an individual/context or are more generally applicable. In the experiment, fifteen participants were asked to manage three concurrent tasks (one primary and two secondary) for 18 minutes. The difficulty of the primary task was manipulated over time to induce PB while the difficulty of the secondary tasks remained static. The participants' task performance data were collected. Three hypotheses were constructed: 1) increasing workload will induce subjectively-identified PB, 2) there exist criteria that identify the threshold parameters that best match the subjectively-identified PB point, and 3) the criteria for choosing the threshold parameters are consistent across individuals. The results show that increasing workload can induce subjectively-identified PB, although it might not be generalizable, as only 12 out of 15 participants declared PB. The PB detection method based on signal detection analysis was applied to the performance data and the results showed that PB can be identified using the method, particularly when the values of the parameters for the detection method were calibrated individually.
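A hedged sketch of a threshold-based detector in the spirit of the objective detection method described above (the study's actual signal detection analysis is not reproduced): sweep candidate thresholds on a noisy, degrading performance score and compare the detected onset with a subjectively declared PB time; all values are hypothetical.

```python
import numpy as np

# Sketch: detect performance breakdown (PB) by sweeping thresholds on a
# degrading task-performance score and comparing the detected onset against
# a subjectively declared PB time. Scores, thresholds and PB time are made up.

rng = np.random.default_rng(0)
t = np.arange(18 * 60)                               # seconds in an 18-minute run
score = 0.9 - 0.0006 * np.maximum(t - 600, 0) + 0.03 * rng.standard_normal(t.size)
declared_pb = 900                                    # subjectively declared PB at 15 min

for threshold in [0.8, 0.7, 0.6]:
    below = np.where(score < threshold)[0]
    detected = below[0] if below.size else None
    offset = None if detected is None else detected - declared_pb
    print(f"threshold {threshold:.1f}: detected at {detected}, offset {offset} s")
```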
Van der Fels-Klerx, H J; Van Asselt, E D; Raley, M; Poulsen, M; Korsgaard, H; Bredsdorff, L; Nauta, M; D'agostino, M; Coles, D; Marvin, H J P; Frewer, L J
2018-01-22
This study aimed to critically review methods for ranking risks related to food safety and dietary hazards on the basis of their anticipated human health impacts. A literature review was performed to identify and characterize methods for risk ranking from the fields of food, environmental science and socio-economic sciences. The review used a predefined search protocol, and covered the bibliographic databases Scopus, CAB Abstracts, Web of Science, and PubMed over the period 1993-2013. All references deemed relevant, on the basis of predefined evaluation criteria, were included in the review, and the risk ranking method characterized. The methods were then clustered, based on their characteristics, into eleven method categories. These categories included: risk assessment, comparative risk assessment, risk ratio method, scoring method, cost of illness, health adjusted life years (HALY), multi-criteria decision analysis, risk matrix, flow charts/decision trees, stated preference techniques and expert synthesis. Method categories were described by their characteristics, weaknesses and strengths, data resources, and fields of applications. It was concluded there is no single best method for risk ranking. The method to be used should be selected on the basis of risk manager/assessor requirements, data availability, and the characteristics of the method. Recommendations for future use and application are provided.
Chandnani, Sonia R.; Ramakrishna, C. D.; Dave, Bhargav A.; Kothavade, Pankaj S.
2017-01-01
Introduction The performance of a Blood Glucose Monitoring System (BGMS) is critical, as the information provided by the system guides the patient or health care professional in making treatment decisions. However, besides evaluating the accuracy of a BGMS in the laboratory setting, it is equally important that the intended users (healthcare professionals and patients) be able to achieve blood glucose measurements with a similarly high level of accuracy. Aim To assess the performance of the EXIMO™ (Meril Diagnostics Pvt. Ltd., Vapi, Gujarat, India) BGMS as per the International Organization for Standardization (ISO) 15197:2013 section 8 user performance criteria. Materials and Methods This was a non-randomized, post-marketing study conducted at a tertiary care centre in India. A total of 1005 patients with diabetes themselves performed fingertip blood glucose measurement using the EXIMO™ BGMS. Immediately after capillary blood glucose measurement using the blood glucose monitoring system, a venous blood sample from each patient was obtained by a trained technician and assessed by the reference laboratory method, Cobas Integra 400 plus (Roche Instrument Centre, Rotkreuz, Switzerland). All blood glucose measurements assessed by EXIMO™ were compared with laboratory results. Performance of the system was assessed as per ISO 15197:2013 criteria using Bland-Altman plots, the Parkes Consensus Error Grid (CEG) and Surveillance Error Grid (SEG) analyses. Results A total of 1005 patients participated in the study. The average age of the patients was 44.93±14.65 years. Evaluation of capillary fingertip blood glucose measurements demonstrated that 95.82% of measurements fulfilled the ISO 15197:2013 section 8 user performance criteria. All results lay within the clinically non-critical zones, Zone A (99.47%; n=1000) and Zone B (0.53%; n=5), of the CEG analysis. As per the SEG analysis, the majority of results fell within the "no-risk" zone (risk score 0 to 0.5; 90.42%). Conclusion The results of the study confirm that intended users are able to obtain accurate glucose measurements when operating the EXIMO™ BGMS in clinical practice, given only the instructions and training materials routinely provided with the system. PMID:28658800
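As a reminder of what the Bland-Altman analysis reports, the minimal sketch below computes the bias and 95% limits of agreement between paired meter and laboratory values; the numbers and variable names are illustrative, not data from the study.

import numpy as np

def bland_altman(meter, reference):
    """Return bias and 95% limits of agreement between two paired methods."""
    meter, reference = np.asarray(meter, float), np.asarray(reference, float)
    diff = meter - reference              # per-sample difference
    bias = diff.mean()                    # systematic offset between methods
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative paired glucose values (mg/dL)
meter_readings = [102, 130, 95, 210, 160]
lab_values = [100, 135, 98, 205, 150]
bias, (lo, hi) = bland_altman(meter_readings, lab_values)
print(f"bias = {bias:.1f} mg/dL, limits of agreement = ({lo:.1f}, {hi:.1f})")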
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Among the methods, two main groups are specified: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling were also described. The overall procedure of the evaluation process in production scheduling was presented. It takes into account the actions in the whole scheduling process and the participation of a human decision maker (HDM). The specified HDM decisions are related to creating and editing a set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the searching process, using informal criteria and making final changes in the schedule for implementation. Depending on need, process scheduling may be completely or partially automated: full automation is possible in the case of a metacriterion-based objective function, whereas if a Pareto set is used, the final decision has to be made by the HDM.
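The abstract mentions Pareto-set methods without giving an algorithm; the short sketch below extracts the non-dominated (Pareto) set of candidate schedules under two minimized criteria. The criterion names (makespan, tardiness) and the candidate values are illustrative assumptions, not taken from the paper.

def pareto_set(schedules):
    """Return schedules not dominated by any other; each schedule is a dict
    with criteria values to be minimized (keys here are illustrative)."""
    def dominates(a, b):
        keys = ("makespan", "tardiness")
        return (all(a[k] <= b[k] for k in keys)
                and any(a[k] < b[k] for k in keys))
    return [s for s in schedules
            if not any(dominates(o, s) for o in schedules if o is not s)]

candidates = [
    {"id": "S1", "makespan": 42, "tardiness": 5},
    {"id": "S2", "makespan": 40, "tardiness": 9},
    {"id": "S3", "makespan": 45, "tardiness": 4},
    {"id": "S4", "makespan": 44, "tardiness": 6},  # dominated by S1
]
print([s["id"] for s in pareto_set(candidates)])   # ['S1', 'S2', 'S3']

With a metacriterion-based approach, the same candidates would instead be collapsed to a single score (for example, a weighted distance from an ideal point) and the best one could be chosen automatically, which is the fully automated case described above.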
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate a proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. From the top-performing methods, we take the union of the resulting variables, based on a user-defined percentage of variants selected from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and Gradient boosting, work best under other simulation criteria. Thus, using a collective approach proves to be more beneficial for selecting variables with epistatic effects, even in low-effect-size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). In this study, we showed via simulation studies that selecting variables using a collective feature selection approach helps select true-positive epistatic variables more frequently than applying any single method for feature selection. We demonstrated the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
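A collective selection of this kind reduces, in code, to taking the union of the top-ranked features from each method; the sketch below assumes each method returns a ranked feature list, and all method and feature names are illustrative.

def collective_select(rankings, top_fraction=0.01):
    """Union of the top `top_fraction` of features from each method's ranking.
    `rankings` maps a method name to a list of features, best first."""
    selected = set()
    for method, ranked in rankings.items():
        k = max(1, int(len(ranked) * top_fraction))
        selected.update(ranked[:k])
    return selected

# illustrative rankings from three hypothetical selectors
rankings = {
    "MDR":      ["snp7", "snp3", "snp9", "snp1"],
    "Ranger":   ["snp3", "snp5", "snp7", "snp2"],
    "Boosting": ["snp5", "snp7", "snp3", "snp8"],
}
print(collective_select(rankings, top_fraction=0.5))  # -> {'snp3', 'snp5', 'snp7'} (set order may vary)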
Predicting Performance in Higher Education Using Proximal Predictors.
Niessen, A Susan M; Meijer, Rob R; Tendeiro, Jorge N
2016-01-01
We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed positive significant correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to that of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument to predict academic performance.
An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization
Zhang, Cheng; Wu, Xingli; Wu, Di; Luo, Li; Herrera-Viedma, Enrique
2018-01-01
The tension brought about by sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and the classical ranking method cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organísation, rangement et Synthèse dedonnées relarionnelles, in French) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts’ preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly-related criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further highlighted by a case study concerning the patients’ prioritization. PMID:29673212
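The paper's exact weighting scheme is not reproduced here; as a generic illustration of correlation-aware objective weights (in the spirit of the CRITIC method, which likewise down-weights highly correlated criteria), the sketch below derives weights from a small decision matrix with illustrative scores.

import numpy as np

def correlation_based_weights(decision_matrix):
    """Objective criterion weights in the spirit of the CRITIC method:
    a criterion gets more weight when it varies more across alternatives
    and is less correlated with the other criteria.
    Rows = alternatives, columns = criteria (larger = better)."""
    X = np.asarray(decision_matrix, float)
    # min-max normalize each criterion column
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    std = X.std(axis=0, ddof=1)
    corr = np.corrcoef(X, rowvar=False)
    info = std * (1.0 - corr).sum(axis=0)   # information content per criterion
    return info / info.sum()

# illustrative 4 patients x 3 criteria (severity, urgency, waiting-time scores)
M = [[7, 9, 3],
     [5, 6, 8],
     [9, 8, 2],
     [4, 5, 9]]
print(correlation_based_weights(M).round(3))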
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
Continuous performance measurement in flight systems. [sequential control model
NASA Technical Reports Server (NTRS)
Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.
1975-01-01
The desired response of many man-machine control systems can be formulated as the solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man-machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man-machine performance.
Crew workload-management strategies - A critical factor in system performance
NASA Technical Reports Server (NTRS)
Hart, Sandra G.
1989-01-01
This paper reviews the philosophy and goals of the NASA/USAF Strategic Behavior/Workload Management Program. The philosophical foundation of the program is based on the assumption that an improved understanding of pilot strategies will clarify the complex and inconsistent relationships observed among objective task demands and measures of system performance and pilot workload. The goals are to: (1) develop operationally relevant figures of merit for performance, (2) quantify the effects of strategic behaviors on system performance and pilot workload, (3) identify evaluation criteria for workload measures, and (4) develop methods of improving pilots' abilities to manage workload extremes.
Performance analysis of landslide early warning systems at regional scale: the EDuMaP method
NASA Astrophysics Data System (ADS)
Piciullo, Luca; Calvello, Michele
2016-04-01
Landslide early warning systems (LEWSs) reduce landslide risk by disseminating timely and meaningful warnings when the level of risk is judged intolerably high. Two categories of LEWSs can be defined on the basis of their scale of analysis: "local" systems and "regional" systems. LEWSs at regional scale (ReLEWSs) are used to assess the probability of occurrence of landslides over appropriately defined homogeneous warning zones of relevant extension, typically through the prediction and monitoring of meteorological variables, in order to give generalized warnings to the public. Despite many studies on ReLEWSs, no standard requirements exist for assessing their performance. Empirical evaluations are often carried out by simply analysing the time frames during which significant high-consequence landslides occurred in the test area. Alternatively, the performance evaluation is based on 2x2 contingency tables computed for the joint frequency distribution of landslides and alerts, both considered as dichotomous variables. In all these cases, model performance is assessed neglecting some important aspects that are peculiar to ReLEWSs, among which: the possible occurrence of multiple landslides in the warning zone; the duration of the warnings in relation to the time of occurrence of the landslides; the level of the warning issued in relation to the landslide spatial density in the warning zone; and the relative importance system managers attribute to different types of errors. An original approach, called the EDuMaP method, is proposed to assess the performance of landslide early warning models operating at regional scale. The method is composed of three main phases: events analysis, duration matrix, and performance analysis. The events analysis phase focuses on the definition of landslide events (LEs) and warning events (WEs), which are derived from available landslide and warning databases according to their spatial and temporal characteristics by means of ten input parameters. Evaluating the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes, is a fundamental step in determining the duration matrix elements, while the classification of LEs and WEs establishes the structure of the duration matrix: the number of rows and columns of the matrix is equal to the number of classes defined for the warning and landslide events, respectively. The matrix is therefore not a 2x2 contingency table, and LEs and WEs are not treated as dichotomous variables. The final phase of the method is the evaluation of the duration matrix based on a set of performance criteria assigning a performance meaning to the elements of the matrix. Different criteria can be defined for this purpose, for instance employing an alert classification scheme derived from 2x2 contingency tables or assigning a colour code to the elements of the matrix in relation to their grade of correctness. Finally, performance indicators can be derived from the performance criteria to quantify the successes and errors of the early warning models. EDuMaP has already been applied to different real case studies, highlighting the adaptability of the method to analyse the performance of structurally different ReLEWSs.
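As a rough illustration of how a duration matrix differs from a 2x2 contingency table, the sketch below accumulates, over discretized time steps, the time spent in each (warning class, landslide-event class) pair. This is a simplified reading with hypothetical class codes and time steps, not the full ten-parameter EDuMaP formulation.

import numpy as np

def duration_matrix(warning_level, landslide_class, dt_hours=1.0,
                    n_warning_classes=4, n_landslide_classes=4):
    """Accumulate, for every (warning class, landslide-event class) pair, the
    total time during which that warning level was active while that landslide
    class was recorded. Inputs are per-time-step class codes (0 = lowest/none);
    a simplified reading of the duration matrix, not the EDuMaP parameterization."""
    D = np.zeros((n_warning_classes, n_landslide_classes))
    for w, l in zip(warning_level, landslide_class):
        D[w, l] += dt_hours
    return D

# illustrative 12 hourly time steps
warnings_series = [0, 0, 1, 2, 2, 3, 3, 2, 1, 0, 0, 0]
landslide_series = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
print(duration_matrix(warnings_series, landslide_series))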
Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development
Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei
2014-01-01
Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: compatibility with LC/MS (free of detergents, etc.); high protein integrity (minimal level of protein degradation and non-biological PTMs); compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling; and lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, with primary use for sample preparation method development and optimization, and pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring.
Eysenbach, Gunther; Powell, John; Kuss, Oliver; Sa, Eun-Ryoung
The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed. Our objectives were to establish a methodological framework for how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research. We searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsycINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search. We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria. Two reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and the quality and rigor of study methods and reporting. The most frequently used quality criteria include accuracy, completeness, readability, design, disclosures, and references provided. Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P =.02) and evaluation (P =.04) methods. Due to differences in study methods and rigor, quality criteria, study population, and topics chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.
Boutkhoum, Omar; Hanine, Mohamed; Agouti, Tarik; Tikniouine, Abdessadek
2015-01-01
In this paper, we examine the issue of strategic industrial location selection in uncertain decision-making environments for establishing a new industrial corporation. The industrial location issue is typically considered a crucial factor in business research, as it involves many considerations regarding natural resources, distributors, suppliers, customers, and other factors. Based on the integration of the environmental, economic and social decisive elements of sustainable development, this paper presents a hybrid decision-making model combining fuzzy multi-criteria analysis with the analytical capabilities that OLAP systems can provide for successful and optimal industrial location selection. The proposed model consists of three stages. In the first stage, a decision-making committee is established to identify the evaluation criteria impacting the location selection process. In the second stage, we develop fuzzy AHP software based on the extent analysis method to assign importance weights to the selected criteria, which allows us to model linguistic vagueness, ambiguity, and incomplete knowledge. In the last stage, OLAP analysis integrated with multi-criteria analysis employs these weighted criteria as inputs to evaluate, rank and select the strategic industrial location for establishing a new business corporation in the region of Casablanca, Morocco. Finally, a sensitivity analysis is performed to evaluate the impact of the criteria weights and the preferences given by decision makers on the final rankings of the strategic industrial locations.
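For orientation, the sketch below computes criterion weights from a single pairwise comparison matrix using the geometric-mean approximation of classic (crisp) AHP; the paper itself uses fuzzy AHP with extent analysis, so this is only a simplified stand-in, and the comparison values are illustrative.

import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from a pairwise comparison matrix using the
    geometric-mean (row) approximation of classic AHP; the fuzzy extent
    analysis used in the paper is only approximated by this crisp version."""
    A = np.asarray(pairwise, float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])
    return gm / gm.sum()

# illustrative comparisons among three criteria:
# economic vs environmental vs social
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A).round(3))   # approximately [0.65, 0.23, 0.12]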
Tumor volumetric measurements in surgically inaccessible pediatric low-grade glioma.
Kilday, John-Paul; Branson, Helen; Rockel, Conrad; Laughlin, Suzanne; Mabbott, Donald; Bouffet, Eric; Bartels, Ute
2015-01-01
Tumor measurement is important in unresectable pediatric low-grade gliomas (pLGGs) to determine either the need for treatment or to assess response. Standard methods measure the product of the largest 2 lengths from the transverse, anterior-posterior, and cranio-caudal dimensions (SM, cm²). This single-institution study evaluated tumor volume measurements (VM, cm³) in such pLGGs. Of 50 patients treated with chemotherapy for surgically inaccessible pLGG, 8 met the inclusion criteria of having 2 or more sequential MRI studies of T1-weighted Fast-Spoiled Gradient Recalled acquisition. SM and VM were performed by 2 independent neuroradiologists. Associations of the measurement methods with defined therapeutic response criteria and patient clinical status were assessed. The mean tumor size at the first MRI scan was 20 cm² and 398 cm³ according to SM and VM, respectively. VM results did not differ significantly from SM-derived spherical volume calculations (Pearson correlation, P<0.0001) with a high interrater reliability. Both methods were concordant in defining tumor response according to the current criteria, although radiologic progressive disease was not associated with clinical status (SM: P=0.491, VM: P=0.208). In this limited experience, volumetric analysis of unresectable pLGGs did not seem superior to the standard linear measurements for defining tumor response.
NL(q) Theory: A Neural Control Framework with Global Asymptotic Stability Criteria.
Vandewalle, Joos; De Moor, Bart L.R.; Suykens, Johan A.K.
1997-06-01
In this paper a framework for model-based neural control design is presented, consisting of nonlinear state space models and controllers, parametrized by multilayer feedforward neural networks. The models and closed-loop systems are transformed into the so-called NL(q) system form. NL(q) systems represent a large class of nonlinear dynamical systems consisting of q layers with alternating linear and static nonlinear operators that satisfy a sector condition. For such NL(q) systems, sufficient conditions for global asymptotic stability, input/output stability (dissipativity with finite L2-gain) and robust stability and performance are presented. The stability criteria are expressed as linear matrix inequalities. In the analysis problem it is shown how the stability of a given controller can be checked. In the synthesis problem two methods for neural control design are discussed. In the first method, Narendra's dynamic backpropagation for tracking on a set of specific reference inputs is modified with an NL(q) stability constraint in order to ensure, e.g., closed-loop stability. In the second method, control design is done without tracking on specific reference inputs, but based on the input/output stability criteria themselves, within a standard plant framework, as is done, for example, in H∞ control theory and μ theory. Copyright 1997 Elsevier Science Ltd.
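The NL(q) stability conditions themselves are more involved than can be shown here; purely as an illustration of how a stability criterion expressed as a linear matrix inequality is checked numerically, the sketch below tests the standard Lyapunov LMI for a linear system using cvxpy (assumed installed with an SDP-capable solver). The system matrix is illustrative.

import numpy as np
import cvxpy as cp

# Minimal LMI feasibility check: the linear system x' = A x is globally
# asymptotically stable iff there exists P > 0 with A'P + PA < 0.
# (The NL(q) criteria in the paper are richer LMIs; this only illustrates
# how such conditions are checked numerically.)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # illustrative stable system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("LMI feasible (stable):", problem.status == cp.OPTIMAL)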
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400000 atoms. The crack propagation direction angles are obtained and analyzed for values of the mixity parameter ranging from pure tensile loading to pure shear loading and over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
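The multi-parameter criteria used in the paper are not reproduced here; as a one-term illustration of the classical maximum tangential stress criterion mentioned above, the sketch below numerically finds the kink angle that maximizes the leading-order (Williams first-term) tangential stress for a given mode mix. The mixity parameterization in the loop is an assumption made for the example.

import numpy as np

def mts_kink_angle(K1, K2):
    """Crack deflection angle (radians) from the classical maximum tangential
    stress criterion: maximize the leading-order tangential stress
    sigma_tt ~ cos(t/2) * (K1*cos(t/2)**2 - 1.5*K2*sin(t)) over t."""
    t = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, 20001)
    s_tt = np.cos(t / 2) * (K1 * np.cos(t / 2) ** 2 - 1.5 * K2 * np.sin(t))
    return t[np.argmax(s_tt)]

# assumed mixity parameterization: 1.0 = pure mode I (tension), 0.0 = pure mode II (shear)
for mix in [0.0, 0.25, 0.5, 0.75, 1.0]:
    K2, K1 = np.cos(mix * np.pi / 2), np.sin(mix * np.pi / 2)
    print(f"Me={mix:.2f}  kink angle = {np.degrees(mts_kink_angle(K1, K2)):6.1f} deg")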
NASA Astrophysics Data System (ADS)
Izie Adiana Abidin, Nur; Aminuddin, Eeydzah; Zakaria, Rozana; Mazzuana Shamsuddin, Siti; Sahamir, Shaza Rina; Shahzaib, Jam; Nafis Abas, Darul
2018-04-01
University campus buildings in Higher Learning Institutions (HLIs) involve complex activities and operations, so conserving energy has become of paramount importance. Several efforts are taken by universities to improve their current energy use, such as policy development, education, and the adoption of energy conservation solutions through retrofitting. This paper aims to highlight the importance of the criteria affecting the retrofitting of existing buildings with clean energy in order to achieve a zero energy balance in buildings. The focus is on the development of criteria for solar photovoltaics (solar PV), wind turbines and small-scale hydropower. A questionnaire survey was employed and distributed to green building expert practitioners. Factor analysis, factor scores, and weightage factors were adopted as the methods of analysis in order to produce a final result with weightage outputs for the prioritization and ranking of the relevant criteria. The results help provide stakeholders with an overview of the important criteria that should be considered, especially during decision making on retrofitting existing buildings with clean energy resources. The criteria developed also serve to establish a structured decision-making process and to ensure that the selected decision or alternative achieves the desired outcome.
Education Criteria for Performance Excellence, 2002.
ERIC Educational Resources Information Center
National Inst. of Standards and Technology, Gaithersburg, MD.
The education criteria presented in this guide are designed to help organizations use an integrated approach to organizational performance management that results in delivery of ever-improving value to students and stakeholders. Implementation of the criteria will contribute to improvement of education quality, improvement of overall…
49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...
49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...
49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...
The Airline Quality Rating 1999
NASA Technical Reports Server (NTRS)
Bowen, Brent D.; Headley, Dean E.
1999-01-01
The Airline Quality Rating (AQR) was developed and first announced in early 1991 as an objective method of comparing airline performance on combined multiple criteria. This current report, Airline Quality Rating 1999, reflects an updated approach to calculating monthly Airline Quality Rating scores for 1998. AQR scores for the calendar year 1998 are based on 15 elements that focus on airline performance areas important to air travel consumers. The Airline Quality Rating is a summary of month-by-month quality ratings for the ten major U.S. airlines operating during 1998. Using the Airline Quality Rating system of weighted averages and monthly performance data in the areas of on-time arrivals, involuntary denied boardings, mishandled baggage, and a combination of 12 customer complaint categories, the major airlines' comparative performance for the calendar year 1998 is reported. This research monograph contains a brief summary of the AQR methodology, detailed data and charts that track comparative quality for major airlines' domestic operations for the 12-month period of 1998, and industry average results. Also, comparative Airline Quality Rating data for 1997, using the updated criteria, are included to provide a reference point regarding quality in the industry.
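The published AQR element weights are defined in the methodology monograph and are not reproduced here; the sketch below only illustrates the general structure of a weighted-average quality score in which each element carries a weight and a sign (positive for desirable elements, negative for undesirable ones). The weights, signs, metric names, and monthly values are hypothetical.

def quality_score(metrics, weights):
    """Weighted-average quality score: each metric is multiplied by a weight
    whose sign reflects whether more of it is good (+) or bad (-), then the
    weighted sum is normalized by the total absolute weight.
    Weights and metric names are hypothetical, not the published AQR ones."""
    num = sum(weights[k] * metrics[k] for k in weights)
    den = sum(abs(w) for w in weights.values())
    return num / den

weights = {"on_time_rate": +8.0, "denied_boardings_per_10k": -8.0,
           "mishandled_bags_per_1k": -7.0, "complaints_per_100k": -7.0}
airline_month = {"on_time_rate": 0.81, "denied_boardings_per_10k": 0.9,
                 "mishandled_bags_per_1k": 4.2, "complaints_per_100k": 1.1}
print(round(quality_score(airline_month, weights), 3))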
Adaptive Response Criteria in Road Hazard Detection Among Older Drivers
Feng, Jing; Choi, HeeSun; Craik, Fergus I. M.; Levine, Brian; Moreno, Sylvain; Naglie, Gary; Zhu, Motao
2018-01-01
OBJECTIVES The majority of existing investigations on attention, aging, and driving have focused on the negative impacts of age-related declines in attention on hazard detection and driver performance. However, driving skills and behavioral compensation may accommodate the negative effects that age-related attentional decline places on driving performance. In this study, we examined an important question that has been largely neglected in the literature linking attention, aging, and driving: can top-down factors such as behavioral compensation, specifically adaptive response criteria, accommodate the negative impacts of age-related attention declines on hazard detection during driving? METHODS In the experiment, we used the Drive Aware Task, a task combining the driving context with well-controlled laboratory procedures measuring attention. We compared younger (n = 16, age 21 – 30) and older drivers (n = 21, age 65 – 79) on their attentional processing of hazards in driving scenes, indexed by percentage correct and reaction time of hazard detection, as well as sensitivity and response criterion using signal detection analysis. RESULTS Older drivers, in general, were less accurate and slower on the task than younger drivers. However, results from this experiment also revealed that older, but not younger, drivers adapted their response criteria when the traffic condition changed in the driving scenes. When there was more traffic in the driving scene, older drivers became more liberal in their responses, meaning that they were more likely to report that a driving hazard was detected. CONCLUSIONS Older drivers adopt compensatory strategies for hazard detection during driving. Our findings show that, in the driving context, even at an older age our attentional functions still adapt to environmental conditions. This leads to considerations of potential training methods to promote adaptive strategies, which may help older drivers maintain performance in road hazard detection. PMID:28898116
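For reference, sensitivity (d') and response criterion (c) in a signal detection analysis are computed from hit and false-alarm rates as in the sketch below; the counts are illustrative, and the log-linear correction is one common convention rather than necessarily the one used in the study.

from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response criterion c from a signal detection design.
    A log-linear correction (+0.5 per cell) avoids infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf
    criterion = -(zh + zf) / 2.0     # more negative c = more liberal responding
    return d_prime, criterion

# illustrative counts for one driver in high-traffic scenes
print(sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=38))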
Conveyor Performance based on Motor DC 12 Volt Eg-530ad-2f using K-Means Clustering
NASA Astrophysics Data System (ADS)
Arifin, Zaenal; Artini, Sri DP; Much Ibnu Subroto, Imam
2017-04-01
To produce goods in industry, controlled tools that improve production are required. The separation process has become a part of the production process; it is carried out based on certain criteria to obtain an optimum result. Knowing the performance characteristics of a controlled tool in the separation process also makes it possible to obtain optimum results. Clustering analysis is a popular method for grouping data into smaller segments; it divides a group of objects into k groups whose members are homogeneous or similar, with similarity defined by certain criteria. The work in this paper uses the K-Means method to cluster loading in the performance of a conveyor driven by a 12 volt DC motor EG-530AD-2F. This technique gives complete clustering data for a prototype conveyor, driven by a DC motor, that separates goods by height. The parameters involved are voltage, current, and travelling time. These parameters give two clusters: an optimal cluster with cluster center 10.50 volt, 0.3 Ampere, 10.58 second, and a non-optimal cluster with cluster center 10.88 volt, 0.28 Ampere and 40.43 second.
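A K-Means clustering of such measurements reduces to a few lines with scikit-learn; the sketch below uses made-up (voltage, current, travelling time) rows purely to show the shape of the computation, not the paper's data.

import numpy as np
from sklearn.cluster import KMeans

# Columns: voltage (V), current (A), travelling time (s); values illustrative.
X = np.array([
    [10.5, 0.30, 10.2], [10.6, 0.31, 11.0], [10.4, 0.29, 10.8],
    [10.9, 0.28, 39.5], [10.8, 0.27, 41.2], [10.9, 0.29, 40.6],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("labels:", km.labels_)
print("cluster centers (V, A, s):\n", km.cluster_centers_.round(2))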
Efficiency of polymerization of bulk-fill composite resins: a systematic review.
Reis, André Figueiredo; Vestphal, Mariana; Amaral, Roberto Cesar do; Rodrigues, José Augusto; Roulet, Jean-François; Roscoe, Marina Guimarães
2017-08-28
This systematic review assessed the literature to evaluate the efficiency of polymerization of bulk-fill composite resins at 4 mm restoration depth. The PubMed, Cochrane, Scopus and Web of Science databases were searched with no restrictions on year, publication status, or article language. Selection criteria included studies that evaluated bulk-fill composite resin inserted at a minimum thickness of 4 mm and cured according to the manufacturers' instructions, that presented sound statistical data, and that included a comparison with a control group and/or a reference measurement of the quality of polymerization. The evidence level was evaluated by a qualitative scoring system and classified as high, moderate or low. A total of 534 articles were retrieved in the initial search. After the review process, only 10 full-text articles met the inclusion criteria. Most of the included articles (80%) were classified as high evidence level. Among several techniques, microhardness was the method most frequently performed in the studies included in this systematic review. Irrespective of the in vitro method performed, bulk-fill composites were, in most cases, likely to fulfill the important requirement of properly curing at 4 mm of cavity depth, as measured by depth of cure and/or degree of conversion. In general, low-viscosity bulk-fill composites performed better in terms of polymerization efficiency than high-viscosity ones.
An application of business process method to the clinical efficiency of hospital.
Leu, Jun-Der; Huang, Yu-Tsung
2011-06-01
The concept of Total Quality Management (TQM) has come to be applied in healthcare over the last few years. The process management category in the Baldrige Health Care Criteria for Performance Excellence model is designed to evaluate the quality of medical services. However, a systematic approach to implementation support is necessary to achieve excellence in the healthcare business process. The Architecture of Integrated Information Systems (ARIS) is a business process architecture developed by IDS Scheer AG that has been applied in a variety of industrial applications. It starts with a business strategy to identify the core and support processes, and encompasses the whole life cycle, from business process design to information system deployment, which is compatible with the concept of the healthcare performance excellence criteria. In this research, we apply the basic ARIS framework to optimize the clinical processes of an emergency department in a mid-size hospital with 300 clinical beds while considering the characteristics of the healthcare organization. Implementation of the case is described, and 16 months of clinical data are then collected and used to study the performance and feasibility of the method. The experience gleaned in this case study can be used as a reference for mid-size hospitals with similar business models.
Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P
2011-09-01
The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
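A CUSUM chart for binary surgical outcomes is straightforward to sketch; the version below is the standard one-sided Bernoulli CUSUM between an acceptable failure rate of 20% and an inadequate rate of 40%, as in the study, but the decision limit and the simulated outcome series are illustrative rather than the authors' parameters.

import numpy as np

def bernoulli_cusum(outcomes, p0=0.20, p1=0.40, h=3.5):
    """One-sided Bernoulli CUSUM for a failure rate.
    outcomes: 1 = failure, 0 = success. Accumulates the log-likelihood ratio
    between the unacceptable rate p1 and the acceptable rate p0; an alarm is
    raised when the statistic crosses the decision limit h (value illustrative,
    normally chosen from a target false-alarm rate)."""
    w_fail = np.log(p1 / p0)
    w_success = np.log((1 - p1) / (1 - p0))
    s, path, alarm_at = 0.0, [], None
    for i, y in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_fail if y else w_success))
        path.append(s)
        if alarm_at is None and s >= h:
            alarm_at = i
    return path, alarm_at

rng = np.random.default_rng(1)
outcomes = rng.binomial(1, 0.2, size=200)   # simulated in-control series
_, alarm = bernoulli_cusum(outcomes)
print("alarm raised at procedure:", alarm)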
1983-06-01
Thirty-three in-situ density tests were conducted in the near-surface embankment foundation materials by the sand displacement method. An additional seven densities were obtained from undisturbed samples by the bulk density method. The results of density tests in the foundation are shown on plate
NASA Astrophysics Data System (ADS)
Mitchell, Sarah L.; Ortiz, Michael
2016-09-01
This study utilizes computational topology optimization methods for the systematic design of optimal multifunctional silicon anode structures for lithium-ion batteries. In order to develop next generation high performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such this work considers two design objectives, the first being minimum compliance under design dependent volume expansion, and the second maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the influence of the minimum structural feature size and prescribed volume fraction are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the structural and conduction design criteria. The weighted sum method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. A rigid frame structure was found to be an excellent compromise between the structural and conduction design criteria, providing both the required structural rigidity and direct conduction pathways. The developments and results presented in this work provide a foundation for the informed design and development of silicon anode structures for high performance lithium-ion batteries.
Domènech, Albert; Cortés-Francisco, Nuria; Palacios, Oscar; Franco, José M; Riobó, Pilar; Llerena, José J; Vichi, Stefania; Caixach, Josep
2014-02-07
A multitoxin method has been developed for the quantification and confirmation of lipophilic marine biotoxins in mussels by liquid chromatography coupled to high resolution mass spectrometry (HRMS), using an Orbitrap-Exactive HCD mass spectrometer. Okadaic acid (OA), yessotoxin, azaspiracid-1, gymnodimine, 13-desmethyl spirolide C, pectenotoxin-2 and brevetoxin B were analyzed as representative compounds of each lipophilic toxin group. HRMS identification and confirmation criteria were established. Fragment and isotope ions and ion ratios were studied and evaluated for confirmation purposes. In-depth characterization of the full scan and fragmentation spectra of the main toxins was carried out. Accuracy (trueness and precision), linearity, calibration curve check, limit of quantification (LOQ) and specificity were the parameters established for the method validation. The validation was performed at 0.5 times the current European Union permitted levels. The method performed very well for the parameters investigated. The trueness, expressed as recovery, ranged from 80% to 94%; the precision, expressed as intralaboratory reproducibility, ranged from 5% to 22%; and the LOQs ranged from 0.9 to 4.8 pg on column. The uncertainty of the method was also estimated for OA, using a certified reference material. A top-down approach was used, considering two main contributions: those arising from the trueness studies and those coming from the determination of precision. An overall expanded uncertainty of 38% was obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), the 3%/3 mm criterion with a 10% dose threshold at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, 10% threshold, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
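For readers unfamiliar with the gamma index, the simplified one-dimensional sketch below shows how a global gamma passing rate is computed for a dose-difference/distance-to-agreement criterion and a low-dose threshold; clinical QA software performs the same comparison on finely interpolated 2D/3D dose distributions, and the profiles here are synthetic.

import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03,
                    dist_tol_mm=3.0, low_dose_threshold=0.10):
    """Simplified 1D global gamma analysis.
    For each reference point above the low-dose threshold, gamma is the minimum
    over evaluated points of sqrt((dose diff / dose tol)^2 + (distance / DTA)^2),
    with dose differences normalized to the reference maximum (global gamma)."""
    ref_dose, eval_dose = np.asarray(ref_dose, float), np.asarray(eval_dose, float)
    positions = np.asarray(positions, float)
    d_max = ref_dose.max()
    gammas = []
    for r, x in zip(ref_dose, positions):
        if r < low_dose_threshold * d_max:
            continue                      # excluded by the dose threshold
        dd = (eval_dose - r) / (dose_tol * d_max)
        dx = (positions - x) / dist_tol_mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

x = np.arange(0, 100, 1.0)                       # mm
ref = np.exp(-((x - 50) / 15) ** 2)              # synthetic reference profile
meas = 1.02 * np.exp(-((x - 51) / 15) ** 2)      # 2% output error, 1 mm shift
print(f"passing rate: {gamma_pass_rate(ref, meas, x):.1f}%")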
Winchester, David E; Wolinsky, David; Beyth, Rebecca J; Shaw, Leslee J
2016-05-01
Appropriate use criteria (AUC) assist health care professionals in making decisions about procedures and diagnostic testing. In some cases, multiple AUC exist for a single procedure or test. To date, the extent of agreement between multiple AUC has not been evaluated. To measure discordance between the American College of Cardiology Foundation (ACCF) AUC and the American College of Radiology (ACR) Appropriateness Criteria for gauging the appropriateness of nuclear myocardial perfusion imaging. Retrospective cohort study at an academically affiliated Veterans Affairs medical center. Participants were Veteran patients who underwent nuclear myocardial perfusion imaging between December 2010 and July 2011 with rating of appropriateness by the ACCF and ACR criteria. Analysis was performed in March 2015. The primary outcome was the agreement of appropriateness category as measured by κ statistic. The secondary outcome was a comparison of nuclear myocardial perfusion imaging results and frequency of ischemia across appropriateness categories for the 2 rating methods. Of 67 indications in the ACCF AUC, 35 (52.2%) could not be matched to an ACR rating, 18 (26.9%) had the same appropriateness category, and 14 (20.9%) disagreed on appropriateness. The study cohort comprised 592 individuals. Their mean (SD) age was 62.6 (9.4) years, and 570 of 592 (96.2%) were male. When applied to the patient cohort, 111 patients (18.8%) could not be matched to an ACR rating, 349 patients (59.0%) had the same appropriateness category for the ACR and ACCF methods, and 132 patients (22.3%) were discordant. Overall, the agreement of appropriateness between the 2 methods was poor (κ = 0.34, P < .001). Ischemia was rare among patients rated as "inappropriate" by the ACCF AUC (1 of 39 patients [2.6%]), while ischemia was more common among patients rated as "usually not appropriate" by the ACR Appropriateness Criteria (14 of 80 patients [17.5%]). Substantial discordance may exist between methods for assessing the appropriateness of advanced imaging tests. Discordance in methods may translate into differences in clinically relevant outcomes, such as the detection of myocardial ischemia.
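The κ statistic reported above corrects raw percent agreement for agreement expected by chance; the sketch below shows the computation with scikit-learn on illustrative category labels, not the study data.

from sklearn.metrics import cohen_kappa_score

# Illustrative appropriateness categories assigned by two rating methods to the
# same studies (labels are examples only).
accf = ["appropriate", "appropriate", "uncertain", "inappropriate",
        "appropriate", "uncertain", "inappropriate", "appropriate"]
acr  = ["appropriate", "uncertain", "uncertain", "appropriate",
        "appropriate", "appropriate", "inappropriate", "appropriate"]

# kappa near 0 indicates little agreement beyond chance; near 1, strong agreement
print(round(cohen_kappa_score(accf, acr), 2))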
Sepriano, Alexandre; Rubio, Roxana; Ramiro, Sofia; Landewé, Robert; van der Heijde, Désirée
2017-05-01
To summarise the evidence on the performance of the Assessment of SpondyloArthritis international Society (ASAS) classification criteria for axial spondyloarthritis (axSpA) (also imaging and clinical arm separately), peripheral (p)SpA and the entire set, when tested against the rheumatologist's diagnosis ('reference standard'). A systematic literature review was performed to identify eligible studies. Raw data on SpA diagnosis and classification were extracted or, if necessary, obtained from the authors of the selected publications. A meta-analysis was performed to obtain pooled estimates for sensitivity, specificity, positive and negative likelihood ratios, by fitting random effects models. Nine papers fulfilled the inclusion criteria (N=5739 patients). The entire set of the ASAS SpA criteria yielded a high pooled sensitivity (73%) and specificity (88%). Similarly, good results were found for the axSpA criteria (sensitivity: 82%; specificity: 88%). Splitting the axSpA criteria in 'imaging arm only' and 'clinical arm only' resulted in much lower sensitivity (30% and 23% respectively), but very high specificity was retained (97% and 94% respectively). The pSpA criteria were less often tested than the axSpA criteria and showed a similarly high pooled specificity (87%) but lower sensitivity (63%). Accumulated evidence from studies with more than 5500 patients confirms the good performance of the various ASAS SpA criteria as tested against the rheumatologist's diagnosis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Arnold, W Ray; Warren-Hicks, William J
2007-01-01
The objective of this study was to estimate site- and region-specific dissolved copper criteria for a large embayment, the Chesapeake Bay, USA. The intent is to show the utility of 2 copper saltwater quality site-specific criteria estimation models and associated region-specific criteria selection methods. The criteria estimation models and selection methods are simple, efficient, and cost-effective tools for resource managers. The methods are proposed as potential substitutes for the US Environmental Protection Agency's water effect ratio methods. Dissolved organic carbon data and the copper criteria models were used to produce probability-based estimates of site-specific copper saltwater quality criteria. Site- and date-specific criteria estimations were made for 88 sites (n = 5,296) in the Chesapeake Bay. The average and range of estimated site-specific chronic dissolved copper criteria for the Chesapeake Bay were 7.5 and 5.3 to 16.9 μg Cu/L. The average and range of estimated site-specific acute dissolved copper criteria for the Chesapeake Bay were 11.7 and 8.3 to 26.4 μg Cu/L. The results suggest that the applicable national and state copper criteria can increase in much of the Chesapeake Bay and remain protective. Virginia Department of Environmental Quality copper criteria near the mouth of the Chesapeake Bay, however, need to decrease to protect species of equal or greater sensitivity to that of the marine mussel, Mytilus sp.
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
Detection of circuit-board components with an adaptive multiclass correlation filter
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Kober, Vitaly
2008-08-01
A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.
Anismus as a cause of functional constipation--experience from Serbia.
Jovanović, Igor; Jovanović, Dragana; Uglješić, Milenko; Milinić, Nikola; Cvetković, Mirjana; Branković, Marija; Nikolić, Goran
2015-01-01
BACKGROUND/AIM: Anismus is a paradoxical pressure increase, or a pressure decrease of less than 20%, of the external anal sphincter during defecation straining. This study analyzed the presence of anismus within a group of patients meeting the Rome III criteria for functional constipation, using anorectal manometry as the method for determining anismus. We performed anorectal water-perfused manometry in 60 patients with obstructive defecation defined by the Rome III criteria for functional constipation. We also analyzed anorectal function in 30 healthy subjects. The presence of anismus was more frequent in the group of patients with obstructive defecation compared to the control group (a highly statistically significant difference, p < 0.01). Furthermore, we found that the Rome III criteria for functional constipation showed 90% accuracy in predicting obstructive defecation. We analyzed the correlation of anismus with the presence of a weak external anal sphincter, rectal sensibility disorders, enlarged piles, diverticular disease and anatomic variations of the colon, and found no correlation in any of these cases. There is a significant correlation between anismus and positive Rome III criteria for functional constipation. Anorectal manometry should be performed in all patients meeting the Rome III criteria for functional constipation.
Lundberg, Ingrid E; Tjärnlund, Anna; Bottai, Matteo; Werth, Victoria P; Pilkington, Clarissa; Visser, Marianne de; Alfredsson, Lars; Amato, Anthony A; Barohn, Richard J; Liang, Matthew H; Singh, Jasvinder A; Aggarwal, Rohit; Arnardottir, Snjolaug; Chinoy, Hector; Cooper, Robert G; Dankó, Katalin; Dimachkie, Mazen M; Feldman, Brian M; Torre, Ignacio Garcia-De La; Gordon, Patrick; Hayashi, Taichi; Katz, James D; Kohsaka, Hitoshi; Lachenbruch, Peter A; Lang, Bianca A; Li, Yuhui; Oddis, Chester V; Olesinska, Marzena; Reed, Ann M; Rutkowska-Sak, Lidia; Sanner, Helga; Selva-O'Callaghan, Albert; Song, Yeong-Wook; Vencovsky, Jiri; Ytterberg, Steven R; Miller, Frederick W; Rider, Lisa G
2017-12-01
To develop and validate new classification criteria for adult and juvenile idiopathic inflammatory myopathies (IIM) and their major subgroups. Candidate variables were assembled from published criteria and expert opinion using consensus methodology. Data were collected from 47 rheumatology, dermatology, neurology and paediatric clinics worldwide. Several statistical methods were used to derive the classification criteria. Based on data from 976 IIM patients (74% adults; 26% children) and 624 non-IIM patients with mimicking conditions (82% adults; 18% children), new criteria were derived. Each item is assigned a weighted score, and the total score corresponds to a probability of having IIM. Subclassification is performed using a classification tree. A probability cut-off of 55%, corresponding to a score of 5.5 (6.7 with muscle biopsy), 'probable IIM', had the best sensitivity/specificity (87%/82% without biopsies, 93%/88% with biopsies) and is recommended as the minimum to classify a patient as having IIM. A probability of ≥90%, corresponding to a score of ≥7.5 (≥8.7 with muscle biopsy), corresponds to 'definite IIM'. A probability of <50%, corresponding to a score of <5.3 (<6.5 with muscle biopsy), rules out IIM, leaving a probability of ≥50% to <55% as 'possible IIM'. The European League Against Rheumatism/American College of Rheumatology (EULAR/ACR) classification criteria for IIM have been endorsed by international rheumatology, dermatology, neurology and paediatric groups. They employ easily accessible and operationally defined elements, and have been partially validated. They allow classification of 'definite', 'probable' and 'possible' IIM, in addition to the major subgroups of IIM, including juvenile IIM. They generally perform better than existing criteria. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Ahn, J; Yun, I S; Yoo, H G; Choi, J-J; Lee, M
2017-01-01
Purpose: To evaluate a progression-detecting algorithm for a new automated matched alternation flicker (AMAF) in glaucoma patients. Methods: Open-angle glaucoma patients with a baseline mean deviation of the visual field (VF) test > −6 dB were included in this longitudinal and retrospective study. Functional progression was detected by two VF progression criteria and structural progression by both AMAF and conventional comparison methods using optic disc and retinal nerve fiber layer (RNFL) photography. Progression-detecting performances of AMAF and the conventional method were evaluated by an agreement between functional and structural progression criteria. RNFL thickness changes measured by optical coherence tomography (OCT) were compared between progressing and stable eyes determined by each method. Results: Among 103 eyes, 47 (45.6%), 21 (20.4%), and 32 (31.1%) eyes were evaluated as glaucoma progression using AMAF, the conventional method, and guided progression analysis (GPA) of the VF test, respectively. The AMAF showed better agreement than the conventional method, using GPA of the VF test (κ=0.337; P<0.001 and κ=0.124; P=0.191, respectively). The rates of RNFL thickness decay using OCT were significantly different between the progressing and stable eyes when progression was determined by AMAF (−3.49±2.86 μm per year vs −1.83±3.22 μm per year; P=0.007) but not by the conventional method (−3.24±2.42 μm per year vs −2.42±3.33 μm per year; P=0.290). Conclusions: The AMAF was better than the conventional comparison method in discriminating structural changes during glaucoma progression, and showed a moderate agreement with functional progression criteria. PMID:27662466
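The agreement statistic reported above is Cohen's kappa; a small sketch of the standard unweighted calculation is given below. The 2x2 table in the example is invented for illustration and is not the study data.

```python
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    """Unweighted Cohen's kappa for a square agreement table
    (rows: calls by method A, columns: calls by method B)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical table: progression yes/no by structural vs functional criteria.
example = np.array([[20, 12],
                    [27, 44]])
print(round(cohens_kappa(example), 3))
```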
Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P
2014-09-01
Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including for imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit preferences of stakeholders, identify priorities, and model health conditions and health intervention effects. Challenges to BRA in medical devices are outlined, highlighting additional barriers in radiology. Three quantitative methods--multi-criteria decision analysis, health outcomes modeling and stated-choice survey--are assessed using criteria that are important in balancing benefits and risks of medical devices and imaging technologies. To be useful in regulatory BRA, quantitative methods need to: aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative method performs differently across these criteria and little is known about how BRA estimates and conclusions vary by approach. While no specific quantitative method is likely to be the strongest in all of the important areas, quantitative methods may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely applied in medicines, with fewer BRAs in devices. Despite substantial differences in characteristics of pharmaceuticals and devices, BRA methods may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA methods for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Abdul-Razak, Suraya; Rahmat, Radzi; Mohd Kasim, Alicezah; Rahman, Thuhairah Abdul; Muid, Suhaila; Nasir, Nadzimah Mohd; Ibrahim, Zubin; Kasim, Sazzli; Ismail, Zaliha; Abdul Ghani, Rohana; Sanusi, Abdul Rais; Rosman, Azhari; Nawawi, Hapizah
2017-10-16
Familial hypercholesterolaemia (FH) is a genetic disorder with a high risk of developing premature coronary artery disease that should be diagnosed as early as possible. Several clinical diagnostic criteria for FH are available, with the Dutch Lipid Clinic Criteria (DLCC) being widely used. Information regarding the diagnostic performance of the other criteria against the DLCC is scarce. We aimed to examine the diagnostic performance of the Simon Broome (SB) Register criteria, the US Make Early Diagnosis to Prevent Early Deaths (US MEDPED) criteria and the Japanese FH Management Criteria (JFHMC) compared to the DLCC. Seven hundred and fifty-five individuals from specialist clinics and community health screenings with an LDL-c level ≥ 4.0 mmol/L were selected and diagnosed as FH using the DLCC, the SB Register criteria, the US MEDPED and the JFHMC. The sensitivity, specificity, efficiency, and positive and negative predictive values of individuals screened with the SB Register criteria, US MEDPED and JFHMC were assessed against the DLCC. We found the SB Register criteria identified more individuals with FH compared to the US MEDPED and the JFHMC (212 vs. 105 vs. 195; p < 0.001) when assessed against the DLCC. The SB Register criteria, the US MEDPED and the JFHMC had low sensitivity (51.1% vs. 25.3% vs. 47.0%, respectively). The SB Register criteria showed better diagnostic performance than the other criteria, with 98.8% specificity, a 28.6% efficiency value, and 98.1% and 62.3% positive and negative predictive values, respectively. The SB Register criteria appear to be more useful in identifying positive cases leading to genetic testing compared to the JFHMC and US MEDPED in this Asian population. However, further research looking into a suitable diagnostic criterion with a high likelihood of positive genetic findings is required in the Asian population, including in Malaysia.
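A short sketch of how the reported indices are computed from a 2x2 table against the reference standard (here the DLCC). Note that "efficiency" is computed below as overall accuracy, which may differ from the definition used in the paper; the counts are hypothetical, not the study data.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic indices of a candidate criterion against a
    reference standard (the reference supplies the 'true' labels)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "efficiency": (tp + tn) / (tp + fp + fn + tn),  # taken here as overall accuracy
    }

# Hypothetical counts for one criterion versus the reference.
print(diagnostic_performance(tp=170, fp=2, fn=160, tn=135))
```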
Equipment Selection by using Fuzzy TOPSIS Method
NASA Astrophysics Data System (ADS)
Yavuz, Mahmut
2016-10-01
In this study, the Fuzzy TOPSIS method was used for the selection of an open-pit truck, and the optimal solution of the problem was investigated. Data from Turkish Coal Enterprises were used in the application of the method. This paper explains the Fuzzy TOPSIS approach with a group decision-making application in an open-pit coal mine in Turkey. An algorithm for multi-person, multi-criteria decision making with a fuzzy set approach was applied to an equipment selection problem. It was found that Fuzzy TOPSIS with group decision making is a method that may help decision makers in solving different decision-making problems in mining.
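As background for the approach above, the sketch below shows the crisp TOPSIS ranking core on which the fuzzy variant builds (the fuzzy version replaces crisp ratings with fuzzy numbers and aggregates the judgments of several decision makers). The ratings and weights are invented for illustration and are not the Turkish Coal Enterprises data.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Closeness coefficients of alternatives (rows) over criteria (columns).
    benefit[j] is True for benefit criteria (larger is better), False for cost criteria."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation
    v = norm * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                   # higher is better

# Three hypothetical trucks rated on capacity, cost and reliability.
ratings = np.array([[90.0, 1.2, 7.0],
                    [75.0, 0.9, 8.0],
                    [95.0, 1.5, 6.0]])
cc = topsis(ratings, weights=np.array([0.5, 0.3, 0.2]),
            benefit=np.array([True, False, True]))
print(cc.argsort()[::-1])   # ranking, best alternative first
```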
Alexander, C L; Currie, S; Pollock, K; Smith-Palmer, A; Jones, B L
2017-06-01
Giardia duodenalis and Cryptosporidium species are protozoan parasites capable of causing gastrointestinal disease in humans and animals through the ingestion of infective faeces. Whereas Cryptosporidium species can be acquired locally or through foreign travel, there is a misconception that giardiasis is largely travel-associated, which results in differences in laboratory testing algorithms. In order to determine the level of variation in testing criteria and detection methods between diagnostic laboratories for both pathogens across Scotland, an audit was performed. Twenty Scottish diagnostic microbiology laboratories were invited to participate, with questions on sample acceptance criteria, testing methods, testing rates and future plans for pathogen detection. Responses were received from 19 of the 20 laboratories, representing each of the 14 territorial Health Boards. Detection methods varied between laboratories, with the majority performing microscopy, one using a lateral flow immunochromatographic antigen assay, another using a manually washed plate-based enzyme immunoassay (EIA), and one laboratory trialling a plate-based EIA automated with an EIA plate washer. Whereas all laboratories except one screened every stool for Cryptosporidium species, an important finding was the significant variation in the testing algorithm for detecting Giardia, with only four laboratories testing all diagnostic stools. The most common criteria were 'travel history' (11 laboratories) and/or 'when requested' (14 laboratories). Although only a small proportion of stools was examined for Giardia in 15 laboratories (2%-18% of the total number of stools submitted), a higher positivity rate was observed for Giardia than for Cryptosporidium in 10 of these 15 laboratories. These findings highlight that underreporting of Giardia in Scotland is likely, based on current selection and testing algorithms.
Method for analyzing the chemical composition of liquid effluent from a direct contact condenser
Bharathan, Desikan; Parent, Yves; Hassani, A. Vahab
2001-01-01
A computational modeling method for predicting the chemical, physical, and thermodynamic performance of a condenser using calculations based on equations of physics for heat, momentum and mass transfer and equations of equilibrium thermodynamics to determine steady state profiles of parameters throughout the condenser. The method includes providing a set of input values relating to a condenser including liquid loading, vapor loading, and geometric characteristics of the contact medium in the condenser. The geometric and packing characteristics of the contact medium include the dimensions and orientation of a channel in the contact medium. The method further includes simulating performance of the condenser using the set of input values to determine a related set of output values such as outlet liquid temperature, outlet flow rates, pressures, and the concentration(s) of one or more dissolved noncondensable gas species in the outlet liquid. The method may also include iteratively performing the above computation steps using a plurality of sets of input values and then determining whether each of the resulting output values and performance profiles satisfies acceptance criteria.
NASA Astrophysics Data System (ADS)
Panfil, Wawrzyniec; Moczulski, Wojciech
2017-10-01
This paper presents a control system for a group of mobile robots intended for carrying out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots so that the committed inspection tasks are accomplished. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. In the case of missions characterized by a much larger number of tasks than robots, it appears better for the robots (instead of the tasks) to be the subjects of the auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation: simultaneously assessing both the robot's fitness for a task and the task's attractiveness for the robot should improve the overall effectiveness of the multi-robot system. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, and several task-allocation methods. A multi-criteria analysis method is proposed that combines two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for the robot among its other tasks. Furthermore, task-allocation methods applying this multi-criteria analysis are proposed. Both the elaborated system and the proposed task-allocation methods were verified in simulated experiments, the object under test being a group of inspection mobile robots that is a virtual counterpart of a real mobile-robot group.
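A toy sketch of the two-sided scoring idea described above: each robot-task pairing is scored both by the robot's competitive position for the task among all robots and by the task's attractiveness for the robot among its candidate tasks, and a greedy auction-like loop assigns tasks. The scoring and allocation rules below are illustrative only, not the paper's algorithm.

```python
import numpy as np

def two_sided_allocation(fitness: np.ndarray) -> dict:
    """Greedy one-task-per-robot allocation driven by a two-sided score.
    fitness[i, j] is robot i's fitness for task j; the combined score multiplies
    the robot's share of fitness for that task (column-wise) by the task's
    share of that robot's fitness (row-wise). Illustrative only."""
    competitiveness = fitness / fitness.sum(axis=0, keepdims=True)  # per task
    attractiveness = fitness / fitness.sum(axis=1, keepdims=True)   # per robot
    score = competitiveness * attractiveness
    assignment = {}
    while np.isfinite(score).any():
        i, j = np.unravel_index(np.argmax(score), score.shape)
        assignment[f"robot_{i}"] = f"task_{j}"
        score[i, :] = -np.inf   # robot i is busy
        score[:, j] = -np.inf   # task j is taken
    return assignment

fitness = np.array([[0.9, 0.2, 0.5],
                    [0.4, 0.8, 0.3]])
print(two_sided_allocation(fitness))
```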
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Green material selection for sustainability: A hybrid MCDM approach
Zhang, Honghao; Peng, Yong; Tian, Guangdong; Wang, Danqi; Xie, Pengpeng
2017-01-01
Green material selection is a crucial step for the material industry to comprehensively improve material properties and promote sustainable development. However, because of the subjectivity and conflicting evaluation criteria in its process, green material selection, as a multi-criteria decision making (MCDM) problem, has been a widespread concern to the relevant experts. Thus, this study proposes a hybrid MCDM approach that combines the decision-making trial and evaluation laboratory (DEMATEL), analytical network process (ANP), grey relational analysis (GRA) and technique for order performance by similarity to ideal solution (TOPSIS) to select the optimal green material for sustainability based on the product's needs. A nonlinear programming model with constraints was proposed to obtain the integrated closeness index. Subsequently, an empirical application of rubbish bins was used to illustrate the proposed method. In addition, a sensitivity analysis and a comparison with existing methods were employed to validate the accuracy and stability of the obtained final results. We found that this method provides a more accurate and effective decision support tool for alternative evaluation or strategy selection. PMID:28498864
Voting systems for environmental decisions.
Burgman, Mark A; Regan, Helen M; Maguire, Lynn A; Colyvan, Mark; Justus, James; Martin, Tara G; Rothley, Kris
2014-04-01
Voting systems aggregate preferences efficiently and are often used for deciding conservation priorities. Desirable characteristics of voting systems include transitivity, completeness, and Pareto optimality, among others. Voting systems that are common and potentially useful for environmental decision making include simple majority, approval, and preferential voting. Unfortunately, no voting system can guarantee an outcome, while also satisfying a range of very reasonable performance criteria. Furthermore, voting methods may be manipulated by decision makers and strategic voters if they have knowledge of the voting patterns and alliances of others in the voting populations. The difficult properties of voting systems arise in routine decision making when there are multiple criteria and management alternatives. Because each method has flaws, we do not endorse one method. Instead, we urge organizers to be transparent about the properties of proposed voting systems and to offer participants the opportunity to approve the voting system as part of the ground rules for operation of a group. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.
Computational compliance criteria in water hammer modelling
NASA Astrophysics Data System (ADS)
Urbanowicz, Kamil
2017-10-01
Among the many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is the most appreciated. With its help, it is possible to examine the effect of the numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, also indicate that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to the author's own and to commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
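A small sketch of the two recommendations above as they would appear in an MOC code: divide each pipe into at least 10 reaches and choose the time step so that the Courant-Friedrichs-Lewy number a·Δt/Δx equals one. The pipe data below are illustrative.

```python
def moc_grid(length_m: float, wave_speed_ms: float, n_elements: int = 10):
    """Rectangular MOC grid satisfying the CCC-style recommendations:
    at least 10 elements per pipe and CFL = a*dt/dx = 1."""
    if n_elements < 10:
        raise ValueError("use at least 10 elements per pipe")
    dx = length_m / n_elements
    dt = dx / wave_speed_ms          # makes CFL = a*dt/dx exactly 1
    cfl = wave_speed_ms * dt / dx
    return dx, dt, cfl

dx, dt, cfl = moc_grid(length_m=100.0, wave_speed_ms=1200.0)
print(f"dx = {dx:.2f} m, dt = {dt*1e3:.3f} ms, CFL = {cfl:.1f}")
```

In a multi-pipe system a single common time step has to satisfy this condition in every pipe, which is exactly where a compliance check of this kind becomes useful.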
ERIC Educational Resources Information Center
Spain, Seth M.; Miner, Andrew G.; Kroonenberg, Pieter M.; Drasgow, Fritz
2010-01-01
Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of…
Adult Outcome for Children with Autism
ERIC Educational Resources Information Center
Howlin, Patricia; Goode, Susan; Hutton, Jane; Rutter, Michael
2004-01-01
Background: Information on long-term prognosis in autism is limited. Outcome is known to be poor for those with an IQ below 50, but there have been few systematic studies of individuals with an IQ above this. Method: Sixty-eight individuals meeting criteria for autism and with a performance IQ of 50 or above in childhood were followed up as…
ERIC Educational Resources Information Center
Ferm Almqvist, Cecilia; Vinge, John; Väkevä, Lauri; Zandén, Olle
2017-01-01
Recent reforms in England and the USA give evidence that teaching methods and content can change rapidly, given a strong external pressure, for example through economic incentives, inspections, school choice, and public display of schools' and pupils' performances. Educational activities in the Scandinavian countries have increasingly become…
Dynamic Transfers Of Tasks Among Computers
NASA Technical Reports Server (NTRS)
Liu, Howard T.; Silvester, John A.
1989-01-01
Allocation scheme gives jobs to idle computers. Ideal resource-sharing algorithm should have following characteristics: dynamic, decentralized, and heterogeneous. Proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all above criteria. Provides method for balancing workload among hosts, resulting in improvement in response time and throughput performance of total system. Adjusts dynamically to traffic load of each station.
ERIC Educational Resources Information Center
McGair, Charles D.
2012-01-01
Many theories, methods, and practices are utilized to evaluate teachers with the intention of determining teacher effectiveness to better inform decisions about retention, tenure, certification and performance-based pay. In the 21st century there has been a renewed emphasis on teacher evaluation in public schools, largely due to federal "Race…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-30
... products designed to meet new customer needs for access to postage. In addition, changes within the United... opportunities for PES providers to propose new concepts, methods, and processes to enable customers to print pre... support the USPS PES Test and Evaluation Program (the ``Program''). The intent is for the volumes to fully...
Caetano, Ana C; Santa-Cruz, André; Rolanda, Carla
2016-01-01
Background. Rome III criteria add physiological criteria to the symptom-based criteria of chronic constipation (CC) for the diagnosis of defecatory disorders (DD). However, a gold-standard test is still lacking and physiological examination is expensive and time-consuming. Aim. Evaluate the usefulness of two low-cost tests, digital rectal examination (DRE) and balloon expulsion test (BET), as screening or excluding tests for DD. Methods. We performed a systematic search in PUBMED and MEDLINE. We selected studies where constipated patients were evaluated by DRE or BET. Heterogeneity was assessed and random effect models were used to calculate the sensitivity, specificity, and negative predictive value (NPV) of the DRE and the BET. Results. Thirteen studies evaluating BET and four studies evaluating DRE (2329 patients) were selected. High heterogeneity (I² > 80%) among studies was demonstrated. The studies evaluating the BET showed a sensitivity and specificity of 67% and 80%, respectively. Regarding the DRE, a sensitivity of 80% and specificity of 84% were calculated. An NPV of 72% for the BET and an NPV of 64% for the DRE were estimated. The sensitivity and specificity were similar when we restricted the analysis to studies using Rome criteria to define CC. The BET seems to perform better when a cut-off time of 2 minutes is used and when it is compared with a combination of physiological tests. Considering the DRE, strict criteria seem to improve the sensitivity but not the specificity of the test. Conclusion. Neither of the low-cost tests seems suitable for screening or excluding DD.
2016-01-01
Background: Cathodic polarization appears to be an electrochemical method capable of modifying titanium surfaces and coating them with biomolecules, improving surface activity and promoting better biological responses. Objective: The aim of this systematic review is to assess the scientific literature and evaluate the cellular response produced by treating titanium surfaces with the cathodic polarization technique. Data, Sources, and Selection: The literature search was performed in several databases including PubMed, Web of Science, Scopus, Science Direct, Scielo and EBSCO Host, up to June 2016, with no limits applied. Eligibility criteria were used and quality assessment was performed following slightly modified ARRIVE and SYRCLE guidelines for cellular studies and animal research. Results: Thirteen studies met the inclusion criteria and were considered in the review. The quality of reporting was low for the animal-model studies and high for the in vitro studies. The in vitro and in vivo results reported that the use of cathodic polarization promoted hydride surfaces and effective deposition and adhesion of the coated biomolecules. In the experimental groups that used the electrochemical method, cellular viability, proliferation, adhesion, differentiation, and bone growth were better than or comparable to those of the control groups. Conclusions: The use of the cathodic polarization method to modify titanium surfaces appears to be an interesting approach that could produce active layers and consequently enhance the cellular response, in vitro and in in vivo animal-model studies. PMID:27441840
Minority Group Status and Bias in College Admissions Criteria
ERIC Educational Resources Information Center
Silverman, Bernie I.; And Others
1976-01-01
Cleary's and Thorndike's definition of bias in college admissions criteria (ACT scores and high school percentile rank) were examined for black, white, and Jewish students. Use of the admissions criteria tended to overpredict blacks' performance, accurately predict whites' performance, and underpredict that of Jews. In light of Cleary's…
DOT National Transportation Integrated Search
2011-10-01
Criteria and procedures have been developed for assessing crashworthiness and occupant protection performance of alternatively designed trainsets to be used in Tier I (not exceeding 125 mph) passenger service. These criteria and procedures take advan...
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
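A toy illustration of the localization idea, reordering rows and columns so that correlated entries end up in local neighbourhoods; here the reordering simply follows hierarchical-clustering leaf orders, which is a stand-in and not the graph-theoretical procedure of the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def reorder_by_clustering(data: np.ndarray) -> np.ndarray:
    """Reorder rows and columns so that similar rows (and similar columns)
    become neighbours, concentrating correlated entries in local blocks."""
    row_order = leaves_list(linkage(data, method="average"))
    col_order = leaves_list(linkage(data.T, method="average"))
    return data[np.ix_(row_order, col_order)]

rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 20))
expression[:10, :5] += 3.0                       # planted correlated submatrix (a bicluster)
shuffled = expression[rng.permutation(50)][:, rng.permutation(20)]
localized = reorder_by_clustering(shuffled)      # the planted block becomes a local neighbourhood
print(localized.shape)
```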
Faramarzi, Salar; Moradi, Mohammadreza; Abedi, Ahmad
2018-06-01
The present study aimed to develop a thinking maps training package and compare its training effect with that of the thinking maps method on the reading performance of second- and fifth-grade male elementary school students with dyslexia. For this exploratory mixed-methods study, 90 students from these grades in Isfahan who met the inclusion criteria were selected by multistage sampling and randomly assigned to six experimental and control groups. Data were collected with a reading and dyslexia test and the Wechsler Intelligence Scale for Children, fourth edition. The results of the covariance analysis indicated a significant difference between the reading performance of the experimental groups (the thinking maps training package and thinking maps method groups) and the control groups ([Formula: see text]). Moreover, there were significant differences between the thinking maps training package group and the thinking maps method group in some of the subtests ([Formula: see text]). It can be concluded that the thinking maps training package and the thinking maps method exert a positive influence on the reading performance of students with dyslexia; therefore, thinking maps can be used as an effective training and treatment method.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point on the posterior probability, produced by a crash risk evaluation model for a specific traffic condition, that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to determine an optimal threshold effectively, and the few studies that discuss the predictive performance of such models have chosen thresholds with subjective methods. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other theories were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance; and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method can automatically identify thresholds in crash prediction by minimizing the cross-entropy between the original dataset, with its continuous probabilities of a crash occurring, and the dataset binarized by the threshold that separates potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
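One plausible reading of the minimum cross-entropy criterion is sketched below in the style of Li-Lee thresholding: each candidate cut-off splits the predicted crash probabilities into two classes, and the cut-off minimising the cross-entropy between the raw values and their class means is kept. The paper's exact formulation may differ, and the probabilities in the example are simulated.

```python
import numpy as np

def min_cross_entropy_threshold(probs, candidates=None):
    """Minimum cross-entropy thresholding applied to predicted crash probabilities.
    For each candidate cut-off t the data are split into 'below' and 'above'
    classes, and the cut-off minimising the cross-entropy between the raw
    values and their class means is returned."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    if candidates is None:
        candidates = np.linspace(0.01, 0.99, 99)
    best_t, best_eta = None, np.inf
    for t in candidates:
        low, high = probs[probs < t], probs[probs >= t]
        if low.size == 0 or high.size == 0:
            continue
        # constant term sum(x*log x) omitted; it does not affect the argmin
        eta = -(low.sum() * np.log(low.mean()) + high.sum() * np.log(high.mean()))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

rng = np.random.default_rng(1)
predicted = rng.beta(2, 8, size=1000)   # skewed toward low crash risk, as in practice
print(min_cross_entropy_threshold(predicted))
```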
Luize, Ana P; Menezes, Ana Maria B; Perez-Padilla, Rogelio; Muiño, Adriana; López, Maria Victorina; Valdivia, Gonzalo; Lisboa, Carmem; Montes de Oca, Maria; Tálamo, Carlos; Celli, Bartolomé; Nascimento, Oliver A; Gazzotti, Mariana R; Jardim, José R
2014-01-01
Background: Spirometry is the gold standard for diagnosing chronic obstructive pulmonary disease (COPD). Although there are a number of different guideline criteria for deciding who should be selected for spirometric screening, to date it is not known which criteria are the best based on sensitivity and specificity. Aims: Firstly, to evaluate the proportion of subjects in the PLATINO Study that would be recommended for spirometry testing according to Global initiative for Obstructive Lung Disease (GOLD)-modified, American College of Chest Physicians (ACCP), National Lung Health Education Program (NLHEP), GOLD and American Thoracic Society/European Respiratory Society (ATS/ERS) criteria. Secondly, we aimed to compare the sensitivity, specificity, and positive predictive and negative predictive values of these five different criteria. Methods: Data from the PLATINO study included information on respiratory symptoms, smoking and previous spirometry testing. The GOLD-modified spirometry indication criteria are based on three positive answers out of five questions: the presence of cough, phlegm in the morning, dyspnoea, age over 40 years and smoking status. Results: Data from 5,315 subjects were reviewed. Fewer people had an indication for spirometry (41.3%) according to the GOLD-modified criteria, and more people had an indication for spirometry (80.4%) by the GOLD and ATS/ERS criteria. A low percentage had previously had spirometry performed: GOLD-modified (14.5%); ACCP (13.2%); NLHEP (12.6%); and GOLD and ATS/ERS (12.3%). The GOLD-modified criteria showed the lowest sensitivity (54.9) and the highest specificity (61.0) for detecting COPD, whereas GOLD and ATS/ERS criteria showed the highest sensitivity (87.9) and the lowest specificity (20.8). Conclusion: There is a considerable difference in the indication for spirometry according to the five different guideline criteria. The GOLD-modified criteria recruit fewer people while yielding the greatest sum of sensitivity and specificity. PMID:25358021
Menard, J-P; Mazouni, C; Fenollar, F; Raoult, D; Boubli, L; Bretelle, F
2010-12-01
The purpose of this investigation was to determine the diagnostic accuracy of quantitative real-time polymerase chain reaction (PCR) assay in diagnosing bacterial vaginosis versus the standard methods, the Amsel criteria and the Nugent score. The Amsel criteria, the Nugent score, and results from the molecular tool were obtained independently from vaginal samples of 163 pregnant women who reported abnormal vaginal symptoms before 20 weeks gestation. To determine the performance of the molecular tool, we calculated the kappa value, sensitivity, specificity, and positive and negative predictive values. Either or both of the Amsel criteria (≥3 criteria) and the Nugent score (score ≥7) indicated that 25 women (15%) had bacterial vaginosis, and the remaining 138 women did not. DNA levels of Gardnerella vaginalis or Atopobium vaginae exceeded 10(9) copies/mL or 10(8) copies/mL, respectively, in 34 (21%) of the 163 samples. Complete agreement between both reference methods and high concentrations of G. vaginalis and A. vaginae was found in 94.5% of women (154/163 samples, kappa value = 0.81, 95% confidence interval 0.70-0.81). The nine samples with discordant results were categorized as intermediate flora by the Nugent score. The molecular tool predicted bacterial vaginosis with a sensitivity of 100%, a specificity of 93%, a positive predictive value of 73%, and a negative predictive value of 100%. The quantitative real-time PCR assay shows excellent agreement with the results of both reference methods for the diagnosis of bacterial vaginosis.
Standards of nutrition for athletes in Germany.
Diel, F; Khanferyan, R A
2013-01-01
The Deutscher Olympische Sportbund (DOSB) recently founded an advisory board for German elite athlete nutrition, the 'Arbeitsgruppe (AG) Ernährungsberatung an den Olympiastützpunkten'. The 'Performance codex and quality criteria for the food supply in facilities of German elite sports' have been established since 1997. The biochemical equivalent (ATP) of the energy demand is calculated using the doubly labeled water (DLW) method on the basis of the resting metabolic rate (RMR) and basal metabolic rate (BMR) for sport-type-specific exercises and performances. Certain nutraceutical ingredients for dietary supplements can be recommended. However, quality criteria for nutrition, cooking and food supply are defined on the basis of Health Food and the individual physiological and social-psychological status of the athlete. Food supplements and instant food, in particular, have to be avoided for young athletes. The German advisory board for elite athlete nutrition publishes 'colour lists' of highly recommended (green), acceptable (yellow), and less recommended (red) foodstuffs.
Solodinina, E N; Starkov, Iu G; Shumkina, L V
2016-01-01
To define criteria for and estimate the diagnostic significance of endosonography in the differential diagnosis of benign and malignant stenoses of the common bile duct. We present the results of the examination and treatment of 57 patients with benign and malignant stenoses of the common bile duct. The technique of endosonography is described, and major criteria for the differential diagnosis of tumoral and non-tumoral lesions of the extrahepatic bile ducts are formulated. A comparative analysis of endosonography, ultrasound, computed tomography and magnetic resonance cholangiopancreatography was performed. The sensitivity, specificity and accuracy of endosonography in diagnosing the cause of stenosis were 97.7%, 100% and 98.2%, respectively, exceeding the efficacy of the other radiological methods. In the modern surgical clinic, endosonography should be performed as a mandatory examination; it is necessary for the final diagnosis of the cause of common bile duct stenosis, especially in the case of a low-lying stenosis.
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
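A minimal sketch of a separable recursive (IIR) edge operator in the same spirit: a first-order exponential smoother run forward and backward along one axis, giving constant cost per pixel regardless of the effective filter width, combined with a finite difference along the other. This illustrates the recursive, separable implementation style only; it is not the authors' optimal filter.

```python
import numpy as np

def iir_smooth(x: np.ndarray, alpha: float, axis: int) -> np.ndarray:
    """First-order recursive smoothing run forward then backward along `axis`,
    yielding a symmetric exponential impulse response at constant cost per sample."""
    x = np.moveaxis(x.astype(float), axis, 0)
    fwd = np.empty_like(x)
    fwd[0] = x[0]
    for i in range(1, len(x)):
        fwd[i] = alpha * x[i] + (1 - alpha) * fwd[i - 1]
    bwd = np.empty_like(fwd)
    bwd[-1] = fwd[-1]
    for i in range(len(x) - 2, -1, -1):
        bwd[i] = alpha * fwd[i] + (1 - alpha) * bwd[i + 1]
    return np.moveaxis(bwd, 0, axis)

def edge_magnitude(image: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Separable IIR edge detector: smooth along one axis, differentiate along the other."""
    gx = np.gradient(iir_smooth(image, alpha, axis=0), axis=1)  # vertical smoothing, horizontal derivative
    gy = np.gradient(iir_smooth(image, alpha, axis=1), axis=0)  # horizontal smoothing, vertical derivative
    return np.hypot(gx, gy)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                      # vertical step edge
print(edge_magnitude(img).max())
```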
Shevlin, Mark; Hyland, Philip; Roberts, Neil P.; Bisson, Jonathan I.; Brewin, Chris R; Cloitre, Marylene
2018-01-01
Background: Two ‘sibling disorders’ have been proposed for the 11th version of the International Classification of Diseases (ICD-11): Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). To date, no research has attempted to identify the optimal symptom indicators for the ‘Disturbances in Self-Organization’ (DSO) symptom cluster. Objective: The aim of the current study was to assess the psychometric performance of scores of 16 potential DSO symptom indicators from the International Trauma Questionnaire (ITQ). Criteria relating to score variability and their ability to discriminate were employed. Method: Participants (N = 1839) were a nationally representative household sample of non-institutionalized adults currently residing in the US. Item scores from the ITQ were examined in relation to basic criteria associated with interpretability, variability, homogeneity, and association with functional impairment. The performance of the DSO symptoms was also assessed using 1- and 2-parameter item response theory (IRT) models. Results: The distribution of responses for all DSO indicators met the criteria associated with interpretability, variability, homogeneity, and association with functional impairment. The 1-parameter graded response model was considered the best model and indicated that each set of indicators performed very similarly. Conclusions: The ITQ contains 16 DSO symptom indicators and they perform well in measuring their respective symptom cluster. There was no evidence that particular indicators were ‘better’ than others, and it was concluded that the indicators are essentially interchangeable. PMID:29372014
Development and validation of a new Prescription Quality Index
Hassan, Norul Badriah; Ismail, Hasanah Che; Naing, Lin; Conroy, Ronán M; Abdul Rahman, Abdul Rashid
2010-01-01
AIMS: The aims were to develop and validate a new Prescription Quality Index (PQI) for the measurement of prescription quality in chronic diseases. METHODS: The PQI was developed and validated on the basis of three separate surveys and one pilot study. Criteria were developed based on a literature search, discussions and brainstorming sessions. The validity of the criteria was examined using a modified Delphi method. Pre-testing was performed on 30 patients suffering from chronic diseases. The modified version was then subjected to review by pharmacists and clinicians in two separate surveys. The rater-based PQI with 22 criteria was then piloted in 120 patients with chronic illnesses. Results were analysed using SPSS version 12.0.1. RESULTS: Exploratory principal components analysis revealed multiple factors contributing to prescription quality. Cronbach's α for the entire 22 criteria was 0.60. The average intra-rater and inter-rater reliability showed good to moderate stability (intraclass correlation coefficients 0.76 and 0.52, respectively). The PQI was significantly and negatively correlated with age (correlation coefficient −0.34, P < 0.001), the number of drugs in prescriptions (correlation coefficient −0.51, P < 0.001) and the number of chronic diseases/conditions (correlation coefficient −0.35, P < 0.001). CONCLUSIONS: The PQI is a promising new instrument for measuring prescription quality. It has been shown that the PQI is a valid, reliable and responsive tool for measuring the quality of prescriptions in chronic diseases. PMID:20840442
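For reference, the internal-consistency figure quoted above is normally obtained as Cronbach's alpha; a short sketch follows, with a synthetic item matrix standing in for the 22 criteria.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
common = rng.normal(size=(120, 1))
scores = common + rng.normal(scale=1.5, size=(120, 22))   # 22 criteria, weak common factor
print(round(cronbach_alpha(scores), 2))
```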
NASA Technical Reports Server (NTRS)
1975-01-01
Gas turbine engines were assessed for application to heavy-duty transportation. A summary of the assumptions, applications, and methods of analysis is included, along with a discussion of the approach taken, the technical program flow chart, and the weighting criteria used for performance evaluation. The various engines are compared on the bases of weight, performance, emissions and noise, technology status, and growth potential. The results of the engine screening phase and the conceptual design phase are presented.
The Spanish external quality assessment scheme for mercury in urine.
Quintana, M J; Mazarrasa, O
1996-01-01
In 1986 the Instituto Nacional de Seguridad e Higiene en el Trabajo (INSHT), established the "Programa interlaboratorios de control de calidad de mercurio en orina (PICC-HgU)". The operation of this scheme is explained, criteria for evaluation of laboratory performance are defined and some results obtained are reviewed. Since the scheme started, an improvement in the overall performance of laboratories has been observed. The differences in the analytical methods used by laboratories do not seem to have a clear influence on the results.
NASA Astrophysics Data System (ADS)
Kang, Hong; Zhang, Yun; Hou, Haochen; Sun, Xiaoyang; Qin, Chenglu
2018-03-01
The textile industry has a high environmental impact, so implementing cleaner production audits is an effective way to achieve energy conservation and emissions reduction. However, the evaluation method used in current cleaner production audits splits the evaluation of CPOs into two separate parts: environment and economy. In this study, an evaluation index system was constructed from three criteria: environmental benefits, economic benefits and product performance. The weights of five indicators were determined by combining the weights from the entropy method and the factor weight sorting method, and the options were then evaluated comprehensively. The results showed that the best alkali recovery option was the nanofiltration membrane method (S=0.80).
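The entropy-weighting step mentioned above can be sketched as follows; the factor weight sorting method and the combination rule used in the paper are not reproduced, and the decision matrix is invented.

```python
import numpy as np

def entropy_weights(matrix: np.ndarray) -> np.ndarray:
    """Objective criterion weights by the entropy method for an
    (alternatives x indicators) decision matrix of positive values."""
    p = matrix / matrix.sum(axis=0, keepdims=True)        # column-wise shares
    k = 1.0 / np.log(matrix.shape[0])
    entropy = -k * (p * np.log(p)).sum(axis=0)
    divergence = 1.0 - entropy                            # more divergence -> more weight
    return divergence / divergence.sum()

# Hypothetical performance of 4 alkali-recovery options on 5 indicators.
decision = np.array([[0.80, 0.60, 0.70, 0.50, 0.90],
                     [0.65, 0.75, 0.55, 0.60, 0.70],
                     [0.90, 0.50, 0.65, 0.80, 0.60],
                     [0.70, 0.85, 0.60, 0.55, 0.75]])
print(entropy_weights(decision).round(3))
```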
Pythagorean fuzzy analytic hierarchy process to multi-criteria decision making
NASA Astrophysics Data System (ADS)
Mohd, Wan Rosanisah Wan; Abdullah, Lazim
2017-11-01
Numerous approaches have been proposed in the literature to determine criteria weights, which are central to the decision-making process. One prominent approach for determining criteria weights is the analytic hierarchy process (AHP). This method requires decision makers (DMs) to evaluate the decision problem by forming pairwise comparisons between criteria and alternatives. In classical AHP, the linguistic variables of the pairwise comparisons are expressed as crisp values. However, crisp values do not represent real problems well, because linguistic judgment involves uncertainty. For this reason, AHP has been extended by incorporating Pythagorean fuzzy sets; to our knowledge, no way of determining criteria weights using AHP under Pythagorean fuzzy sets has yet been proposed in the literature. To solve the MCDM problem, a Pythagorean fuzzy analytic hierarchy process is therefore proposed to determine the weights of the evaluation criteria. Pairwise comparisons of the evaluation criteria are expressed with linguistic variables represented by Pythagorean fuzzy numbers (PFNs). The proposed method is implemented in an evaluation problem to demonstrate its applicability. This study shows that the proposed method provides a useful way forward and a new direction for solving MCDM problems in a Pythagorean fuzzy context.
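For context, the classic crisp AHP weight-derivation step that the Pythagorean fuzzy extension generalises is sketched below using the geometric-mean approximation and Saaty's consistency ratio; the comparison matrix is invented and the Pythagorean fuzzy aggregation itself is not shown.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Criteria weights from a reciprocal pairwise-comparison matrix using the
    geometric-mean approximation, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)       # row geometric means
    w = w / w.sum()
    lambda_max = (A @ w / w).mean()           # approximate principal eigenvalue
    ci = (lambda_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = 0.0 if ri == 0 else ci / ri
    return w, cr

# Illustrative comparison of three evaluation criteria on Saaty's 1-9 scale.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3., 1.0, 2.0],
                     [1/5., 1/2., 1.0]])
weights, cr = ahp_weights(pairwise)
print(weights.round(3), round(cr, 3))   # CR below 0.1 indicates acceptable consistency
```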
Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie
2016-12-01
Hyperspectral imaging is an emerging technology recently introduced in medical applications, inasmuch as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated into the operating room in order to detect anatomical tissues hardly noticeable to the surgeon's naked eye. Our LCTF-based spectral imaging system operates over the visible, near- and middle-infrared spectral ranges (400-1700 nm). It is dedicated to enhancing critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention, with maximal contrast between the target tissue and its surroundings. A comparative study is carried out between band selection methods and band transformation methods, and combined band selection methods are proposed. All methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance, with rich information, high tissue separability and short computational time. These methods yield a significant discrimination between biological tissues. We developed a hyperspectral imaging system in order to enhance the visualization of certain biological tissues. The proposed methods provided an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which outperforms the naked eye's capacities.
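A toy sketch of what selecting the "best three bands" can look like when a single separability criterion is used, here a per-band Fisher ratio between target-tissue pixels and background; the paper's combined selection methods and evaluation criteria are not reproduced, and the data cube is synthetic.

```python
import numpy as np

def top_bands_by_fisher_ratio(cube: np.ndarray, target_mask: np.ndarray, n_bands: int = 3):
    """Rank spectral bands of an (H x W x B) cube by the Fisher ratio between
    target-tissue pixels and the remaining pixels, and return the best n_bands."""
    pixels = cube.reshape(-1, cube.shape[-1])
    target = pixels[target_mask.ravel()]
    background = pixels[~target_mask.ravel()]
    num = (target.mean(axis=0) - background.mean(axis=0)) ** 2
    den = target.var(axis=0) + background.var(axis=0) + 1e-12
    return np.argsort(num / den)[::-1][:n_bands]

rng = np.random.default_rng(3)
cube = rng.normal(size=(32, 32, 60))        # synthetic hyperspectral cube, 60 bands
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True                   # pretend these pixels are the target tissue
cube[mask, 25] += 2.0                       # band 25 carries the contrast
print(top_bands_by_fisher_ratio(cube, mask))
```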
Measurement of the resistivity of porous materials with an alternating air-flow method.
Dragonetti, Raffaele; Ianniello, Carmine; Romano, Rosario A
2011-02-01
Air-flow resistivity is a main parameter governing the acoustic behavior of porous materials for sound absorption. The international standard ISO 9053 specifies two different methods to measure the air-flow resistivity, namely a steady-state air-flow method and an alternating air-flow method. The latter is realized by the measurement of the sound pressure at 2 Hz in a small rigid volume closed partially by the test sample. This cavity is excited with a known volume-velocity sound source implemented often with a motor-driven piston oscillating with prescribed area and displacement magnitude. Measurements at 2 Hz require special instrumentation and care. The authors suggest an alternating air-flow method based on the ratio of sound pressures measured at frequencies higher than 2 Hz inside two cavities coupled through a conventional loudspeaker. The basic method showed that the imaginary part of the sound pressure ratio is useful for the evaluation of the air-flow resistance. Criteria are discussed about the choice of a frequency range suitable to perform simplified calculations with respect to the basic method. These criteria depend on the sample thickness, its nonacoustic parameters, and the measurement apparatus as well. The proposed measurement method was tested successfully with various types of acoustic materials.
Guillemin, F; Saraux, A; Fardellone, P; Guggenbuhl, P; Behier, J; Coste, J
2003-01-01
Objective: To assess the performance in the detection of cases of rheumatoid arthritis (RA) and the spondyloarthropathies (SpA) of a questionnaire suitable for use in telephone surveys conducted by patient interviewers. Methods: A questionnaire was designed with reference to the signs, symptoms, and epidemiological criteria for RA (ACR 1987) and SpA (ESSG 1991). Three groups of respondents were recruited from the rheumatology outpatient clinics of 10 university hospitals: 235 with RA, 175 with SpA, and 195 controls with other rheumatological disorders. All diagnoses were confirmed by a rheumatologist. Patients from self-help groups and social organisations were trained by a polling company professional to conduct a standard telephone interview using the new questionnaire. Results: In an RA-control comparison, logistic regression showed that a set of five items, predominantly ACR criteria, were the most informative. Self reported diagnosis performed best (sensitivity 0.99, specificity 0.87). In an SpA-control comparison, a set of three items from the ESSG criteria were the most informative, with self reported diagnosis again performing best (sensitivity 0.85, specificity 0.96). Overall agreements with clinical diagnoses were 97.7% for RA and 94.4% for SpA, dropping to 90.4% and 79.1%, respectively, when self reported diagnosis was excluded. Without self reported diagnosis, questions about peripheral joint and spinal pain made significant contributions to diagnostic performance. Conclusion: A questionnaire in plain language was developed for use in detecting cases of RA and SpA. It performed satisfactorily when administered by patient interviewers and is now available for epidemiological surveys of the general population. PMID:12972474
Multi objective decision making in hybrid energy system design
NASA Astrophysics Data System (ADS)
Merino, Gabriel Guillermo
The design of a grid-connected photovoltaic-wind generator system supplying a farmstead in Nebraska is undertaken in this dissertation. The design process took into account competing criteria that motivate the use of different sources of energy for electric generation. The criteria considered were 'Financial', 'Environmental', and 'User/System compatibility'. A distance-based multi-objective decision-making methodology was developed to rank design alternatives. The method is based upon a precedence order imposed upon the design objectives and a distance metric describing the performance of each alternative. This methodology advances previous work by combining ambiguous information about the alternatives with a decision-maker-imposed precedence order on the objectives. Design alternatives, defined by the installed capacities of the photovoltaic array and wind generator, were analyzed using the multi-objective decision-making approach. The performance of the design alternatives was determined by simulating the system using hourly data for the electric load of a farmstead and hourly averages of solar irradiation, temperature and wind speed from eight wind-solar energy monitoring sites in Nebraska. The spatial variability of the solar energy resource within the region was assessed by fitting semivariogram models to krige hourly and daily solar radiation data. No significant difference in the predicted performance of the system was found between using solar radiation data kriged with these models and using actual data. The spatial variability of the combined wind and solar energy resources was included in the design analysis by using fuzzy numbers and fuzzy arithmetic. The best alternative depended upon the precedence order assumed for the main criteria. Alternatives with no PV array or wind generator dominated when the 'Financial' criteria preceded the others. In contrast, alternatives with no PV array but a large wind generator component dominated when the 'Environmental' or 'User/System compatibility' objectives were more important than the 'Financial' objectives, and they also dominated when the three criteria were considered equally important.
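A toy version of the distance-based ranking with a precedence order is sketched below: alternatives are compared lexicographically, group by group in order of importance, on their distance to each group's ideal point. The metric and the numbers are illustrative and differ from the dissertation's methodology; the example simply shows how the ranking can flip when the precedence order changes.

```python
import numpy as np

def rank_with_precedence(performance: np.ndarray, groups: list) -> np.ndarray:
    """performance[i, j]: normalised score (higher is better) of alternative i on
    criterion j. `groups` lists the criterion indices of each criteria group in
    precedence order (most important first). Alternatives are ranked by the
    distance of each group's scores to that group's ideal point, compared
    lexicographically."""
    ideal = performance.max(axis=0)
    keys = []
    for idx in groups:
        diff = performance[:, idx] - ideal[idx]
        keys.append(np.linalg.norm(diff, axis=1))          # smaller distance is better
    return np.lexsort(tuple(reversed(keys)))                # first group is the primary key

# Columns: [annualised cost score, CO2 offset, load matching]; rows: design alternatives.
perf = np.array([[0.9, 0.2, 0.5],
                 [0.6, 0.8, 0.7],
                 [0.4, 0.9, 0.9]])
print(rank_with_precedence(perf, groups=[[0], [1], [2]]))   # 'Financial' criterion first
print(rank_with_precedence(perf, groups=[[1], [2], [0]]))   # 'Environmental' criterion first
```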
Pyrotechnic shock: A literature survey of the Linear Shaped Charge (LSC)
NASA Technical Reports Server (NTRS)
Smith, J. L.
1984-01-01
Linear shaped charge (LSC) literature for the past 20 years is reviewed. The following topics are discussed: (1) LSC configuration; (2) LSC usage; (3) LSC-induced pyroshock; (4) simulated pyrotechnic testing; (5) actual pyrotechnic testing; (6) data collection methods; (7) data analysis techniques; (8) shock reduction methods; and (9) design criteria. Although no new discoveries have been made in LSC research, charge shapes have been improved to allow better cutting performance, testing instrumentation has been refined, and some new explosives for use in LSCs have been formulated.
Haptic exploratory behavior during object discrimination: a novel automatic annotation method.
Jansen, Sander E M; Bergmann Tiest, Wouter M; Kappers, Astrid M L
2015-01-01
In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is less between human-automatic pairs than between human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.
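A toy version of the decision rule described above: compute kinematic variables for a time window, compare each to predefined criteria for every EP, and label an EP whenever all of its criteria hold. The variables and thresholds below are invented for illustration; the paper defines its own criteria from the motion-tracking data.

```python
def annotate_window(kinematics: dict) -> list:
    """Return the haptic Exploratory Procedures whose criteria are all met for
    one time window of kinematic variables (thresholds are hypothetical)."""
    criteria = {
        "Lateral Motion":      lambda k: k["tangential_speed"] > 20 and k["normal_force"] < 2,
        "Pressure":            lambda k: k["normal_force"] > 5 and k["tangential_speed"] < 5,
        "Enclosure":           lambda k: k["contact_area"] > 0.6 and k["hand_speed"] < 10,
        "Unsupported Holding": lambda k: k["lift_height"] > 50 and k["contact_area"] > 0.3,
    }
    return [ep for ep, rule in criteria.items() if rule(kinematics)]

window = {"tangential_speed": 32.0, "normal_force": 1.2,
          "contact_area": 0.2, "hand_speed": 25.0, "lift_height": 4.0}
print(annotate_window(window))   # ['Lateral Motion']
```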
ERIC Educational Resources Information Center
Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merrienboer, Jeroen J. G.
2010-01-01
This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based…
Bex, Axel; Fournier, Laure; Lassau, Nathalie; Mulders, Peter; Nathan, Paul; Oyen, Wim J G; Powles, Thomas
2014-04-01
The introduction of targeted agents for the treatment of renal cell carcinoma (RCC) has resulted in new challenges for assessing response to therapy, and conventional response criteria using computed tomography (CT) are limited. It is widely recognised that targeted therapies may lead to significant necrosis without significant reduction in tumour size. In addition, the vascular effects of antiangiogenic therapy may occur long before there is any reduction in tumour size. To perform a systematic review of conventional and novel imaging methods for the assessment of response to targeted agents in RCC and to discuss their use from a clinical perspective. Relevant databases covering the period January 2006 to April 2013 were searched for studies reporting on the use of anatomic and functional imaging techniques to predict response to targeted therapy in RCC. Inclusion criteria were randomised trials, nonrandomised controlled studies, retrospective case series, and cohort studies. Reviews, animal and preclinical studies, case reports, and commentaries were excluded. A narrative synthesis of the evidence is presented. A total of 331 abstracts and 76 full-text articles were assessed; 34 studies met the inclusion criteria. Current methods of response assessment in RCC include anatomic methods--based on various criteria including Choi, size and attenuation CT, and morphology, attenuation, size, and structure--and functional techniques including dynamic contrast-enhanced (DCE) CT, DCE-magnetic resonance imaging, DCE-ultrasonography, positron emission tomography, and approaches utilising radiolabelled monoclonal antibodies. Functional imaging techniques are promising surrogate biomarkers of response in RCC and may be more appropriate than anatomic CT-based methods. By enabling quantification of tumour vascularisation, functional techniques can directly and rapidly detect the biologic effects of antiangiogenic therapies compared with the indirect detection of belated effects on tumour size by anatomic methods. However, larger prospective studies are needed to validate early results and standardise techniques. Copyright © 2013 European Association of Urology. All rights reserved.
Siting process for disposal site of low-level radioactive waste in Thailand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamkate, P.; Sriyotha, P.; Thiengtrongjit, S.
The radioactive waste in Thailand is composed of low-level waste from the application of radioisotopes in medical treatment and industry, the operation of the 2 MW TRIGA Mark III Research Reactor, and the production of radioisotopes at OAEP. In addition, high-activity sealed radiation sources, i.e. Cs-137, Co-60 and Ra-226, are also accumulated. Since the volume of treated waste has gradually increased, the general need for a repository has become apparent. The near-surface disposal method has been chosen for this purpose. A feasibility study on the underground disposal site has been under way since 1982. Site selection criteria have been established, consisting of rejection criteria, technical performance criteria and economic criteria. About 50 locations were considered, and 5 candidate sites were selected and subsequently investigated. After thorough investigation, a definite location in Ratchburi Province, about 180 kilometers southwest of Bangkok, was selected as the most suitable place for the near-surface disposal of radioactive waste in Thailand.
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article, we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Near-Earth object hazardous impact: A Multi-Criteria Decision Making approach.
Sánchez-Lozano, J M; Fernández-Martínez, M
2016-11-16
The impact of a near-Earth object (NEO) may release large amounts of energy and cause serious damage. Several NEO hazard studies conducted over the past few years provide forecasts, impact probabilities and assessment ratings, such as the Torino and Palermo scales. These high-risk NEO assessments involve several criteria, including impact energy, mass, and absolute magnitude. The main objective of this paper is to provide the first Multi-Criteria Decision Making (MCDM) approach to classify hazardous NEOs. Our approach applies a combination of two methods from a widely utilized decision making theory. Specifically, the Analytic Hierarchy Process (AHP) methodology is employed to determine the criteria weights, which influence the decision making, and the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) is used to obtain a ranking of alternatives (potentially hazardous NEOs). In addition, NEO datasets provided by the NASA Near-Earth Object Program are utilized. This approach allows the classification of NEOs by descending order of their TOPSIS ratio, a single quantity that contains all of the relevant information for each object.
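As a minimal sketch of the AHP-TOPSIS pipeline described above, the snippet below implements a plain TOPSIS ranking with criteria weights assumed to come from AHP; the alternative scores and weights are hypothetical placeholders, not the NASA NEO data.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (n_alternatives, n_criteria) array of raw scores.
    weights: criteria weights summing to 1 (e.g., obtained from AHP).
    benefit: boolean array, True where larger values are better.
    Returns the closeness ratio of each alternative (higher = better).
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    V = (X / np.linalg.norm(X, axis=0)) * weights
    # Ideal and anti-ideal points depend on whether a criterion is a benefit or a cost.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical NEOs scored on impact energy, mass and absolute magnitude
# (absolute magnitude is treated as a cost criterion: brighter objects are larger).
scores = np.array([[120.0, 5e9,  21.0],
                   [ 15.0, 8e8,  24.5],
                   [300.0, 2e10, 19.3]])
weights = np.array([0.5, 0.3, 0.2])          # e.g., AHP-derived weights
benefit = np.array([True, True, False])
ratio = topsis(scores, weights, benefit)
print(np.argsort(ratio)[::-1])               # alternatives ranked by hazard
```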
Müllersdorf, M
2000-12-01
The aim of the study was to elucidate selection criteria for need of rehabilitation/occupational therapy, and to state criteria for participation in occupational therapy, among persons with long-term and/or recurrent pain causing activity limitations or restricting participation in daily life. The study involved 914 persons aged 18-58 years who answered a postal questionnaire concerning demography, pain, occupations in daily life, work, treatments and health care staff visited. The direct method in logistic regression analysis was used to test two models: (1) need of rehabilitation/occupational therapy and (2) participation in occupational therapy. The results for the first model revealed the selection criteria (1) 'feelings of irresolution', (2) 'gnawing/searing pain' and (3) 'use of technical aids'. The odds for need of rehabilitation/occupational therapy were higher for women than for men. The criteria derived from the second model, participation in occupational therapy, were whether (1) the participants had 'used tricks and/or compensated ways to perform tasks', (2) the participants had 'pain in shoulders' and (3) 'changes had been made at work due to health conditions'.
2010-01-01
Background The modular approach to analysis of genetically modified organisms (GMOs) relies on the independence of the modules combined (i.e. DNA extraction and GM quantification). The validity of this assumption has to be proved on the basis of specific performance criteria. Results An experiment was conducted using, as a reference, the validated quantitative real-time polymerase chain reaction (PCR) module for detection of glyphosate-tolerant Roundup Ready® GM soybean (RRS). Different DNA extraction modules (CTAB, Wizard and Dellaporta) were used to extract DNA from different food/feed matrices (feed, biscuit and certified reference material [CRM 1%]) containing the target of the real-time PCR module used for validation. Purity and structural integrity (absence of inhibition) were used as basic criteria that a DNA extraction module must satisfy in order to provide suitable template DNA for quantitative real-time (RT) PCR-based GMO analysis. When performance criteria were applied (removal of non-compliant DNA extracts), the independence of GMO quantification from the extraction method and matrix was statistically proved, except in the case of Wizard applied to biscuit. A fuzzy logic-based procedure also confirmed the relatively poor performance of the Wizard/biscuit combination. Conclusions For RRS, this study recognises that modularity can be generally accepted, with the limitation of avoiding combining highly processed material (i.e. biscuit) with a magnetic-beads system (i.e. Wizard). PMID:20687918
Improving the Performance of the Prony Method Using a Wavelet Domain Filter for MRI Denoising
Lentini, Marianela; Paluszny, Marco
2014-01-01
The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method. PMID:24834108
Improving the performance of the prony method using a wavelet domain filter for MRI denoising.
Jaramillo, Rodney; Lentini, Marianela; Paluszny, Marco
2014-01-01
The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method.
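The exponential-fitting step that the two records above rely on can be illustrated with the classical Prony method: a linear-prediction solve, a polynomial root extraction, and a least-squares amplitude fit. This is a generic textbook sketch applied to a synthetic bi-exponential decay, not the paper's T2-mapping pipeline.

```python
import numpy as np

def prony(y, p):
    """Classical Prony fit of p complex exponentials to uniformly sampled data y.
    Returns poles z and amplitudes c so that y[n] ~= sum_k c[k] * z[k]**n.
    """
    y = np.asarray(y, dtype=complex)
    N = len(y)
    # Linear-prediction step: y[n] = -a[1]*y[n-1] - ... - a[p]*y[n-p]
    A = np.column_stack([y[p - 1 - k:N - 1 - k] for k in range(p)])
    b = -y[p:N]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    # Roots of the characteristic polynomial are the exponential poles.
    z = np.roots(np.concatenate(([1.0], a)))
    # Amplitudes from a Vandermonde least-squares fit.
    V = np.vander(z, N, increasing=True).T
    c = np.linalg.lstsq(V, y, rcond=None)[0]
    return z, c

# Two-exponential toy signal, e.g. a bi-exponential decay sampled at n = 0..49.
n = np.arange(50)
y = 3.0 * np.exp(-0.05 * n) + 1.5 * np.exp(-0.20 * n)
z, c = prony(y, 2)
print(np.log(z.real))   # recovered decay rates, approximately -0.05 and -0.20
```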
Ha, Sang Ook; Park, Sang Hyuk; Hong, Sang Bum; Jang, Seongsoo
2016-11-01
Disseminated intravascular coagulation (DIC) is a major complication in sepsis patients. We compared the performance of five DIC diagnostic criteria, focusing on the prediction of mortality. One hundred patients with severe sepsis or septic shock admitted to intensive care unit (ICU) were enrolled. Routine DIC laboratory tests were performed over the first 4 days after admission. The overall ICU and 28-day mortality in DIC patients diagnosed from five criteria (International Society on Thrombosis and Haemostasis [ISTH], the Japanese Association for Acute Medicine [JAAM], the revised JAAM [R-JAAM], the Japanese Ministry of Health and Welfare [JMHW] and the Korean Society on Thrombosis and Hemostasis [KSTH]) were compared. Both KSTH and JMHW criteria showed superior performance than ISTH, JAAM and R-JAAM criteria in the prediction of overall ICU mortality in DIC patients (odds ratio 3.828 and 5.181, P = 0.018 and 0.006, 95% confidence interval 1.256-11.667 and 1.622-16.554, respectively) when applied at day 1 after admission, and survival analysis demonstrated significant prognostic impact of KSTH and JMHW criteria on the prediction of 28-day mortality (P = 0.007 and 0.049, respectively) when applied at day 1 after admission. In conclusion, both KSTH and JMHW criteria would be more useful than other three criteria in predicting prognosis in DIC patients with severe sepsis or septic shock.
Diagnosis of IBS: symptoms, symptom-based criteria, biomarkers or 'psychomarkers'?
Sood, Ruchit; Law, Graham R; Ford, Alexander C
2014-11-01
IBS is estimated to have a prevalence of up to 20% in Western populations and results in substantial costs to health-care services worldwide, estimated to be US$1 billion per year in the USA. IBS remains difficult to diagnose due to its multifactorial aetiology, heterogeneous nature and overlap of symptoms with organic pathologies, such as coeliac disease and IBD. As a result, IBS often continues to be a diagnosis of exclusion, resulting in unnecessary investigations. Available methods for the diagnosis of IBS-including the current gold standard, the Rome III criteria-perform only moderately well. Visceral hypersensitivity and altered pain perception do not discriminate between IBS and other functional gastrointestinal diseases or health with any great accuracy. Attention has now turned to developing novel biomarkers and using psychological markers (so-called psychomarkers) to aid the diagnosis of IBS. This Review describes how useful symptoms, symptom-based criteria, biomarkers and psychomarkers, and indeed combinations of all these approaches, are in the diagnosis of IBS. Future directions in diagnosing IBS could include combining demographic data, gastrointestinal symptoms, biomarkers and psychomarkers using statistical methods. Latent class analysis to distinguish between IBS and non-IBS symptom profiles might also represent a promising avenue for future research.
Optimal control theory for non-scalar-valued performance criteria. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gerring, H. P.
1971-01-01
The theory of optimal control for nonscalar-valued performance criteria is discussed. In the space where the performance criterion attains its value, the relations better than, worse than, not better than, and not worse than are defined by a partial order relation. The notion of optimality splits up into superiority and non-inferiority, because worse than is not the complement of better than, in general. A superior solution is better than every other solution. A noninferior solution is not worse than any other solution. Noninferior solutions have been investigated particularly for vector-valued performance criteria. Superior solutions for non-scalar-valued performance criteria attaining their values in abstract partially ordered spaces are emphasized. The main result is the infimum principle, which constitutes necessary conditions for a control to be a superior solution to an optimal control problem.
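For the componentwise partial order on a vector-valued criterion, the noninferior solutions described above can be identified by simple dominance checks. The sketch below assumes a finite set of candidate controls, each summarized by a cost vector to be minimized componentwise; it is an illustration of the partial-order concept, not of the infimum principle itself.

```python
import numpy as np

def noninferior(costs):
    """Return a boolean mask of noninferior (Pareto-optimal) points.

    costs: (n_points, n_objectives) array; smaller is better in every component.
    A point is inferior if some other point is <= in every component and
    strictly < in at least one (the componentwise partial order).
    """
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Points that are "better than" point i under the partial order.
        dominates_i = (np.all(costs <= costs[i], axis=1)
                       & np.any(costs < costs[i], axis=1))
        if dominates_i.any():
            keep[i] = False
    return keep

pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(noninferior(pts))   # [ True  True False  True ]
```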
The performance of a sampled data delay lock loop implemented with a Kalman loop filter
NASA Astrophysics Data System (ADS)
Eilts, H. S.
1980-01-01
The purpose of this study is to evaluate the steady-state and transient (lock-up) performance of a tracking loop implemented with a Kalman filter. Steady-state performance criteria are errors due to measurement noise (jitter) and Doppler errors due to motion of the tracking loop. Trade-offs exist between the two criteria such that increasing performance with respect to either one will cause a performance decrease with respect to the other. It is shown that, by carefully selecting filter parameters, reasonable performance can be obtained for both criteria simultaneously. It is also shown that lock-up performance for the loop is acceptable when these parameters are used.
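A minimal numerical sketch of the jitter-versus-lag trade-off discussed above: a sampled-data loop tracking a ramping (Doppler-like) delay with a two-state Kalman filter. The model, noise levels and parameters are illustrative assumptions, not those of the original study; raising the process-noise level q reduces the steady-state lag at the cost of more jitter, and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity (delay + delay-rate) model
H = np.array([[1.0, 0.0]])                 # the loop only measures delay error
q, r = 1e-4, 1e-2                          # process / measurement noise (tuning knobs)
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

x = np.zeros(2)                            # filter state: [delay, delay rate]
P = np.eye(2)
true_delay, doppler = 5.0, 0.02            # truth: delay ramping due to motion
errs = []
for _ in range(200):
    true_delay += doppler * dt
    z = true_delay + rng.normal(0.0, np.sqrt(r))   # noisy discriminator output
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    errs.append(true_delay - x[0])

print(np.mean(errs[50:]), np.std(errs[50:]))  # steady-state lag (bias) and jitter
```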
5 CFR 430.404 - Certification criteria.
Code of Federal Regulations, 2011 CFR
2011-01-01
... responsibility; reflect expected agency and/or organizational outcomes and outputs, performance targets or... 430.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Performance Appraisal Certification for Pay Purposes § 430.404 Certification criteria. (a) To be...
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-12-01
This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR), for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in terms of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately by sex, the LCnone variant performed better for the male population and the LM method for the female population.
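The four comparison criteria named above are straightforward to compute from observed and fitted (or forecast) rates; a small helper with made-up death rates might look like this:

```python
import numpy as np

def accuracy_metrics(actual, fitted):
    """Goodness-of-fit / forecast accuracy criteria used to compare the variants."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    err = actual - fitted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mad = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / actual))
    return {"MSE": mse, "RMSE": rmse, "MAD": mad, "MAPE": mape}

# Toy example with hypothetical age-specific death rates (per 1000).
observed = np.array([0.45, 0.32, 0.51, 0.78, 1.20])
forecast = np.array([0.43, 0.35, 0.49, 0.80, 1.10])
print(accuracy_metrics(observed, forecast))
```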
Selection criteria of residents for residency programs in Kuwait
2013-01-01
Background In Kuwait, 21 residency training programs were offered in the year 2011; however, no data is available regarding the criteria of selecting residents for these programs. This study aims to provide information about the importance of these criteria. Methods A self-administered questionnaire was used to collect data from members (e.g. chairmen, directors, assistants …etc.) of residency programs in Kuwait. A total of 108 members were invited to participate. They were asked to rate the importance level (scale from 1 to 5) of criteria that may affect the acceptance of an applicant to their residency programs. Average scores were calculated for each criterion. Results Of the 108 members invited to participate, only 12 (11.1%) declined to participate. Interview performance was ranked as the most important criteria for selecting residents (average score: 4.63/5.00), followed by grade point average (average score: 3.78/5.00) and honors during medical school (average score: 3.67/5.00). On the other hand, receiving disciplinary action during medical school and failure in a required clerkship were considered as the most concerning among other criteria used to reject applicants (average scores: 3.83/5.00 and 3.54/5.00 respectively). Minor differences regarding the importance level of each criterion were noted across different programs. Conclusions This study provided general information about the criteria that are used to accept/reject applicants to residency programs in Kuwait. Future studies should be conducted to investigate each criterion individually, and to assess if these criteria are related to residents' success during their training. PMID:23331670
NASA Technical Reports Server (NTRS)
McCloskey, John
2016-01-01
This paper describes the electromagnetic compatibility (EMC) tests performed on the Integrated Science Instrument Module (ISIM), the science payload of the James Webb Space Telescope (JWST), at NASA's Goddard Space Flight Center (GSFC) in August 2015. Because the ISIM is an integrated payload, the testing could be treated as neither a unit-level test nor an integrated spacecraft/observatory test. Non-standard test criteria are described along with non-standard test methods that had to be developed in order to evaluate them. Results are presented to demonstrate that all test criteria were met in less than the time allocated.
Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu
2018-06-01
Brownfield site pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex owing to the multiple criteria involved, spanning technical, economic, and policy considerations. To help decision-makers select remedial alternatives efficiently, the criteria framework developed by the U.S. EPA is improved and a comprehensive method that integrates multiple-criteria decision analysis (MCDA) with numerical simulation is presented in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria. The MCDA method AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives, and solute transport simulation is conducted to assess remedial efficiency. A case study is presented to demonstrate the screening method at a brownfield site in Cangzhou, northern China. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
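A minimal sketch of the PROMETHEE II outranking step used in the AHP-PROMETHEE combination above, assuming AHP has already produced the criteria weights. The alternatives, weights and linear preference thresholds are hypothetical; cost criteria are simply negated so that larger is always better.

```python
import numpy as np

def promethee_ii(scores, weights, pref_threshold):
    """Minimal PROMETHEE II ranking with a linear preference function.

    scores: (n_alternatives, n_criteria), larger is better on every criterion
            (cost criteria negated beforehand).
    weights: criteria weights summing to 1 (here assumed to come from AHP).
    pref_threshold: per-criterion difference at which preference saturates to 1.
    Returns the net outranking flow phi (higher = preferred).
    """
    X = np.asarray(scores, dtype=float)
    n = X.shape[0]
    pi = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = X[a] - X[b]
            # Linear preference: 0 at or below zero, ramping to 1 at the threshold.
            pref = np.clip(d / pref_threshold, 0.0, 1.0)
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)    # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)   # negative (entering) flow
    return phi_plus - phi_minus

# Hypothetical remedial alternatives scored on cost (negated), efficiency, policy fit.
alts = np.array([[-3.0, 0.8, 0.6],
                 [-5.0, 0.9, 0.8],
                 [-2.0, 0.6, 0.5]])
w = np.array([0.4, 0.4, 0.2])
print(promethee_ii(alts, w, pref_threshold=np.array([2.0, 0.2, 0.3])))
```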
Dharmarajan, Lekshmi; Hale, Theodore M; Velastegui, Zoila; Castillo, Emerita; Kanna, Balavenkatesh
2009-01-01
Evaluate the practice and appropriateness of requesting echocardiograms in patients with suspected or documented cardiac disease during gestation and puerperium, using the American College of Cardiology Foundation (ACCF) appropriateness criteria, in conjunction with clinical picture. Retrospective observational study, to analyze echocardiograms performed during pregnancy and puerperium at a teaching hospital from 2001 to 2006 for appropriateness criteria and studying its impact on management. Sixty-seven patients pregnant or in the puerperal stage had an echocardiogram performed during that period; 58 met our criteria for inclusion. Based on clinical information and criteria of the ACCF, 51 of the 58 echocardiograms met the appropriateness criteria. Of the 51, results of 40 impacted on management; 14 of the 40 echocardiograms that had an impact were abnormal. Although the ACCF appropriateness criteria have not been specifically studied in pregnancy, our study demonstrates that the criteria are applicable if used appropriately in pregnancy. Most indications in our study correlated with the appropriateness criteria. Although most findings were normal, information from echocardiograms impacted on management in the majority of patients, contributing to therapeutic decision-making. The reliability of echocardiograms performed according to appropriate criteria to assist clinical decisions was excellent even in patients with physiologic cardiovascular changes.
Adherence to Standards for Reporting Diagnostic Accuracy in Emergency Medicine Research.
Gallo, Lucas; Hua, Nadia; Mercuri, Mathew; Silveira, Angela; Worster, Andrew
2017-08-01
Diagnostic tests are used frequently in the emergency department (ED) to guide clinical decision making and, hence, influence clinical outcomes. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were developed to ensure that diagnostic test studies are performed and reported to best inform clinical decision making in the ED. The objective was to determine the extent to which diagnostic studies published in emergency medicine journals adhered to STARD 2003 criteria. Diagnostic studies published in eight MEDLINE-listed, peer-reviewed, emergency medicine journals over a 5-year period were reviewed for compliance with STARD criteria. A total of 12,649 articles were screened and 114 studies were included in our study. Twenty percent of these were randomly selected for assessment using STARD 2003 criteria. Adherence to STARD 2003 reporting standards for each criterion ranged from 8.7% (for the criterion on reporting adverse events from performing the index test or reference standard) to 100% (multiple criteria). Just over half of the STARD criteria are reported in more than 80% of studies. As poorly reported studies may negatively impact their clinical usefulness, it is essential that studies of diagnostic test accuracy be performed and reported adequately. Future studies should assess whether studies have improved compliance with the STARD 2015 criteria amendment. © 2017 by the Society for Academic Emergency Medicine.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is proposed. The approach transforms the MIMO-OFDM channel estimation problem into a set of simple single-input single-output OFDM (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is required, which greatly reduces the computational complexity. Simulation results show that the bit error rate (BER) performance of the proposed method, using time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion, is better than that of the time-domain LS estimator and is nearly optimal.
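The per-subcarrier LS idea that the method reduces to can be illustrated in a SISO-OFDM setting: with known pilot symbols, the frequency-domain estimate is just a division per pilot tone, followed by interpolation to the data tones (the LMMSE refinement is omitted here). All signal parameters below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                   # OFDM subcarriers
pilots = np.arange(0, n_sc, 8)              # comb-type pilot positions
h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H_true = np.fft.fft(h_time, n_sc)           # true frequency response

X = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sc))   # QPSK symbols (pilots known)
noise = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) * 0.05
Y = H_true * X + noise                      # received frequency-domain symbols

# Frequency-domain LS estimate at the pilot tones: H = Y / X (no matrix inverse needed).
H_ls_pilot = Y[pilots] / X[pilots]

# Interpolate to the data tones (simple linear interpolation on real/imag parts).
H_hat = np.interp(np.arange(n_sc), pilots, H_ls_pilot.real) \
        + 1j * np.interp(np.arange(n_sc), pilots, H_ls_pilot.imag)

print(np.mean(np.abs(H_hat - H_true) ** 2))   # channel estimation MSE
```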
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
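For the single-constraint case mentioned above, the optimality criteria approach reduces to a closed-form resizing rule once the Lagrange multiplier is fixed by making the constraint active. The sketch below uses a standard textbook setting (a statically determinate truss with one displacement constraint via the unit-load method), not an example taken from the report itself; members carrying compatible force signs (F*f > 0) are assumed.

```python
import numpy as np

def oc_single_displacement(F, f, L, E, rho, u_max):
    """Optimality-criteria sizing for a statically determinate truss with one
    displacement constraint (unit-load method).

    F: member forces under the applied load, f: forces under the unit load,
    L: member lengths, E: Young's modulus, rho: material density,
    u_max: allowed displacement at the unit-load point.
    Returns member areas A minimizing weight while meeting u <= u_max.
    """
    F, f, L, rho = map(np.asarray, (F, f, L, rho))
    # Stationarity of the Lagrangian gives A_i = sqrt(lambda * F_i * f_i / (E * rho_i)).
    # The multiplier lambda is fixed by making the constraint active (u = u_max).
    sqrt_lambda = np.sum(L * np.sqrt(F * f * rho / E)) / u_max
    A = sqrt_lambda * np.sqrt(F * f / (E * rho))
    u = np.sum(F * f * L / (E * A))      # check: should equal u_max
    W = np.sum(rho * L * A)              # structural weight (mass)
    return A, u, W

# Three-bar toy example (consistent SI-like units assumed).
A, u, W = oc_single_displacement(F=[1000.0, 800.0, 600.0], f=[1.0, 0.8, 0.5],
                                 L=[2.0, 1.5, 1.0], E=70e9, rho=2700.0, u_max=1e-3)
print(A, u, W)
```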
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunn, Andrew J., E-mail: agunn@uabmc.edu; Sheth, Rahul A.; Luber, Brandon
2017-01-15
Purpose: The purpose of this study was to evaluate the ability of various radiologic response criteria to predict patient outcomes after trans-arterial chemo-embolization with drug-eluting beads (DEB-TACE) in patients with advanced-stage (BCLC C) hepatocellular carcinoma (HCC). Materials and methods: Hospital records from 2005 to 2011 were retrospectively reviewed. Non-infiltrative lesions were measured at baseline and on follow-up scans after DEB-TACE according to various common radiologic response criteria, including guidelines of the World Health Organization (WHO), Response Evaluation Criteria in Solid Tumors (RECIST), the European Association for the Study of the Liver (EASL), and modified RECIST (mRECIST). Statistical analysis was performed to see which, if any, of the response criteria could be used as a predictor of overall survival (OS) or time-to-progression (TTP). Results: 75 patients met inclusion criteria. Median OS and TTP were 22.6 months (95 % CI 11.6–24.8) and 9.8 months (95 % CI 7.1–21.6), respectively. Univariate and multivariate Cox analyses revealed that none of the evaluated criteria had the ability to be used as a predictor for OS or TTP. Analysis of the C index in both univariate and multivariate models showed that the evaluated criteria were not accurate predictors of either OS (C-statistic range: 0.51–0.58 in the univariate model; range: 0.54–0.58 in the multivariate model) or TTP (C-statistic range: 0.55–0.59 in the univariate model; range: 0.57–0.61 in the multivariate model). Conclusion: Current response criteria are not accurate predictors of OS or TTP in patients with advanced-stage HCC after DEB-TACE.
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.
Vera, J Fernando; Macías, Rodrigo
2017-06-01
One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
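A minimal sketch of the within-block/between-block decomposition described above, computed directly from a one-mode dissimilarity matrix and a candidate partition. The identities used hold exactly when the dissimilarities are Euclidean distances, which is the reference case discussed in the abstract; the points and labels are made up for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def within_between_dispersion(D, labels):
    """Decompose the total point scatter of a one-mode dissimilarity matrix D
    into within-block and between-block terms for a given partition.

    Uses W_k = sum over the C_k x C_k block of d_ij^2 / (2 * n_k), which equals
    the within-cluster sum of squares when D holds Euclidean distances.
    """
    D2 = np.asarray(D, dtype=float) ** 2
    labels = np.asarray(labels)
    n = len(labels)
    total = D2.sum() / (2 * n)              # total scatter T
    within = 0.0
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        within += D2[np.ix_(idx, idx)].sum() / (2 * len(idx))
    return within, total - within           # (W, B) with T = W + B

# Toy example: pairwise Euclidean distances of 6 points forming 2 clear clusters.
pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])
D = squareform(pdist(pts))
W, B = within_between_dispersion(D, labels=[0, 0, 0, 1, 1, 1])
print(W, B)   # e.g., track W across K to build an elbow/variance-based criterion
```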
The differences in electrocardiogram interpretation in top-level athletes.
Jakubiak, Agnieszka A; Burkhard-Jagodzińska, Krystyna; Król, Wojciech; Konopka, Marcin; Bursa, Dominik; Sitkowski, Dariusz; Kuch, Marek; Braksator, Wojciech
2017-01-01
The Ministry of Health in Poland recommends electrocardiogram (ECG)-based cardiovascular screening in athletes, but so far there has been a lack of guidelines on preparticipation assessment. We compared different criteria of ECG screening assessment in a group of top-level athletes. The aims were to evaluate the prevalence of ECG changes in athletes that necessitate further cardiological work-up according to three criteria in various age groups as well as to identify factors determining the occurrence of changes related and unrelated to the training. 262 high-dynamic, high-static Polish athletes (rowers, cyclists, canoeists) were divided into two age categories: young (≤ 18 years of age; n = 177, mean age 16.9 ± 0.8; 15-18 years) and elite (> 18 years of age; n = 85, mean age 22.9 ± 3.4; 19-34 years). All sports persons had a 12-lead ECG performed and evaluated according to 2010 European Society of Cardiology (ESC) recommendations, 2012 Seattle criteria, and 2014 Refined criteria. The Refined criteria reduced (p < 0.001) the number of training-unrelated ECG findings to 8.0% vs. 12.6% (Seattle criteria) and 30.5% (ESC recommendations). All three criteria revealed more training-related changes in the group of older athletes (76.5% vs. 55.9%, p = 0.001). Predictors that significantly (p < 0.005) affected the occurrence of adaptive changes were the age of the athlete, training duration (in years), and male gender. 1. The ESC criteria identified a group of athletes that was unacceptably large, as for the screening test, requiring verification with other methods (every fourth athlete). 2. The use of the Refined criteria helps to significantly reduce the frequency and necessity for additional tests. 3. The dependence of adaptive changes on training duration and athletes' age confirms the benign nature of those ECG findings.
Combining MCDM Methods and AHP to Improve TTQS: A Case Study of the VETC
ERIC Educational Resources Information Center
Chung, Kuo-Cheng; Chang, Ling-Chen
2015-01-01
This study proposed the use of the benchmarking framework to evaluate the performance of vocational education and training centers (VETC) in using the Taiwan Training Quality System (TTQS) to ensure the advantages and disadvantages of each factor and to confirm the priority of the weights of the criteria and alternative solutions. This study used…
Getting the Right Wheelchair for Travel: A WC19-Compliant Wheelchair
ERIC Educational Resources Information Center
Manary, Miriam A.; Hobson, Douglas A.; Schneider, Lawrence W.
2007-01-01
Children and adults who must remain seated in their wheelchairs while traveling are often at a disadvantage in terms of crash safety. The new voluntary wheelchair industry standard WC19 (short for Section 19 of the ANSI/RESNA wheelchair standards) works to close the safety gap by providing design and performance criteria and test methods to assess…
Adaptive coding of MSS imagery. [Multi Spectral band Scanners
NASA Technical Reports Server (NTRS)
Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.
1977-01-01
A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.
Relationship between time management in construction industry and project management performance
NASA Astrophysics Data System (ADS)
Nasir, Najuwa; Nawi, Mohd Nasrun Mohd; Radzuan, Kamaruddin
2016-08-01
Nowadays, the construction industry, particularly in Malaysia, struggles to achieve good time management for construction projects. Project managers have a great responsibility to keep a project on schedule for successful completion. However, studies show that delays, especially in the Malaysian construction industry, remain unresolved due to weaknesses in managing projects. In addition, the quality of time management on construction projects is generally poor. Because of the issue of progressively extended delays, time performance becomes an important subject to be explored in order to investigate delay factors. The method of this study is a review of the literature on issues in the construction industry affecting project time performance in general, focusing on the processes involved in project management. Based on the study, it was found that knowledge, commitment and cooperation are the main criteria, overall, for managing a project smoothly from execution to completion. It can be concluded that strength between the project manager and team members in these main criteria while conducting the project is highly needed for good time performance. However, the factors of poor time performance that are strongly related to project management have not been well established. Hence, this study has been conducted to establish the factors of poor time performance and their relation to project management.
Lee, Seul Chan; Cha, Min Chul; Hwangbo, Hwan; Mo, Sookhee; Ji, Yong Gu
2018-02-01
This study aimed at investigating the effect of two smartphone form factors (width and bottom bezel) on touch behaviors with one-handed interaction. User experiments on tapping tasks were conducted for four widths (67, 70, 72, and 74 mm) and five bottom bezel levels (2.5, 5, 7.5, 10, and 12.5 mm). Task performance, electromyography, and subjective workload data were collected to examine the touch behavior. The success rate and task completion time were collected as task performance measures. The NASA-TLX method was used to observe the subjective workload. The electromyogram signals of two thumb muscles, namely the first dorsal interosseous and abductor pollicis brevis, were observed. The task performances deteriorated with increasing width level. The subjective workload and electromyography data showed similar patterns with the task performances. The task performances of the bottom bezel devices were analyzed by using three different evaluation criteria. The results from these criteria indicated that tasks became increasingly difficult as the bottom bezel level decreased. The results of this study provide insights into the optimal range of smartphone form factors for one-handed interaction, which could contribute to the design of new smartphones. Copyright © 2017. Published by Elsevier Ltd.
Barthel, Alexander; Johnson, Alex; Osgood, Greg; Kazanzides, Peter; Navab, Nassir; Fuerst, Bernhard
2018-01-01
Purpose Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. Methods Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. Results Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. Conclusions With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions. PMID:28343301
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs between reference values compared to the other 2 BGMS. Insulin dosing errors were lowest for the Contour Next USB than compared to the other systems. All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and CEG criterion. However, taking together all analyses, differences in performance of potential clinical relevance may be observed. Results showed that Contour Next USB had lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
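The MARD statistic used above to compare the meters against the hexokinase reference is simply the mean absolute relative difference over paired measurements; the glucose values below are made up purely for illustration.

```python
import numpy as np

def mard(meter, reference):
    """Mean absolute relative difference (%) of meter readings versus the
    laboratory reference values, paired measurement-wise."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(meter - reference) / reference)

# Hypothetical paired glucose values in mg/dL.
ref  = np.array([95, 140, 210, 68, 180])
bgms = np.array([99, 133, 221, 71, 172])
print(round(mard(bgms, ref), 2))   # lower MARD indicates closer agreement
```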
Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki
2012-09-01
Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
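The four delta check quantities and the proposed DD/RR ratio can be written down directly from their definitions; the sketch below uses a hypothetical potassium result pair and an assumed reference range purely for illustration, and any decision thresholds would have to come from the laboratory's own criteria.

```python
def delta_checks(current, previous, days_between, ref_low, ref_high):
    """The four common delta check quantities plus the DD/RR ratio used in the
    proposed decision criteria (values and ranges here are illustrative only)."""
    dd = current - previous                         # delta difference
    dpc = 100.0 * dd / previous                     # delta percent change
    rd = dd / days_between                          # rate difference (per day)
    rpc = dpc / days_between                        # rate percent change (per day)
    dd_rr = dd / (ref_high - ref_low)               # delta difference / reference range width
    return {"DD": dd, "DPC": dpc, "RD": rd, "RPC": rpc, "DD/RR": dd_rr}

# Example: serum potassium (reference range assumed to be 3.5-5.1 mmol/L).
print(delta_checks(current=5.9, previous=4.2, days_between=2,
                   ref_low=3.5, ref_high=5.1))
```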
Influence of diagnostic criteria on the interpretation of adrenal vein sampling.
Lethielleux, Gaëlle; Amar, Laurence; Raynaud, Alain; Plouin, Pierre-François; Steichen, Olivier
2015-04-01
Guidelines promote the use of adrenal vein sampling (AVS) to document lateralized aldosterone hypersecretion in primary aldosteronism. However, there are large discrepancies between institutions in the criteria used to interpret its results. This study evaluates the consequences of these differences on the classification and management of patients. The results of all 537 AVS procedures performed between January 2001 and July 2010 in our institution were interpreted with 4 diagnostic criteria used in experienced institutions where AVS is performed without cosyntropin (Brisbane, Padua, Paris, and Turin) and with criteria proposed by a recent consensus statement. AVS procedures were classified as unsuccessful, lateralized, or not lateralized according to each set of criteria. Almost 5× more AVS procedures were classified as unsuccessful with the strictest criteria than with the least strict criteria (18% versus 4%, respectively). Similarly, over 2× more AVS procedures were classified as lateralized with the least stringent criteria than with the most stringent criteria (60% versus 26%, respectively). Multiple samples were available from ≥1 side for 155 AVS procedures. These procedures were classified differently by ≥2 right-left sample pairs in 12% to 20% of cases. Thus, different sets of criteria used to interpret AVS in experienced institutions translate into heterogeneous classifications and hence management decisions, for patients with primary aldosteronism. Defining the most appropriate procedures and diagnostic criteria is needed for AVS to achieve optimal performance and fully justify its status as a gold standard. © 2015 American Heart Association, Inc.
Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.
Montenegro, Diego; Cunha, Ana Paula da; Ladeia-Andrade, Simone; Vera, Mauricio; Pedroso, Marcel; Junqueira, Angela
2017-10-01
Chagas disease (CD), caused by the protozoan Trypanosoma cruzi, is a neglected human disease. It is endemic to the Americas and is estimated to have an economic impact, including lost productivity and disability, of 7 billion dollars per year on average. To assess vulnerability to vector-borne transmission of T. cruzi in domiciliary environments within an area undergoing domiciliary vector interruption of T. cruzi in Colombia. Multi-criteria decision analysis [preference ranking method for enrichment evaluation (PROMETHEE) and geometrical analysis for interactive assistance (GAIA) methods] and spatial statistics were performed on data from a socio-environmental questionnaire and an entomological survey. In the construction of multi-criteria descriptors, decision-making processes and indicators of five determinants of the CD vector pathway were summarily defined, including: (1) house indicator (HI); (2) triatominae indicator (TI); (3) host/reservoir indicator (Ho/RoI); (4) ecotope indicator (EI); and (5) socio-cultural indicator (S-CI). Determination of vulnerability to CD is mostly influenced by TI, with 44.96% of the total weight in the model, while the lowest contribution was from S-CI, with 7.15%. The five indicators comprise 17 indices, and include 78 of the original 104 priority criteria and variables. The PROMETHEE and GAIA methods proved very efficient for prioritisation and quantitative categorisation of socio-environmental determinants and for better determining which criteria should be considered for interrupting the man-T. cruzi-vector relationship in endemic areas of the Americas. Through the analysis of spatial autocorrelation it is clear that there is a spatial dependence in establishing categories of vulnerability, therefore, the effect of neighbors' setting (border areas) on local values should be incorporated into disease management for establishing programs of surveillance and control of CD via vector. The study model proposed here is flexible and can be adapted to various eco-epidemiological profiles and is suitable for focusing anti-T. cruzi serological surveillance programs in vulnerable human populations.
Tabrizi, Jafar-Sadegh; Farahbakhsh, Mostafa; Shahgoli, Javad; Rahbar, Mohammad Reza; Naghavi-Behzad, Mohammad; Ahadi, Hamid-Reza; Azami-Aghdash, Saber
2015-10-01
Excellence and quality models are comprehensive methods for improving the quality of healthcare. The aim of this study was to design an excellence and quality model for primary health care training centers using the Delphi method. First, comprehensive information was collected through a literature review. In the extracted references, 39 models from 34 countries were identified, and related sub-criteria and standards were extracted from 34 of these 39 models. A primary pattern comprising 8 criteria, 55 sub-criteria, and 236 standards was then developed as a Delphi questionnaire and evaluated in four stages by 9 health care system specialists in Tabriz and 50 specialists from all around the country. After the four stages of evaluation by specialists, the primary model (8 criteria, 55 sub-criteria, and 236 standards) was reduced to 8 criteria, 45 sub-criteria, and 192 standards. The major criteria of the model are leadership, strategic and operational planning, resource management, information analysis, human resources management, process management, customer results, and functional results, with a total score of 1000 assigned by the specialists. Functional results had the maximum score of 195, whereas planning had the minimum score of 60. Furthermore, leadership had the most sub-criteria (10) and strategic planning the fewest (3). The model introduced in this research was designed following 34 reference models from around the world. This model could provide a proper framework for health system managers in improving quality.
Computer modeling of a two-junction, monolithic cascade solar cell
NASA Technical Reports Server (NTRS)
Lamorte, M. F.; Abbott, D.
1979-01-01
The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AMO and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.
Design Criteria for X-CRV Honeycomb Panels: A Preliminary Study
NASA Technical Reports Server (NTRS)
Caccese, Vincent; Verinder, Irene
1997-01-01
The objective of this project is to perform the first step in developing structural design criteria for composite sandwich panels that are to be used in the aeroshell of the crew return vehicle (X-CRV). The preliminary concept includes a simplified method for assessing the allowable strength in the laminate material. Ultimately, it is intended that the design criteria be extended to address the global response of the vehicle. This task will require execution of a test program as outlined in the recommendation section of this report. The aeroshell of the X-CRV comprises composite sandwich panels consisting of fiberite face sheets and a phenolic honeycomb core. The function of the crew return vehicle is to enable the safe return of injured or ill crewpersons from the space station, the evacuation of crew in case of emergency, or the return of crew if an orbiter is not available. A significant objective of the X-CRV project is to demonstrate that this vehicle can be designed, built and operated at lower cost and with a significantly faster development time. Development time can be reduced by driving out issues in both structural design and manufacturing concurrently. This means that structural design and analysis progresses in conjunction with manufacturing and testing. Preliminary test results on laminate coupons are presented in the report. Based on these results, a method for detecting failure in the laminate material is presented. In the long term, extrapolation of coupon data to large-scale structures may be inadequate. Test coupons used to develop failure criteria at the material scale are typically small when compared to the overall structure. Their inherently small size indicates that the material failure criteria can be used to predict localized failure of the structure; however, they cannot be used to predict all failure modes. Some failure modes occur only when the structure or one of its sub-components is studied as a whole. Conversely, localized failure may not indicate failure of the structure as a whole, and the amount of reserve capacity, if any, should be assessed. To develop complete design criteria, experimental studies of the sandwich panel are needed. Only then can conservative and accurate design criteria be developed. These criteria should include the effects of flaws and defects, and environmental factors such as temperature and moisture. Preliminary results presented in this report suggest that a simplified analysis can be used to predict the strength of a laminate. Testing for environmental effects has yet to be included in this work. The so-called 'rogue flaw test' appears to be a promising method for assessing the effect of a defect in a laminate. This method fits in quite well with the philosophy of achieving a damage-tolerant design.
Real-time ultrasonic weld evaluation system
NASA Astrophysics Data System (ADS)
Katragadda, Gopichand; Nair, Satish; Liu, Harry; Brown, Lawrence M.
1996-11-01
Ultrasonic testing techniques are currently used as an alternative to radiography for detecting, classifying, and sizing weld defects, and for evaluating weld quality. Typically, ultrasonic weld inspections are performed manually, which requires significant operator expertise and time. Thus, in recent years, the emphasis has been on developing automated methods to aid or replace operators in critical weld inspections where inspection time, reliability, and operator safety are major issues. During this period, significant advances were made in the areas of weld defect classification and sizing. Very few of these methods, however, have found their way into the market, largely due to the lack of an integrated approach enabling real-time implementation. Also, not much research effort was directed toward improving weld acceptance criteria. This paper presents an integrated system utilizing state-of-the-art techniques for complete automation of the weld inspection procedure. The modules discussed include transducer tracking, classification, sizing, and weld acceptance criteria. Transducer tracking was studied by experimentally evaluating sonic and optical position tracking techniques. Details of this evaluation are presented. Classification is obtained using a multi-layer perceptron. Results from different feature extraction schemes, including a new method based on a combination of time- and frequency-domain signal representations, are given. Algorithms developed to automate defect registration and sizing are discussed. A fuzzy-logic weld acceptance criterion is presented, describing how this scheme provides improved robustness compared to traditional flow-diagram standards.
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids. Copyright © 2010 Elsevier Ltd. All rights reserved.
Ouyang, Hui; Guo, Yicheng; He, Mingzhen; Zhang, Jinlian; Huang, Xiaofang; Zhou, Xin; Jiang, Hongliang; Feng, Yulin; Yang, Shilin
2015-03-01
A simple, sensitive, and specific liquid chromatography-tandem mass spectrometry method was developed and validated for the determination in rat plasma of Pulsatilla saponin D, a potential antitumor constituent isolated from Pulsatilla chinensis. Rat plasma samples were pretreated by protein precipitation with methanol. The method validation was performed in accordance with US Food and Drug Administration guidelines, and the results met the acceptance criteria. The method was successfully applied to assess the pharmacokinetics and oral bioavailability of Pulsatilla saponin D in rats. Copyright © 2014 John Wiley & Sons, Ltd.
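The abstract reports pharmacokinetics and oral bioavailability but no computational detail; the sketch below is only a generic non-compartmental calculation (linear trapezoidal AUC and the standard dose-normalized bioavailability ratio) on hypothetical concentration-time data, not the study's results.

```python
# Generic non-compartmental PK sketch: trapezoidal AUC and absolute oral
# bioavailability F = (AUC_po / Dose_po) / (AUC_iv / Dose_iv).
# All concentration-time values and doses are hypothetical placeholders.
import numpy as np

def auc_trapezoid(t, c):
    """Linear trapezoidal AUC(0 to t_last); t in h, c in ng/mL."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

t = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
c_iv = [950, 640, 480, 300, 150, 60, 15, 5.0]   # after a 2 mg/kg i.v. dose
c_po = [0, 35, 80, 120, 95, 50, 14, 4.0]        # after a 10 mg/kg oral dose

auc_iv, auc_po = auc_trapezoid(t, c_iv), auc_trapezoid(t, c_po)
F = (auc_po / 10.0) / (auc_iv / 2.0)
print(f"AUC_iv={auc_iv:.0f}, AUC_po={auc_po:.0f}, F={F:.1%}")
```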
Montenegro, Diego; da Cunha, Ana Paula; Ladeia-Andrade, Simone; Vera, Mauricio; Pedroso, Marcel; Junqueira, Angela
2017-01-01
BACKGROUND Chagas disease (CD), caused by the protozoan Trypanosoma cruzi, is a neglected human disease. It is endemic to the Americas and is estimated to have an economic impact, including lost productivity and disability, of 7 billion dollars per year on average. OBJECTIVES To assess vulnerability to vector-borne transmission of T. cruzi in domiciliary environments within an area undergoing interruption of domiciliary vector transmission of T. cruzi in Colombia. METHODS Multi-criteria decision analysis [preference ranking method for enrichment evaluation (PROMETHEE) and geometrical analysis for interactive assistance (GAIA) methods] and spatial statistics were performed on data from a socio-environmental questionnaire and an entomological survey. In the construction of multi-criteria descriptors, decision-making processes and indicators of five determinants of the CD vector pathway were summarily defined, including: (1) house indicator (HI); (2) triatominae indicator (TI); (3) host/reservoir indicator (Ho/RoI); (4) ecotope indicator (EI); and (5) socio-cultural indicator (S-CI). FINDINGS Determination of vulnerability to CD is mostly influenced by TI, with 44.96% of the total weight in the model, while the lowest contribution is from S-CI, with 7.15%. The five indicators comprise 17 indices and include 78 of the original 104 priority criteria and variables. The PROMETHEE and GAIA methods proved very efficient for prioritisation and quantitative categorisation of socio-environmental determinants and for better determining which criteria should be considered for interrupting the man-T. cruzi-vector relationship in endemic areas of the Americas. The analysis of spatial autocorrelation shows a clear spatial dependence in the vulnerability categories; therefore, the effect of neighbouring settings (border areas) on local values should be incorporated into disease management when establishing programs for surveillance and vector control of CD. CONCLUSIONS The study model proposed here is flexible, can be adapted to various eco-epidemiological profiles, and is suitable for focusing anti-T. cruzi serological surveillance programs on vulnerable human populations. PMID:28953999
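PROMETHEE II ranks alternatives by net outranking flows; the sketch below is a minimal implementation with a linear preference function on a hypothetical decision matrix, not the study's actual indicators, weights, or thresholds.

```python
# Minimal PROMETHEE II sketch: net outranking flows with a linear preference
# function. The decision matrix, weights, and thresholds are hypothetical.
import numpy as np

def promethee_ii(M, weights, p):
    """M: (n_alt, n_crit) scores to maximize; p: preference thresholds."""
    n, _ = M.shape
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = M[a] - M[b]
            pref = np.clip(d / p, 0.0, 1.0)   # linear preference
            pref[d <= 0] = 0.0                # no preference when a is not better
            pi_ab = np.dot(weights, pref)
            phi[a] += pi_ab
            phi[b] -= pi_ab
    return phi / (n - 1)                      # net flow per alternative

M = np.array([[0.7, 0.4, 0.9],   # e.g. three areas scored on three indicators
              [0.5, 0.8, 0.6],
              [0.9, 0.3, 0.4]])
weights = np.array([0.45, 0.35, 0.20])
p = np.array([0.3, 0.3, 0.3])

print(np.argsort(-promethee_ii(M, weights, p)))   # ranking, highest flow first
```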
SU-E-T-60: A Plan Quality Index in IMRT QA That Is Independent of the Acceptance Criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; Kang, S; Kim, T
2015-06-15
Purpose: In IMRT QA, plan quality evaluation is based on the pass rate under preset acceptance criteria, mostly using gamma values. This method is convenient, but its result depends strongly on the chosen acceptance criteria and lacks sensitivity in judging how good the plan is. In this study, we introduced a simple but effective plan quality index for IMRT QA based only on dose difference to address these shortcomings, and investigated its validity. Methods: The proposed index is a single value calculated mainly from a point-by-point comparison between planned and measured dose distributions, and it becomes “1” in an ideal case. A systematic evaluation was performed with one-dimensional test dose distributions. For 3 hypothetical dose profiles, various displacements (in both dose and space) were introduced, the proposed index was calculated for each case, and the behavior of the obtained indices was analyzed and compared with that of gamma evaluation. In addition, the feasibility of the index was assessed with clinical IMRT/VMAT/SBRT QA cases for different sites (prostate, head & neck, liver, lung, spine, and abdomen). Results: The proposed index showed a more robust correlation with the amount of induced displacement compared to the gamma evaluation method. No matter what the acceptance criteria are (e.g., whether 3%/3mm or 2%/2mm), it was possible to clearly rank every case with the proposed index, while it was difficult to do so with the gamma evaluation method. Conclusion: IMRT plan quality can be evaluated quantitatively by the proposed index. The proposed index would provide useful information for better judging the level of goodness of each plan, and its result is independent of the acceptance criteria. This work was supported by the Radiation Technology R&D program (No. 2013M2A2A7043498) and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT&Future Planning.
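The abstract does not give the formula for the proposed index; the sketch below shows one plausible dose-difference-based form (1 minus the mean normalized absolute difference) alongside a simple 1-D global gamma pass rate for comparison. The dose profiles and the index formula itself are assumptions for illustration only.

```python
# Hypothetical dose-difference index vs. a simple 1-D global gamma pass rate.
# The index formula below is an illustrative guess, not the authors' definition.
import numpy as np

def dose_difference_index(planned, measured):
    return 1.0 - np.mean(np.abs(measured - planned) / planned.max())

def gamma_pass_rate_1d(x, planned, measured, dd=0.03, dta=3.0):
    """Global gamma with dose criterion dd (fraction of max) and dta (mm)."""
    d_norm = dd * planned.max()
    flags = []
    for xi, mi in zip(x, measured):
        gamma_sq = ((mi - planned) / d_norm) ** 2 + ((xi - x) / dta) ** 2
        flags.append(np.sqrt(gamma_sq.min()) <= 1.0)
    return np.mean(flags)

x = np.linspace(0, 100, 201)                     # position (mm)
planned = np.exp(-((x - 50) / 15) ** 2)          # hypothetical profile
measured = np.exp(-((x - 51) / 15) ** 2) * 1.02  # 1 mm shift, 2% scaling

print("index:", round(dose_difference_index(planned, measured), 3))
print("gamma 3%/3mm pass rate:", round(gamma_pass_rate_1d(x, planned, measured), 3))
```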
Lean Information Management: Criteria For Selecting Key Performance Indicators At Shop Floor
NASA Astrophysics Data System (ADS)
Iuga, Maria Virginia; Kifor, Claudiu Vasile; Rosca, Liviu-Ion
2015-07-01
Most successful organizations worldwide use key performance indicators as an important part of their corporate strategy in order to forecast, measure and plan their businesses. Performance metrics vary in their purpose, definition and content. Therefore, the way organizations select what they think are the optimal indicators for their businesses varies from company to company, sometimes even from department to department. This study aims to answer the question of how key performance indicators are best defined and selected. Moreover, it identifies appropriate criteria for selecting key performance indicators at shop-floor level. This paper contributes to prior research by analysing and comparing previously researched selection criteria and proposes an original six-criteria model for choosing the most adequate KPIs. Furthermore, the authors propose further steps to close research gaps within this field of study.
Ren, Jingzheng
2018-01-01
The objective of this study is to develop a generic multi-attribute decision analysis framework for ranking technologies for ballast water treatment and determining their grades. An evaluation criteria system consisting of eight criteria in four categories was used to evaluate the technologies for ballast water treatment. The Best-Worst method, a subjective weighting method, and the criteria importance through inter-criteria correlation (CRITIC) method, an objective weighting method, were combined to determine the weights of the evaluation criteria. Extension theory was employed to prioritize the technologies for ballast water treatment and determine their grades. An illustrative case including four technologies for ballast water treatment, i.e. Alfa Laval (T1), Hyde (T2), Unitor (T3), and NaOH (T4), was studied using the proposed method, and Hyde (T2) was identified as the best technology. Sensitivity analysis was also carried out to investigate the effects of the combination coefficients and the weights of the evaluation criteria on the final priority order of the four technologies for ballast water treatment. The weighted sum method and TOPSIS were also employed to rank the four technologies, and the results determined by these two methods are consistent with those determined by the proposed method in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.
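Neither the Best-Worst pairwise data nor the extension-theory grading is reproduced here; the sketch below only illustrates the objective CRITIC weighting step and a simple convex combination with assumed subjective weights, on a hypothetical decision matrix.

```python
# CRITIC objective weights combined with assumed subjective weights.
# The decision matrix, subjective weights, and coefficient are hypothetical.
import numpy as np

def critic_weights(M):
    """M: (alternatives, criteria) matrix of benefit-type scores."""
    # Min-max normalize each criterion, then weight by its standard deviation
    # times its total 'conflict' with the other criteria (1 - correlation).
    Z = (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))
    sigma = Z.std(axis=0, ddof=1)
    conflict = (1.0 - np.corrcoef(Z, rowvar=False)).sum(axis=0)
    info = sigma * conflict
    return info / info.sum()

M = np.array([[0.6, 0.7, 0.3, 0.9],   # e.g. four treatment technologies
              [0.8, 0.5, 0.6, 0.7],   # scored on four criteria
              [0.4, 0.9, 0.8, 0.5],
              [0.7, 0.6, 0.5, 0.6]])
w_subjective = np.array([0.4, 0.3, 0.2, 0.1])   # assumed Best-Worst output
lam = 0.5                                        # combination coefficient

w_combined = lam * w_subjective + (1 - lam) * critic_weights(M)
print(np.round(w_combined, 3))
```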
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI... Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance...
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI... Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance...
5 CFR 430.404 - Certification criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 430.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Performance Appraisal Certification for Pay Purposes § 430.404 Certification criteria. (a) To be... make meaningful distinctions based on relative performance and meet the other requirements of 5 U.S.C...
ANALYTICAL METHODS NECESSARY TO IMPLEMENT RISK-BASED CRITERIA FOR CHEMICALS IN MUNICIPAL SLUDGE
The Ambient Water Quality Criteria that were promulgated by the U.S. Environmental Protection Agency in 1980 included water concentration levels which, for many pollutants, were so low as to be unmeasurable by standard analytical methods. Criteria for controlling toxics in munici...
Supplier selection criteria for sustainable supply chain management in thermal power plant
NASA Astrophysics Data System (ADS)
Firoz, Faisal; Narayan Biswal, Jitendra; Satapathy, Suchismita
2018-02-01
Supplies are always in great demand in industrial operations. The quality of raw materials, their price, and their sustainability and environmental effects are major concerns for industrial operators today. Supply chain management focuses on how the supply of different products is carried out, with the aim of optimizing each operation and thereby improving the efficiency of the chain as a whole. This paper deals with the criteria that need to be evaluated before selecting a supplier, focusing in particular on thermal power plants. The main suppliers to a thermal power plant are coal suppliers, and the quality of coal directly determines the efficiency of the whole plant. Where coal is concerned, environmental pollution plays a crucial role. The Analytic Network Process (ANP) method is used here to select suppliers for the thermal power sector in the Indian context. After applying ANP to prioritize the sustainable supplier selection criteria, it is found that for thermal power industries the best suppliers are nationalized/state-owned suppliers, followed by import suppliers, with privately owned suppliers ranked last. Privately owned suppliers must therefore be more concerned about their performance; to compete in the global market, they have to place more emphasis on the most important criteria, namely sustainability, followed by fuel cost and quality. Sub-criteria such as a clean program, environmental issues, quality, reliability, service rate, investment in high technology, green transportation channels, and waste management still need continuous improvement according to their priority.
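The full ANP network and pairwise judgments are not given in the abstract; the sketch below only shows the limiting step of a simplified ANP: raising a column-stochastic weighted supermatrix to successive powers until the priorities stabilize. The supermatrix entries are hypothetical, not the study's judgments.

```python
# Simplified ANP sketch: limit of a column-stochastic weighted supermatrix.
# The supermatrix entries below are hypothetical placeholders.
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise a column-stochastic supermatrix to powers until convergence."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.abs(M_next - M).max() < tol:
            return M_next
        M = M_next
    return M

# Hypothetical 3x3 weighted supermatrix (criteria and supplier clusters merged);
# each column sums to 1.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])

L = limit_supermatrix(W)
print(np.round(L[:, 0], 3))   # limiting priorities
```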
Marcu, Orly; Dodson, Emma-Joy; Alam, Nawsad; Sperber, Michal; Kozakov, Dima; Lensink, Marc F; Schueler-Furman, Ora
2017-03-01
CAPRI rounds 28 and 29 included, for the first time, peptide-receptor targets of three different systems, reflecting increased appreciation of the importance of peptide-protein interactions. The CAPRI rounds allowed us to objectively assess the performance of Rosetta FlexPepDock, one of the first protocols to explicitly include peptide flexibility in docking, accounting for peptide conformational changes upon binding. We discuss here successes and challenges in modeling these targets: we obtain top-performing, high-resolution models of the peptide motif for cases with known binding sites, but there is a need for better modeling of flanking regions, as well as better selection criteria, in particular for unknown binding sites. These rounds have also provided us the opportunity to reassess the success criteria, to better reflect the quality of a peptide-protein complex model. Using all models submitted to CAPRI, we analyze the correlation between current classification criteria and the ability to retrieve critical interface features, such as hydrogen bonds and hotspots. We find that loosening the backbone (and ligand) RMSD threshold, together with a restriction on the side chain RMSD measure, allows us to improve the selection of high-accuracy models. We also suggest a new measure to assess interface hydrogen bond recovery, which is not assessed by the current CAPRI criteria. Finally, we find that surprisingly much can be learned from rather inaccurate models about binding hotspots, suggesting that the current status of peptide-protein docking methods, as reflected by the submitted CAPRI models, can already have a significant impact on our understanding of protein interactions. Proteins 2017; 85:445-462. © 2016 Wiley Periodicals, Inc.
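The authors' exact hydrogen-bond recovery measure is not given in the abstract; the sketch below shows one straightforward way to express it, as the fraction of native interface hydrogen bonds (represented here simply as donor-acceptor residue pairs) reproduced in a model. The pair lists are hypothetical placeholders.

```python
# Interface hydrogen-bond recovery as the fraction of native donor-acceptor
# residue pairs reproduced by a model; the pair lists are hypothetical.

def hbond_recovery(native_pairs, model_pairs):
    native = set(native_pairs)
    if not native:
        return 0.0
    return len(native & set(model_pairs)) / len(native)

# Pairs given as (receptor residue, peptide residue) identifiers
native = [("R:ASP45", "P:SER3"), ("R:TYR102", "P:LEU5"), ("R:LYS88", "P:GLU7")]
model = [("R:ASP45", "P:SER3"), ("R:LYS88", "P:GLU7"), ("R:GLN60", "P:SER3")]

print(f"H-bond recovery: {hbond_recovery(native, model):.2f}")   # 0.67
```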
Type-2 fuzzy set extension of DEMATEL method combined with perceptual computing for decision making
NASA Astrophysics Data System (ADS)
Hosseini, Mitra Bokaei; Tarokh, Mohammad Jafar
2013-05-01
Most decision-making methods used to evaluate a system or identify its weak and strong points are based on fuzzy sets and evaluate the criteria with words that are modeled as fuzzy sets. The ambiguity and vagueness of the words and the different perceptions of a word are not considered in these methods. For this reason, decision-making methods that consider the perceptions of decision makers are desirable. Perceptual computing is a subjective judgment method that recognizes that words mean different things to different people. This method models words with interval type-2 fuzzy sets, which capture the uncertainty of the words. Also, there are interrelations and dependencies between decision-making criteria in the real world; therefore, using decision-making methods that cannot consider these relations is not feasible in some situations. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method considers the interrelations between decision-making criteria. The current study combined DEMATEL and perceptual computing in order to improve decision-making methods. To this end, the fuzzy DEMATEL method was extended to type-2 fuzzy sets in order to obtain the weights of dependent criteria based on the words. The application of the proposed method is presented for knowledge management evaluation criteria.
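The type-2 fuzzy and perceptual-computing layers are not reproduced here; the sketch below is only the crisp DEMATEL core: normalize a direct-influence matrix, compute the total-relation matrix T = N(I - N)^{-1}, and derive prominence and relation values. The influence scores are hypothetical.

```python
# Crisp DEMATEL core: total-relation matrix and prominence/relation values.
# The direct-influence scores are hypothetical, not the study's word data.
import numpy as np

def dematel(D):
    """D: (k, k) direct-influence matrix (e.g. 0..4 scale, zero diagonal)."""
    N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())   # normalize
    T = N @ np.linalg.inv(np.eye(len(D)) - N)                # total relation
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c   # prominence (importance), relation (cause/effect)

D = np.array([[0, 3, 2, 1],
              [2, 0, 3, 2],
              [1, 2, 0, 3],
              [1, 1, 2, 0]], dtype=float)

T, prominence, relation = dematel(D)
print(np.round(prominence, 2), np.round(relation, 2))
```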
Do Processing Patterns of Strengths and Weaknesses Predict Differential Treatment Response?
Miciak, Jeremy; Williams, Jacob L; Taylor, W Pat; Cirino, Paul T; Fletcher, Jack M; Vaughn, Sharon
2016-08-01
No previous empirical study has investigated whether the LD identification decisions of proposed methods to operationalize processing strengths and weaknesses (PSW) approaches for LD identification are associated with differential treatment response. We investigated whether the identification decisions of the concordance/discordance model (C/DM; Hale & Fiorello, 2004) and the Cross-Battery Assessment approach (XBA method; Flanagan, Ortiz, & Alfonso, 2007) were consistent and whether they predicted intervention response beyond that accounted for by pretest performance on measures of reading. Psychoeducational assessments were administered at pretest to 203 fourth-grade students with low reading comprehension, and individual results were used to identify students who met LD criteria according to the C/DM and XBA methods and students who did not. The resulting group status permitted an investigation of agreement between the identification methods and of whether group status at pretest (LD or not LD) was associated with differential treatment response to an intensive reading intervention. The LD identification decisions of the XBA and C/DM demonstrated poor agreement with one another (κ = -.10). Comparisons of posttest performance for students who met LD criteria and those who did not were largely null, with small effect sizes across all measures. LD status, as identified through the C/DM and XBA approaches, was not associated with differential treatment response and did not contribute educationally meaningful information about how students would respond to intensive reading intervention. These results do not support the value of cognitive assessment utilized in this way as part of the LD identification process.
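Agreement between the two identification methods is reported as Cohen's kappa; the sketch below shows the standard kappa computation from a 2x2 contingency table, using hypothetical counts rather than the study's data.

```python
# Cohen's kappa for agreement between two binary LD-identification decisions.
# The contingency counts below are hypothetical, not the study's data.
import numpy as np

def cohens_kappa(table):
    """table[i][j]: count of cases rated i by method A and j by method B."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Rows: C/DM decision (LD, not LD); columns: XBA decision (LD, not LD)
table = [[10, 35],
         [40, 118]]
print(round(cohens_kappa(table), 3))
```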
Subjective global assessment of nutritional status – A systematic review of the literature.
da Silva Fink, Jaqueline; Daniel de Mello, Paula; Daniel de Mello, Elza
2015-10-01
Subjective Global Assessment (SGA) is a nutritional assessment tool widely used in hospital clinical practice, although it is not without limitations in its use. This systematic review intended to update knowledge on the performance of SGA as a method for assessing the nutritional status of hospitalized adults. The PubMed database was consulted using the search term "subjective global assessment". Studies published in English, Portuguese or Spanish between 2002 and 2012 were selected, excluding those not available in full text, letters to the editor, pilot studies, narrative reviews, studies with n < 30, studies of populations younger than 18 years of age, research on non-hospitalized populations, and those that used a modified version of the SGA. Of the 454 studies retrieved, 110 met the eligibility criteria. After applying the exclusion criteria, 21 studies were selected: 6 with surgical patients, 7 with clinical patients, and 8 with both. Most studies demonstrated SGA performance similar to or better than the usual methods for assessing nutritional status, such as anthropometry and laboratory data, but the same result was not found when comparing SGA with nutritional screening methods. Recently published literature demonstrates that SGA is a valid tool for the nutritional diagnosis of hospitalized clinical and surgical patients, and points to a potential superiority of nutritional screening methods in the early detection of malnutrition. Copyright © 2014 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
NASA Astrophysics Data System (ADS)
Dionne, J. P.; Levine, J.; Makris, A.
2018-01-01
To design the next generation of blast mitigation helmets that offer increasing levels of protection against explosive devices, manufacturers must be able to rely on appropriate test methodologies and human surrogates that will differentiate the performance level of various helmet solutions and ensure user safety. Ideally, such test methodologies and associated injury thresholds should be based on widely accepted injury criteria relevant within the context of blast. Unfortunately, even though significant research has taken place over the last decade in the area of blast neurotrauma, there is currently no agreement on the injury mechanisms for blast-induced traumatic brain injury. In the absence of such widely accepted test methods and injury criteria, the current study presents a specific blast test methodology focusing on explosive ordnance disposal protective equipment, involving the readily available Hybrid III mannequin, initially developed for the automotive industry. The doubtful applicability of the associated brain injury criteria (based on both linear and rotational head acceleration) is discussed in the context of blast. Test results encompassing a large number of blast configurations and personal protective equipment are presented, emphasizing the possibility of developing useful correlations between blast parameters, such as the scaled distance, and mannequin engineering measurements (head acceleration). Suggestions are put forward for a practical standardized blast testing methodology that takes into account limitations in the applicability of acceleration-based injury criteria as well as the inherent variability in blast testing results.
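Correlations between blast parameters and mannequin measurements typically start from the Hopkinson-Cranz scaled distance Z = R / W^(1/3); the sketch below computes Z and fits an assumed power-law trend between Z and peak head acceleration using hypothetical test points, not the study's data.

```python
# Hopkinson-Cranz scaled distance Z = R / W**(1/3) and a power-law fit of
# peak head acceleration vs. Z. The test points below are hypothetical.
import numpy as np

def scaled_distance(standoff_m, charge_kg):
    return standoff_m / charge_kg ** (1.0 / 3.0)

standoff = np.array([1.0, 1.5, 2.0, 3.0, 4.0])        # m
charge = np.array([0.5, 0.5, 1.0, 1.0, 2.0])          # kg TNT equivalent
peak_g = np.array([420.0, 230.0, 180.0, 90.0, 55.0])  # peak head acceleration (g)

Z = scaled_distance(standoff, charge)
b, log_a = np.polyfit(np.log(Z), np.log(peak_g), 1)   # log-log linear fit
print(f"fit: a_peak ~ {np.exp(log_a):.0f} * Z^{b:.2f}")
```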
How effective are selection methods in medical education? A systematic review.
Patterson, Fiona; Knight, Alec; Dowell, Jon; Nicholson, Sandra; Cousans, Fran; Cleland, Jennifer
2016-01-01
Selection methods used by medical schools should reliably identify whether candidates are likely to be successful in medical training and ultimately become competent clinicians. However, there is little consensus regarding methods that reliably evaluate non-academic attributes, and longitudinal studies examining predictors of success after qualification are insufficient. This systematic review synthesises the extant research evidence on the relative strengths of various selection methods. We offer a research agenda and identify key considerations to inform policy and practice in the next 50 years. A formalised literature search was conducted for studies published between 1997 and 2015. A total of 194 articles met the inclusion criteria and were appraised in relation to: (i) selection method used; (ii) research question(s) addressed, and (iii) type of study design. Eight selection methods were identified: (i) aptitude tests; (ii) academic records; (iii) personal statements; (iv) references; (v) situational judgement tests (SJTs); (vi) personality and emotional intelligence assessments; (vii) interviews and multiple mini-interviews (MMIs), and (viii) selection centres (SCs). The evidence relating to each method was reviewed against four evaluation criteria: effectiveness (reliability and validity); procedural issues; acceptability, and cost-effectiveness. Evidence shows clearly that academic records, MMIs, aptitude tests, SJTs and SCs are more effective selection methods and are generally fairer than traditional interviews, references and personal statements. However, achievement in different selection methods may differentially predict performance at the various stages of medical education and clinical practice. Research into selection has been over-reliant on cross-sectional study designs and has tended to focus on reliability estimates rather than validity as an indicator of quality. A comprehensive framework of outcome criteria should be developed to allow researchers to interpret empirical evidence and compare selection methods fairly. This review highlights gaps in evidence for the combination of selection tools that is most effective and the weighting to be given to each tool. © 2015 John Wiley & Sons Ltd.
Development of methods for establishing nutrient criteria in lakes and reservoirs: A review.
Huo, Shouliang; Ma, Chunzi; Xi, Beidou; Zhang, Yali; Wu, Fengchang; Liu, Hongliang
2018-05-01
Nutrient criteria provide a scientific foundation for the comprehensive evaluation, prevention, control and management of water eutrophication. In this review, the literature was examined to systematically evaluate the benefits, drawbacks, and applications of statistical analysis, paleolimnological reconstruction, stressor-response modelling, and model inference approaches for determining nutrient criteria. The developments and challenges in the determination of nutrient criteria in lakes and reservoirs are presented. Reference lakes can reflect the original states of lakes, but reference sites are often unavailable. Using the paleolimnological reconstruction method, it is often difficult to reconstruct the historical nutrient conditions of shallow lakes, in which the sediments are easily disturbed. The model inference approach requires sufficient data to identify the appropriate equations and characterize a waterbody or group of waterbodies, thereby increasing the difficulty of establishing nutrient criteria. The stressor-response model is a promising direction for nutrient criteria determination, and the mechanisms underlying stressor-response models should be studied further. Based on studies of the relationships among water ecological criteria, eutrophication, nutrient criteria and plankton, methods for determining nutrient criteria should be closely integrated with water management requirements. Copyright © 2017. Published by Elsevier B.V.
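Stressor-response approaches typically regress a response variable (e.g. chlorophyll-a) on a nutrient stressor (e.g. total phosphorus) in log-log space and invert the fit at a management target; the sketch below illustrates that workflow on hypothetical lake observations and an assumed target, not values from any cited study.

```python
# Illustrative stressor-response workflow: log-log regression of chlorophyll-a
# on total phosphorus, inverted at a management target to suggest a criterion.
# All lake observations and the target value are hypothetical.
import numpy as np

tp = np.array([8, 12, 20, 35, 60, 90, 140], dtype=float)   # TP (ug/L)
chla = np.array([2.1, 3.0, 5.5, 9.8, 18.0, 26.0, 41.0])    # chl-a (ug/L)

slope, intercept = np.polyfit(np.log10(tp), np.log10(chla), 1)

chla_target = 10.0                                          # management target
tp_criterion = 10 ** ((np.log10(chla_target) - intercept) / slope)
print(f"log10(chl-a) = {slope:.2f}*log10(TP) + {intercept:.2f}; "
      f"suggested TP criterion ~ {tp_criterion:.0f} ug/L")
```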