Methodological Pluralism: The Gold Standard of STEM Evaluation
ERIC Educational Resources Information Center
Lawrenz, Frances; Huffman, Douglas
2006-01-01
Nationally, there is continuing debate about appropriate methods for conducting educational evaluations. The U.S. Department of Education has placed a priority on "scientifically" based evaluation methods and has advocated a "gold standard" of randomized controlled experimentation. The priority suggests that randomized control methods are best,…
Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M
2006-04-01
To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.
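For readers reproducing this kind of comparison, a minimal sketch of the recovery and between-replicate variability summary follows; the replicate counts are hypothetical stand-ins, as the abstract does not give the study's raw data:

```python
# Sketch: summarizing a BioBall-style enumeration comparison, assuming
# hypothetical replicate counts (10 replicates per method, 30 organisms spiked).
import statistics

SPIKE = 30  # organisms per BioBall culture

# Hypothetical colony counts per method (not the study's data).
counts = {
    "Petrifilm":           [29, 30, 28, 31, 29, 30, 28, 29, 30, 29],
    "Pour plate":          [28, 29, 27, 30, 28, 29, 27, 28, 30, 29],
    "Membrane filtration": [22, 25, 18, 27, 20, 24, 19, 26, 21, 23],
}

for method, xs in counts.items():
    mean = statistics.mean(xs)
    cv = statistics.stdev(xs) / mean * 100  # between-replicate variability
    recovery = mean / SPIKE * 100           # percent of spiked organisms found
    print(f"{method:20s} recovery {recovery:5.1f}%  CV {cv:4.1f}%")
```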
ERIC Educational Resources Information Center
Kimball, Steven M.; Milanowski, Anthony
2009-01-01
Purpose: The article reports on a study of school leader decision making that examined variation in the validity of teacher evaluation ratings in a school district that has implemented a standards-based teacher evaluation system. Research Methods: Applying mixed methods, the study used teacher evaluation ratings and value-added student achievement…
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed by clearly defined standard operation procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
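The abstract does not give its validation formulas; assuming the common ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S derived from calibration linearity (σ: residual standard deviation, S: slope), a hedged sketch with made-up calibration data might look like this:

```python
# Hedged example of LOD/LOQ estimated from calibration linearity using the
# common ICH-style formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S.
# Calibration points below are illustrative, not the study's data.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # e.g., pmol injected
signal = np.array([12.1, 23.8, 49.5, 97.0, 196.2])  # fluorescence peak area

S, intercept = np.polyfit(conc, signal, 1)          # linear calibration fit
residuals = signal - (S * conc + intercept)
sigma = residuals.std(ddof=2)                       # residual SD of the fit
print(f"LOD ~ {3.3 * sigma / S:.3f}, LOQ ~ {10 * sigma / S:.3f} (conc units)")
```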
Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang
2013-05-01
To establish a new method for quality evaluation and validate its feasibility by simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factor showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparing the quantitative results of the external standard method and QAMS. No significant differences were found in the quantitative results of the five alkaloids in the 21 batches of S. flavescens determined by the two methods. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
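As an illustration of how QAMS works in general (the abstract's own data are not given; the choice of matrine as marker and all peak areas and concentrations below are assumptions):

```python
# Minimal sketch of QAMS (quantitative analysis of multi-components by a
# single marker), assuming matrine as the single marker; values illustrative.
def correction_factor(area_marker, conc_marker, area_analyte, conc_analyte):
    """Relative correction factor f = (A_s/C_s) / (A_k/C_k), determined once
    from mixed reference standards."""
    return (area_marker / conc_marker) / (area_analyte / conc_analyte)

def qams_concentration(f, area_analyte, area_marker, conc_marker):
    """Concentration of analyte k in a sample using only the marker's
    external calibration: C_k = f * A_k * C_s / A_s."""
    return f * area_analyte * conc_marker / area_marker

# Calibration with reference standards (hypothetical peak areas, mg/mL):
f_oxymatrine = correction_factor(area_marker=1200, conc_marker=0.10,
                                 area_analyte=950, conc_analyte=0.10)
# Sample run: marker quantified externally, analyte via QAMS:
c = qams_concentration(f_oxymatrine, area_analyte=830, area_marker=1100,
                       conc_marker=0.095)
print(f"oxymatrine ~ {c:.4f} mg/mL")
```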
Validation and Verification (V and V) Testing on Midscale Flame Resistant (FR) Test Method
2016-12-16
Validation and verification (V and V) testing of a midscale flame resistant (FR) test method developed by the Natick Soldier Research, Development and Engineering Center (NSRDEC) to complement (not replace) the capabilities of the ASTM F1930 Standard Test Method for Evaluation of Flame Resistant Clothing for Protection against Fire Simulations Using an Instrumented Manikin.
A review of the latest guidelines for NIBP device validation.
Alpert, Bruce S; Quinn, David E; Friedman, Bruce A
2013-12-01
The current ISO Standard is accepted as the National Standard in almost every industrialized nation. An overview of the most recently adopted standards is provided. Standards writing groups, including the Association for the Advancement of Medical Instrumentation (AAMI) Sphygmomanometer Committee and ISO JWG7, are working to expand standardized evaluation methods to include the evaluation of devices intended for use in environments where motion artifact is common. An AAMI task group on noninvasive blood pressure measurement in the presence of motion artifact has published a technical information report containing research and standardized methods for the evaluation of blood pressure device performance in the presence of motion artifact.
Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás
2014-01-01
To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-04
[Table excerpt from a Federal Register list of recognized consensus standards, with recoverable fragments including: evaluation and testing within a risk management process; ASTM E1372-95 (2003) Standard Test Method for Agar Diffusion Cell Culture Screening; and Biological evaluation of medical devices - Part 3: Tests for genotoxicity; with columns for standard title, type of test, and relevant office(s) and division(s).]
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
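The abstract does not spell out its estimation procedure; the sketch below only illustrates the general no-gold-standard idea in the spirit of "regression without truth", under the strong simplifying assumptions that each method's output is linearly related to the truth with Gaussian noise and that the true-value distribution is known. All data are simulated and all numbers are illustrative:

```python
# Illustrative no-gold-standard (NGS) sketch: each method m reports
# a_hat = u_m*a + v_m + noise, and methods are ranked by the noise-to-slope
# ratio sigma_m/u_m without ever using the latent true values.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
MU, TAU, N = 10.0, 2.0, 500                    # assumed true-value distribution
a = rng.normal(MU, TAU, N)                     # latent truth (never used in fit)
u_true, v_true, s_true = [1.0, 0.8, 1.2], [0.5, 2.0, -1.0], [0.5, 1.5, 1.0]
A = np.stack([u * a + v + rng.normal(0, s, N)  # observed estimates, 3 methods
              for u, v, s in zip(u_true, v_true, s_true)])

def nll(p):
    """Negative log-likelihood of the observations after marginalizing the
    latent truth: A_n ~ N(u*MU + v, TAU^2 * u u^T + diag(s^2))."""
    u, v, s = p[0:3], p[3:6], np.abs(p[6:9])
    cov = TAU ** 2 * np.outer(u, u) + np.diag(s ** 2)
    d = A.T - (u * MU + v)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (N * logdet + np.einsum('ni,ij,nj->', d, np.linalg.inv(cov), d))

p0 = np.concatenate([np.ones(3), np.zeros(3), np.ones(3)])
res = minimize(nll, p0, method='Nelder-Mead',
               options={'maxiter': 40000, 'maxfev': 40000})
u_fit, s_fit = res.x[0:3], np.abs(res.x[6:9])
print("estimated sigma/u (lower ranks better):", np.round(s_fit / u_fit, 2))
print("true      sigma/u                     :",
      np.round(np.array(s_true) / np.array(u_true), 2))
```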
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviation predicted from the Horwitz equation]).
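A worked sketch of the HORRAT calculation, assuming the conventional Horwitz equation RSD_R(%) = 2^(1 - 0.5*log10 C) for the predicted relative standard deviation, with C a dimensionless mass fraction:

```python
# HORRAT = found among-laboratory RSD / Horwitz-predicted RSD.
# C is a dimensionless mass fraction (e.g., 1 mg/kg -> C = 1e-6).
import math

def horwitz_rsd_percent(mass_fraction):
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

def horrat(found_rsd_percent, mass_fraction):
    return found_rsd_percent / horwitz_rsd_percent(mass_fraction)

# Example: analyte at 1 mg/kg (C = 1e-6) with an observed among-laboratory
# RSD of 20% -> predicted Horwitz RSD is 16%, HORRAT = 1.25 (values up to
# about 2 are conventionally considered acceptable).
print(round(horrat(20.0, 1e-6), 2))
```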
Jerrold E. Winandy; Douglas Herdman
2003-01-01
The purpose of this work was to evaluate the effects of a new boron-nitrogen, phosphate-free fire-retardant (FR) formulation on the initial strength of No. 1 southern pine 2 by 4 lumber and its potential for in-service thermal degradation. The lumber was evaluated according to Method C of the D 5664 standard test method. The results indicated that for lumber exposed at...
Testing and Evaluation of Passive Radiation Detection Equipment for Homeland Security
West, David L.; Wood, Nathan L.; Forrester, Christina D.
2017-12-01
This article is concerned with test and evaluation methods for passive radiation detection equipment used in homeland security applications. The different types of equipment used in these applications are briefly reviewed and then test and evaluation methods discussed. The primary emphasis is on the test and evaluation standards developed by the American National Standards Institute's N42 committees. Commonalities among the standards are then reviewed as well as examples of unique aspects for specific equipment types. Throughout, sample test configurations and results from testing and evaluation at Oak Ridge National Laboratory are given. The article concludes with a brief discussion of typical tests and evaluations not covered by the N42 standards and some examples of test and evaluation that involve the end users of the equipment.
Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.
IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
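For context, Standard 1366 major-event-day identification is commonly implemented as the "2.5 beta" method; the sketch below shows that calculation on simulated daily SAIDI data (the paper's alternative specifications are not reproduced here):

```python
# Sketch of the IEEE Std 1366 "2.5 beta" method for flagging major event days
# (MEDs): fit a log-normal to daily SAIDI over the assessment history and flag
# days exceeding T_MED = exp(alpha + 2.5*beta). Data here are simulated.
import numpy as np

rng = np.random.default_rng(1)
daily_saidi = rng.lognormal(mean=0.2, sigma=1.0, size=5 * 365)  # ~5 years

logs = np.log(daily_saidi[daily_saidi > 0])  # only days with SAIDI > 0
alpha, beta = logs.mean(), logs.std(ddof=1)
t_med = np.exp(alpha + 2.5 * beta)

major_event_days = daily_saidi > t_med
print(f"T_MED = {t_med:.2f} SAIDI-minutes; flagged {major_event_days.sum()} MEDs")
```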
Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability
NASA Astrophysics Data System (ADS)
Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop
2018-02-01
We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.
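For reference, the hole-expansion ratio that both the conventional and the proposed small-scale tests report is commonly defined as HER(%) = (Df - D0)/D0 x 100; a trivial sketch with illustrative diameters:

```python
# HER(%) = (Df - D0)/D0 * 100, where D0 is the initial punched-hole diameter
# and Df the hole diameter when a through-thickness crack forms.
def hole_expansion_ratio(d0_mm, df_mm):
    return (df_mm - d0_mm) / d0_mm * 100.0

print(f"HER = {hole_expansion_ratio(10.0, 13.7):.1f} %")  # illustrative: 37.0 %
```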
Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.
2012-01-01
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231
Reporting Qualitative Research: Standards, Challenges, and Implications for Health Design.
Peditto, Kathryn
2018-04-01
This Methods column describes the existing reporting standards for qualitative research, their application to health design research, and the challenges to implementation. Intended for both researchers and practitioners, this article provides multiple perspectives on both reporting and evaluating high-quality qualitative research. Two popular reporting standards exist for reporting qualitative research: the Consolidated Criteria for Reporting Qualitative Research (COREQ) and the Standards for Reporting Qualitative Research (SRQR). Though compiled using similar procedures, they differ in their criteria and the methods to which they apply. Creating and applying reporting criteria is inherently difficult due to the undefined and fluctuating nature of qualitative research when compared to quantitative studies. Qualitative research is expansive and occasionally controversial, spanning many different methods of inquiry and epistemological approaches. A "one-size-fits-all" standard for reporting qualitative research can be restrictive, but COREQ and SRQR both serve as valuable tools for developing responsible qualitative research proposals, effectively communicating research decisions, and evaluating submissions. Ultimately, tailoring a set of standards specific to health design research and its frequently used methods would ensure quality research and aid reviewers in their evaluations.
Ma, Weina; Yang, Liu; Lv, Yanni; Fu, Jia; Zhang, Yanmin; He, Langchong
2017-06-23
The equilibrium dissociation constant (K_D) of drug-membrane receptor affinity is the basic parameter that reflects the strength of interaction. The cell membrane chromatography (CMC) method is an effective technique to study the characteristics of drug-membrane receptor affinity. In this study, the K_D value of the CMC relative standard method for the determination of drug-membrane receptor affinity was established to analyze the relative K_D values of drugs binding to the membrane receptors (epidermal growth factor receptor and angiotensin II receptor). The K_D values obtained by the CMC relative standard method had a strong correlation with those obtained by the frontal analysis method. Additionally, the K_D values obtained by the CMC relative standard method correlated with the pharmacological activity of the drug being evaluated. The CMC relative standard method is a convenient and effective method to evaluate drug-membrane receptor affinity.
A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.
Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele
2008-10-01
The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.
Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.
Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël
2016-09-01
A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine sample donor-recipient pairings cross-match tested with the RapidVet-H method gel tube test and compared results with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible.
Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han
2014-01-01
Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Community Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and extended, CCR+, model. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models.
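One layer of such a multilayered validation can be sketched as ordinary XML Schema validation; the file names below are hypothetical, and the study's actual metadata-generated validation format is not reproduced here:

```python
# Conceptual sketch: syntactic validation of a CCR/CCR+ XML instance against
# a schema file (here assumed to be generated from the registered metadata).
# Both file names are hypothetical placeholders.
from lxml import etree

schema = etree.XMLSchema(etree.parse("ccr_plus_validation.xsd"))
doc = etree.parse("patient_record_ccr_plus.xml")

if schema.validate(doc):
    print("syntactic validation passed")
else:
    for err in schema.error_log:
        print(err.line, err.message)
```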
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwagi, Makoto; Garamszeghy, Mike; Lantes, Bertrand
Disposal of low- and intermediate-level activated waste generated at nuclear power plants is being planned or carried out in many countries. The radioactivity concentrations and/or total quantities of long-lived, difficult-to-measure nuclides (DTM nuclides), such as C-14, Ni-63, Nb-94 and α-emitting nuclides, are often restricted by the safety case for a final repository as determined by each country's safety regulations, and these concentrations or amounts are required to be known and declared. With respect to waste contaminated by contact with process water, the Scaling Factor method (SF method), which is empirically based on sampling and analysis data, has been applied as an important method for determining concentrations of DTM nuclides. This method was standardized by the International Organization for Standardization (ISO) and published in 2007 as ISO 21238, 'Scaling factor method to determine the radioactivity of low and intermediate-level radioactive waste packages generated at nuclear power plants' [1]. However, for activated metal waste with comparatively high concentrations of radioactivity, such as may be found in reactor control rods and internal structures, direct sampling and radiochemical analysis methods to evaluate the DTM nuclides are limited by access to the material and potentially high personnel radiation exposure. In this case, theoretical calculation methods in combination with empirical methods based on remote radiation surveys need to be used to best advantage for determining the disposal inventory of DTM nuclides while minimizing exposure to radiation workers. Pursuant to this objective, a standard for the theoretical evaluation of the radioactivity concentration of DTM nuclides in activated waste is under development through ISO TC85/SC5 (ISO Technical Committee 85: Nuclear energy, nuclear technologies, and radiological protection; Subcommittee 5: Nuclear fuel cycle). The project team for this ISO standard was formed in 2011 and is composed of experts from 11 countries. The project team has been conducting technical discussions on theoretical methods for determining concentrations of radioactivity, and has developed the draft International Standard ISO 16966, 'Theoretical activation calculation method to evaluate the radioactivity of activated waste generated at nuclear reactors' [2]. This paper describes the international standardization process developed by the ISO project team, and outlines two theoretical activity evaluation methods: the point method and the range method.
Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment
ERIC Educational Resources Information Center
Nasstrom, Gunilla; Henriksson, Widar
2008-01-01
Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…
Li, Li; Liu, Dong-Jun
2014-01-01
Since 2012, China has been facing haze-fog weather conditions, and haze-fog pollution and PM2.5 have become hot topics. It is very necessary to evaluate and analyze the ecological status of the air environment of China, which is of great significance for environmental protection measures. In this study the current situation of haze-fog pollution in China was analyzed first, and the new Ambient Air Quality Standards were introduced. For the issue of air quality evaluation, a comprehensive evaluation model based on an entropy weighting method and nearest neighbor method was developed. The entropy weighting method was used to determine the weights of indicators, and the nearest neighbor method was utilized to evaluate the air quality levels. Then the comprehensive evaluation model was applied into the practical evaluation problems of air quality in Beijing to analyze the haze-fog pollution. Two simulation experiments were implemented in this study. One experiment included the indicator of PM2.5 and was carried out based on the new Ambient Air Quality Standards (GB 3095-2012); the other experiment excluded PM2.5 and was carried out based on the old Ambient Air Quality Standards (GB 3095-1996). Their results were compared, and the simulation results showed that PM2.5 was an important indicator for air quality and the evaluation results of the new Air Quality Standards were more scientific than the old ones. The haze-fog pollution situation in Beijing City was also analyzed based on these results, and the corresponding management measures were suggested. PMID:25170682
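A minimal sketch of the entropy-weight plus nearest-neighbour evaluation described above, with illustrative grade boundaries and indicator values (not the paper's data, and not the official GB 3095 limits):

```python
# Entropy weights are computed from the indicator matrix, then a sample is
# assigned the air-quality grade of its nearest weighted neighbour among the
# grade-boundary reference rows. All numbers are illustrative.
import numpy as np

# Rows: reference samples for quality grades I..IV; columns: pollutant
# indicators (e.g., SO2, NO2, PM10, PM2.5), in consistent units.
grades = np.array([[20,  40,  50,  35],
                   [60,  80, 150,  75],
                   [115, 180, 250, 115],
                   [150, 280, 420, 150]], float)

def entropy_weights(x):
    p = x / x.sum(axis=0)                                 # normalize indicators
    e = -(p * np.log(p)).sum(axis=0) / np.log(len(x))     # entropy per indicator
    d = 1 - e                                             # diversification degree
    return d / d.sum()

w = entropy_weights(grades)
sample = np.array([90, 120, 200, 95], float)              # one day's measurements
dists = np.sqrt(((grades - sample) ** 2 * w).sum(axis=1)) # weighted distances
print("weights:", np.round(w, 3), "-> grade", ["I", "II", "III", "IV"][dists.argmin()])
```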
40 CFR 80.165 - Certification test procedures and standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., “Standard Test Method for Evaluating Unleaded Automotive Spark-Ignition Engine Fuel for Electronic Port Fuel.... The required test fuel must produce the accumulation of less than 100 mg of intake valve deposits on... Board, “Test Method for Evaluating Port Fuel Injector (PFI) Deposits in Vehicle Engines”, March 1, 1991...
40 CFR 80.165 - Certification test procedures and standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., “Standard Test Method for Evaluating Unleaded Automotive Spark-Ignition Engine Fuel for Electronic Port Fuel.... The required test fuel must produce the accumulation of less than 100 mg of intake valve deposits on... Board, “Test Method for Evaluating Port Fuel Injector (PFI) Deposits in Vehicle Engines”, March 1, 1991...
Development Of Methodologies Using PhabrOmeter For Fabric Drape Evaluation
NASA Astrophysics Data System (ADS)
Lin, Chengwei
Evaluation of fabric drape is important for the textile industry, as it reveals the aesthetics and functionality of cloth and apparel. Although fabric drape measuring methods have been developed over several decades, they are falling behind the industry's need for fast product development. To meet this requirement, it is necessary to develop an effective and reliable method to evaluate fabric drape. The purpose of the present study is to determine whether the PhabrOmeter can be applied to fabric drape evaluation. The PhabrOmeter is a fabric sensory performance evaluating instrument developed to provide fast and reliable quality testing results. This study sought to determine the relationship between fabric drape and other fabric attributes. In addition, a series of conventional methods, including AATCC, ASTM and ISO standards, was used to characterize the fabric samples. All the data were compared and analyzed using linear correlation. The results indicate that the PhabrOmeter is a reliable and effective instrument for fabric drape evaluation. In addition, some effects, including fabric structure and testing direction, were considered to examine their impact on fabric drape.
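As an illustration of the linear-correlation comparison, a minimal sketch with synthetic numbers (the study's measurements are not reproduced here):

```python
# Pearson correlation between a PhabrOmeter-style drape score and a
# conventional drape coefficient across fabric samples (synthetic data).
import numpy as np

phabrometer_score = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.6])
drape_coefficient = np.array([45., 62., 40., 71., 55., 66.])  # %, conventional
r = np.corrcoef(phabrometer_score, drape_coefficient)[0, 1]
print(f"Pearson r = {r:.2f}")
```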
NASA Astrophysics Data System (ADS)
Jin, Yang; Ciwei, Gao; Jing, Zhang; Min, Sun; Jie, Yu
2017-05-01
The selection and evaluation of priority domains in Global Energy Internet standard development will help to overcome the limits of national investment, so that priority can be given to standardizing the technical areas of highest urgency and feasibility. Therefore, in this paper, a Delphi survey process based on technology foresight is put forward, an evaluation index system for priority domains is established, and the index calculation method is determined. Afterwards, statistical methods are used to evaluate the alternative domains. Finally, the top four priority domains are determined as follows: Interconnected Network Planning and Simulation Analysis, Interconnected Network Safety Control and Protection, Intelligent Power Transmission and Transformation, and Internet of Things.
ERIC Educational Resources Information Center
Lin, Jie
2006-01-01
The Bookmark standard-setting procedure was developed to address the perceived problems with the most popular method for setting cut-scores: the Angoff procedure (Angoff, 1971). The purposes of this article are to review the Bookmark procedure and evaluate it in terms of Berk's (1986) criteria for evaluating cut-score setting methods. The…
Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.
This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.
Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia
2017-08-09
Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was used for a blind test after fitting, for which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode of grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.
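The patented feature extractor and network are not public here; the sketch below only illustrates the shape of such a pipeline, with placeholder features, placeholder labels, and an off-the-shelf classifier:

```python
# Rough sketch of the described pipeline: image-derived features feeding a
# supervised neural network, with 15% held out for a blind test as in the
# study design. Features and labels are random placeholders, not real data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(482, 36))    # stand-in for 36 image features per embryo
y = rng.integers(1, 4, size=482)  # IETS-style grades 1..3 (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(24,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"blind-test accuracy: {clf.score(X_te, y_te):.3f}")
```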
Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla
2018-05-01
Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results performed with four randomly selected glucometers on subjects with diabetes and control subjects versus standard wet chemistry (hexokinase) methods in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive correlation (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, the linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the four PoCG devices showed poor agreement with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
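The ISO 15197:2013 system-accuracy criterion applied in the study is commonly stated as: at least 95% of meter results must lie within ±15 mg/dl of the reference for reference values below 100 mg/dl, and within ±15% otherwise. A minimal sketch with illustrative paired values:

```python
# ISO 15197:2013-style accuracy check on (reference, meter) pairs in mg/dl.
# The pairs below are illustrative, not the study's measurements.
def within_iso_2013(reference_mgdl, meter_mgdl):
    tol = 15.0 if reference_mgdl < 100 else 0.15 * reference_mgdl
    return abs(meter_mgdl - reference_mgdl) <= tol

pairs = [(85, 97), (140, 150), (250, 290), (60, 52), (180, 195)]
hits = sum(within_iso_2013(r, m) for r, m in pairs)
verdict = "pass" if hits / len(pairs) >= 0.95 else "fail"
print(f"{hits}/{len(pairs)} within limits -> {verdict}")
```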
Valente, Marta Sofia; Pedro, Paulo; Alonso, M Carmen; Borrego, Juan J; Dionísio, Lídia
2010-03-01
Monitoring the microbiological quality of water used for recreational activities is very important to human public health. Although the sanitary quality of recreational marine waters can be evaluated by standard methods, these are time-consuming and need confirmation. For these reasons, faster and more sensitive methods, such as defined substrate-based technology, have been developed. In the present work, we compared the standard method of membrane filtration, using Tergitol-TTC agar for total coliforms and Escherichia coli and Slanetz and Bartley agar for enterococci, with the IDEXX defined substrate technology for these faecal pollution indicators to determine the microbiological quality of natural recreational waters. The ISO 17994:2004 standard was used to compare these methods. The IDEXX test for total coliforms and E. coli (Colilert) showed higher values than those obtained by the standard method. The Enterolert test, for the enumeration of enterococci, showed lower values when compared with the standard method. It may be concluded that more studies to evaluate the precision and accuracy of the rapid tests are required in order to apply them for routine monitoring of marine and freshwater recreational bathing areas. The main advantages of these methods are that they are more specific, feasible and simpler than the standard methodology.
The Explication of Quality Standards in Self-Evaluation
ERIC Educational Resources Information Center
Bronkhorst, Larike H.; Baartman, Liesbeth K. J.; Stokking, Karel M.
2012-01-01
Education aiming at students' competence development asks for new assessment methods. The quality of these methods needs to be assured using adapted quality criteria and accompanying standards. As such standards are not widely available, this study sets out to examine what level of compliance with quality criteria stakeholders consider…
Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.
Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai
2015-01-01
The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion.
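A minimal sketch of the estimation idea: average the spot Na/creatinine ratios and scale by an estimated daily creatinine excretion. The ~20 mg/kg/day creatinine figure and all numbers are illustrative stand-ins, not the study's equations:

```python
# Estimate 24-h sodium excretion from scheduled spot urine samples by scaling
# the mean Na/creatinine ratio with an estimated daily creatinine excretion.
def estimated_24h_sodium_mmol(spot_na_mmol_l, spot_creat_mg_dl, weight_kg,
                              creat_excretion_mg_per_kg=20.0):
    # Na/Cr ratio per spot, in mmol per mg of creatinine (mg/dL -> mg/L: *10).
    ratios = [na / (cr * 10.0)
              for na, cr in zip(spot_na_mmol_l, spot_creat_mg_dl)]
    mean_ratio = sum(ratios) / len(ratios)
    est_creat_mg = creat_excretion_mg_per_kg * weight_kg
    return mean_ratio * est_creat_mg

# Four scheduled spots (sodium mmol/L, creatinine mg/dL) for a 70-kg subject:
print(round(estimated_24h_sodium_mmol([120, 95, 140, 110],
                                      [110, 80, 150, 100], 70.0), 1), "mmol/day")
```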
Dong, Ming; Fisher, Carolyn; Añez, Germán; Rios, Maria; Nakhasi, Hira L.; Hobson, J. Peyton; Beanan, Maureen; Hockman, Donna; Grigorenko, Elena; Duncan, Robert
2016-01-01
Aims: To demonstrate standardized methods for spiking pathogens into human matrices for evaluation and comparison among diagnostic platforms. Methods and Results: This study presents detailed methods for spiking bacteria or protozoan parasites into whole blood and virus into plasma. Proper methods must start with a documented, reproducible pathogen source followed by steps that include standardized culture, preparation of cryopreserved aliquots, quantification of the aliquots by molecular methods, production of sufficient numbers of individual specimens and testing of the platform with multiple mock specimens. Results are presented following the described procedures that showed acceptable reproducibility comparing in-house real-time PCR assays to a commercially available multiplex molecular assay. Conclusions: A step-by-step procedure has been described that can be followed by assay developers who are targeting low prevalence pathogens. Significance and Impact of Study: The development of diagnostic platforms for detection of low prevalence pathogens such as biothreat or emerging agents is challenged by the lack of clinical specimens for performance evaluation. This deficit can be overcome using mock clinical specimens made by spiking cultured pathogens into human matrices. To facilitate evaluation and comparison among platforms, standardized methods must be followed in the preparation and application of spiked specimens. PMID:26835651
NASA Astrophysics Data System (ADS)
Kumar, Harish
The present paper discusses the procedure for evaluating the best measurement capability of a force calibration machine. The best measurement capability of a force calibration machine is evaluated by comparison with force standard machines through precision force transfer standards. The force transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed for a force calibration machine of 200 kN capacity. Different force transfer standards of nominal capacity 20 kN, 50 kN and 200 kN are used. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.
ORE's GENeric Evaluation SYStem: GENESYS 1988-89.
ERIC Educational Resources Information Center
Baenen, Nancy; And Others
GENESYS--GENeric Evaluation SYStem--is a method of streamlining data collection and evaluation through the use of computer technology. GENESYS has allowed the Office of Research and Evaluation (ORE) of the Austin (Texas) Independent School District to evaluate a multitude of contrasting programs with limited resources. By standardizing methods and…
Zhao, L; Yan, Y J
2017-11-20
Objective: To investigate the problems encountered in applying the standard for the diagnosis of chronic obstructive pulmonary disease (COPD) caused by occupational irritant chemicals (hereinafter referred to as the standard), to provide a reference for the revision of the new standard, to reduce the number of missed patients with occupational COPD, and to remove workers who suffer from chronic respiratory diseases due to long-term exposure to toxicants from the offending working environment, thereby slowing the progression of the disease. Methods: The Delphi expert survey method was used. After review by senior experts, expert advice was sought on the problems encountered in the evaluation of GBZ 237-2011, the standard for the diagnosis of chronic obstructive pulmonary disease caused by occupational irritant chemicals, and the problems encountered during the clinical implementation of the standard promulgated in 2011 are presented. Results: The Delphi expert survey found that experts agree on the content evaluation and implementation evaluation of the standard, but the operational evaluation of the standard is disputed. Based on clinical experience, the experts believe that the range of occupational irritant gases should be expanded, and that smoking history, length-of-service determination and occupational exposure history pose operational problems during diagnosis. Conclusions: Since the promulgation in 2011 of the criteria for the diagnosis of chronic obstructive pulmonary disease caused by occupational irritant chemicals, there have been problems in the implementation process, which have left many workers occupationally exposed to irritant gases suffering from occupational chronic respiratory diseases without a definitive diagnosis.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-29
[Table excerpt from a Federal Register list of fire test standards; recoverable fragments include: Ignitibility of Exterior Wall Assemblies Using a Radiant Heat Energy Source; NFPA 269 Standard Test Method ...; Heat and Visible Smoke Release Rates for Materials and Products Using an Oxygen Consumption ...; Plastic Insulation; NFPA 285 Standard Fire Test Method for Evaluation of Fire Propagation ...]
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool.
A Survey of Practice Patterns in Concussion Assessment and Management.
Ferrara, Michael S.; McCrea, Michael; Peterson, Connie L.; Guskiewicz, Kevin M.
2001-06-01
OBJECTIVES: To identify methods used by athletic trainers to assess concussions and the use of that information to assist in return-to-play decisions and to determine athletic trainers' familiarity with new standardized methods of concussion assessment. DESIGN AND SETTING: A 21-item questionnaire was distributed to attendees of a minicourse at the 1999 National Athletic Trainers' Association Annual Meeting and Clinical Symposia entitled "Use of Standardized Assessment of Concussion (SAC) in the Immediate Sideline Evaluation of Injured Athletes." SUBJECTS: A total of 339 valid surveys were returned by the attendees of the minicourse. MEASUREMENTS: We used frequency analysis and descriptive statistics. RESULTS: Clinical examination (33%) and a symptom checklist (15.3%) were the most common evaluative tools used to assess concussions. The Colorado Guidelines (28%) were used more than other concussion management guidelines. Athletic trainers (34%) and team physicians (40%) were primarily responsible for making decisions regarding return to play. A large number of respondents (83.5%) believed that the use of a standardized method of concussion assessment provided more information than routine clinical and physical examination alone. CONCLUSIONS: Athletic trainers are using a variety of clinical tools to evaluate concussions in athletes. Clinical evaluation and collaboration with physicians still appear to be the primary methods used for return-to-play decisions. However, athletic trainers are beginning to use standardized methods of concussion to evaluate these injuries and to assist them in assessing the severity of injury and deciding when it is safe to return to play.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion existing in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically with high accuracy. Subsequently, in order to evaluate surface defects digitally against the American military standard MIL-PRF-13830B using the surface defect information obtained from the SDES, a standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
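A minimal sketch of the overlap-of-weight-regions judgment described above, assuming square weight regions of unit weight (the paper's exact region shape and weights are not given here); a concentration is flagged whenever two regions overlap:

```python
# Sketch under assumptions: square weight regions, unit weights.
import numpy as np

def concentration_flag(defects, shape, half_width):
    """defects: list of (row, col) centers; returns True if any two
    weight regions overlap, signalling a local defect concentration."""
    acc = np.zeros(shape, dtype=int)
    for r, c in defects:
        r0, r1 = max(r - half_width, 0), min(r + half_width + 1, shape[0])
        c0, c1 = max(c - half_width, 0), min(c + half_width + 1, shape[1])
        acc[r0:r1, c0:c1] += 1           # add this defect's weight region
    return bool((acc > 1).any())         # overlap => concentration

print(concentration_flag([(10, 10), (14, 12)], (100, 100), 5))  # True
print(concentration_flag([(10, 10), (80, 80)], (100, 100), 5))  # False
```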
Signori, Cácia; Collares, Kauê; Cumerlato, Catarina B F; Correa, Marcos B; Opdam, Niek J M; Cenci, Maximiliano S
2018-04-01
The aim of this study was to investigate the validity of intraoral digital photography for the evaluation of dental restorations. Intraoral photographs of anterior and posterior restorations were classified based on FDI criteria according to the need for intervention: no intervention, repair or replacement. Evaluations were performed by an experienced expert in restorative dentistry (gold standard evaluator) and 3 trained dentists (consensus). Clinical inspection was the reference standard method. The prevalence of failures was explored. Cohen's kappa statistic was used. Validity was assessed by sensitivity, specificity, likelihood ratios and predictive values. A higher prevalence of restorations requiring intervention was identified by intraoral photography (17.7%) than by clinical evaluation (14.1%). Moderate agreement in the diagnosis of total failures was shown between the methods for the gold standard evaluator (kappa = 0.51) and the consensus of evaluators (kappa = 0.53). The gold standard evaluator and consensus showed substantial and moderate agreement for posterior restorations (kappa = 0.61; 0.59), and fair and moderate agreement for anterior restorations (kappa = 0.36; 0.43), respectively. The accuracy was 84.8% in the assessment by intraoral photographs. Sensitivity and specificity values of 87.5% and 89.3% were found. Within the limits of this study, assessment of digital photographs taken with an intraoral camera is a valid indirect diagnostic method for the evaluation of dental restorations, mainly in posterior teeth. This method should be employed taking into account the higher detection of defects provided by the images, which are not always clinically relevant. Copyright © 2018 Elsevier Ltd. All rights reserved.
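The agreement and validity statistics reported in studies of this kind can be computed from a 2x2 table; the sketch below is an illustration with made-up counts, not the study's data:

```python
# Cohen's kappa, sensitivity and specificity from a hypothetical 2x2 table
# (photograph vs clinical reference standard).
tp, fp, fn, tn = 35, 6, 5, 50   # hypothetical counts, not from the study
n = tp + fp + fn + tn
po = (tp + tn) / n                                            # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"kappa={kappa:.2f} Se={sensitivity:.1%} Sp={specificity:.1%}")
```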
A new method for calculating ecological flow: Distribution flow method
NASA Astrophysics Data System (ADS)
Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei
2018-04-01
A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed for studying the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM, index and grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, a flow evaluation method, and calculated fish ecological flows. Results show that the DFM accounts for the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and of uneven flow distribution during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
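The DFM's full procedure is not spelled out in the abstract; as a hedged illustration of its kernel-density building block only, the sketch below estimates the probability density of a synthetic daily-flow series and reads off the modal flow as a candidate ecological-flow index:

```python
# Illustrative only: a Gaussian KDE over daily flows; the paper's actual
# "broadening" kernel and index definition may differ.
import numpy as np
from scipy.stats import gaussian_kde

flows = np.random.default_rng(1).lognormal(mean=9.0, sigma=0.5, size=3650)  # synthetic daily flows (m^3/s)
kde = gaussian_kde(flows)                    # kernel density estimate of the flow distribution
grid = np.linspace(flows.min(), flows.max(), 1000)
modal_flow = grid[np.argmax(kde(grid))]      # most probable flow over the record
print(f"modal flow: {modal_flow:.0f} m^3/s")
```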
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
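A minimal life-cycle cost sketch in the spirit of the three steps above; the discount rate, escalation rate, study period and dollar values are hypothetical placeholders, not DOE parameters:

```python
# Net present value of energy cost savings minus the incremental first cost;
# a code change is cost-effective when this is positive.
def net_lcc_savings(annual_energy_savings, incremental_cost,
                    years=30, discount_rate=0.03, escalation=0.01):
    """Present value of escalating annual savings minus added first cost."""
    pv = sum(annual_energy_savings * (1 + escalation) ** t / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - incremental_cost

print(net_lcc_savings(annual_energy_savings=120.0, incremental_cost=1500.0) > 0)
```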
ERIC Educational Resources Information Center
Patalino, Marianne
Problems in current course evaluation methods are discussed and an alternative method is described for the construction, analysis, and interpretation of a test to evaluate instructional programs. The method presented represents a different approach to the traditional overreliance on standardized achievement tests and the total scores they provide.…
Stan T. Lebow; Patricia K. Lebow; Kolby C. Hirth
2017-01-01
Current standardized methods are not well-suited for estimating in-service preservative leaching from treated wood products. This study compared several alternative leaching methods to a commonly used standard method, and to leaching under natural exposure conditions. Small blocks or lumber specimens were pressure treated with a wood preservative containing borax and...
A comparison of in vitro cytotoxicity assays in medical device regulatory studies.
Liu, Xuemei; Rodeheaver, Denise P; White, Jeffrey C; Wright, Ann M; Walker, Lisa M; Zhang, Fan; Shannon, Stephen
2018-06-06
Medical device biocompatibility testing is used to evaluate the risk of adverse effects on tissues from exposure to leachates/extracts. A battery of tests is typically recommended in accordance with regulatory standards to determine if the device is biocompatible. In vitro cytotoxicity, a key element of the standards, is a required endpoint for all types of medical devices. Each validated cytotoxicity method has different methodology and acceptance criteria that could influence the selection of a specific test. In addition, some guidances are more specific than others as to the recommended test methods. For example, the International Organization for Standardization (ISO) cites preference for quantitative methods (e.g., tetrazolium (MTT/XTT), neutral red (NR), or colony formation assays (CFA)) over qualitative methods (e.g., elution, agar overlay/diffusion, or direct), while a recent ISO standard for contact lens/lens care solutions specifically requires a qualitative direct test. Qualitative methods are described in the United States Pharmacopeia (USP), while quantitative CFAs are listed in Japanese guidance. The aim of this review is to compare the methodologies, such as test article preparation, test conditions, and acceptance criteria, for six cytotoxicity methods recommended in regulatory standards in order to inform decisions on which method(s) to select during the medical device safety evaluation. Copyright © 2018. Published by Elsevier Inc.
Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument. The determined Cf for the real-time PCR instrument was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. The trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDr), respectively. The determined biases and the RSDr values were less than 30% and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method would thus be applicable for practical analyses for the detection and quantification of MON87701.
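The quantitation step in event-specific real-time PCR methods of this family typically divides the event/endogenous copy-number ratio by the conversion factor; a hedged sketch (the paper's exact equation is not reproduced here) using the reported Cf of 1.24:

```python
# Hedged illustration of the usual conversion-factor quantitation;
# copy numbers below are invented for the example.
def gmo_percent(event_copies, endogenous_copies, cf=1.24):
    """GMO content (%) from measured copy numbers and conversion factor Cf."""
    return (event_copies / endogenous_copies) / cf * 100.0

print(f"{gmo_percent(62.0, 10000.0):.2f}% GM soybean")  # illustrative copies
```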
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new uniform sampling method is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling-site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites by row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shape and size by four sampling methods. Gray correlation analysis was adopted for a comprehensive evaluation against the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate by row and column was infinite, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrate that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
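Gray (grey) correlation analysis, the comprehensive-evaluation technique named above, can be sketched as follows; rho = 0.5 is the customary resolution coefficient, and the scores are hypothetical, not the study's:

```python
# Hedged sketch of grey relational analysis (GRA).
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """reference, candidates: arrays of normalized index values in [0, 1]."""
    delta = np.abs(candidates - reference)          # deviation sequences
    lo, hi = delta.min(), delta.max()
    xi = (lo + rho * hi) / (delta + rho * hi)       # relational coefficients
    return xi.mean(axis=1)                          # one grade per candidate

ref = np.array([1.0, 1.0, 1.0, 1.0])                # ideal (standard) method
methods = np.array([[0.9, 0.95, 0.99, 0.98],
                    [0.6, 0.70, 0.80, 0.75]])
print(grey_relational_grade(ref, methods))          # higher = closer to standard
```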
ERIC Educational Resources Information Center
Alowaydhi, Wafa Hafez
2016-01-01
The current study aimed at standardizing the program of learning Arabic for non-native speakers in Saudi Electronic University according to certain standards of total quality. To achieve its purpose, the study adopted the descriptive analytical method. The author prepared a measurement tool for evaluating the electronic learning programs in light…
Gold-standard evaluation of a folksonomy-based ontology learning model
NASA Astrophysics Data System (ADS)
Djuana, E.
2018-03-01
Folksonomy, as one result of the collaborative tagging process, has been acknowledged for its potential in improving categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as differing levels of abstraction (the generality problem). To maximize its potential, some methods for associating folksonomy tags with semantics and structural relationships have been proposed, such as ontology learning methods. This paper evaluates our previous work in ontology learning according to a gold-standard evaluation approach, in comparison to a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, as previously validated using a task-based evaluation approach.
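A common form of gold-standard ontology evaluation compares learned relations against a hand-built reference; the toy sketch below (the paper's exact metrics are not given here) computes precision and recall of learned taxonomic pairs:

```python
# Toy gold-standard comparison: learned (child, parent) pairs vs a reference.
gold = {("cat", "animal"), ("dog", "animal"), ("rose", "plant")}
learned = {("cat", "animal"), ("dog", "pet"), ("rose", "plant")}

tp = len(gold & learned)            # relations recovered correctly
precision = tp / len(learned)
recall = tp / len(gold)
print(f"P={precision:.2f} R={recall:.2f}")
```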
Sabzghabaei, Foroogh; Salajeghe, Mahla; Soltani Arabshahi, Seyed Kamran
2017-01-01
Background: In this study, ambulatory care training in Firoozgar hospital was evaluated based on Iranian national standards of undergraduate medical education related to ambulatory education, using the Baldrige Excellence Model. Moreover, some suggestions were offered to promote education quality given the current condition of ambulatory education in Firoozgar hospital and the national standards, using the gap analysis method. Methods: This descriptive analytic study was a kind of evaluation research performed using the standard checklists published by the office of the undergraduate medical education council. Data were collected through surveying documents, interviewing, and observing the processes based on the Baldrige Excellence Model. After confirming the validity and reliability of the checklists, we evaluated the establishment level of the national standards of undergraduate medical education in the clinics of this hospital in the 4 following domains: educational program, evaluation, training and research resources, and faculty members. Data were analyzed according to the national standards of undergraduate medical education related to ambulatory education and the Baldrige table for scoring. Finally, the quality level of the current condition was determined as very appropriate, appropriate, medium, weak, or very weak. Results: In the educational program domain, 62% of the standards were appropriate; in evaluation, 48%; in training and research resources, 46%; in faculty members, 68%; and overall, 56%. Conclusion: The most successful domains were educational program and faculty members, but the evaluation and the training and research resources domains had a medium performance. Some domains and indicators were determined to be weak and their quality needed to be improved, so it is suggested to provide the necessary facilities and improvements by attending to the quality level of the national standards of ambulatory education. PMID:29951400
The need for performance criteria in evaluating the durability of wood products
Stan Lebow; Bessie Woodward; Patricia Lebow; Carol Clausen
2010-01-01
Data generated from wood-product durability evaluations can be difficult to interpret. Standard methods used to evaluate the potential long-term durability of wood products often provide little guidance on interpretation of test results. Decisions on acceptable performance for standardization and code compliance are based on the judgment of reviewers or committees....
Kim, E J; Utterback, P L; Applegate, T J; Parsons, C M
2011-11-01
The objective of this study was to evaluate and compare amino acid digestibility of several feedstuffs using 2 commonly accepted methods: the precision-fed cecectomized rooster assay (PFR) and the standardized ileal amino acid assay (SIAAD). Six corn, 6 corn distillers dried grains with or without solubles (DDGS/DDG), one wet distillers grains, one condensed solubles, 2 meat and bone meal (MBM) and a poultry byproduct meal were evaluated. Due to insufficient amounts, the wet distillers grains and condensed solubles were only evaluated in roosters. Standardized amino acid digestibility varied among the feed ingredients and among samples of the same ingredient for both methods. For corn, there were generally no differences in amino acid digestibility between the 2 methods. When differences did occur, there was no consistent pattern among the individual amino acids and methods. Standardized amino acid digestibility was not different between the 2 methods for 4 of the DDG samples; however, the PFR yielded higher digestibility values for a high protein DDG and a conventionally processed DDGS. The PFR yielded higher amino acid digestibility values than the SIAAD for several amino acids in 1 MBM and the poultry byproduct meal, but it yielded lower digestibility values for the other MBM. Overall, there were no consistent differences between methods for amino acid digestibility values. In conclusion, the PFR and SIAAD methods are acceptable for determining amino acid digestibility. However, these procedures do not always yield similar results for all feedstuffs evaluated. Thus, further studies are needed to understand the underlying causes in this variability.
The current benchmark method for detecting Cryptosporidium oocysts in water is the U.S. Environmental Protection Agency (U.S. EPA) Method 1623. Studies evaluating this method report that recoveries are highly variable and dependent upon laboratory, water sample, and analyst. Ther...
Costing evidence for health care decision-making in Austria: A systematic review
Mayer, Susanne; Kiss, Noemi; Łaszewska, Agata; Simon, Judit
2017-01-01
Background With rising healthcare costs comes an increasing demand for evidence-informed resource allocation using economic evaluations worldwide. Furthermore, standardization of costing and reporting methods both at international and national levels are imperative to make economic evaluations a valid tool for decision-making. The aim of this review is to assess the availability and consistency of costing evidence that could be used for decision-making in Austria. It describes systematically the current economic evaluation and costing studies landscape focusing on the applied costing methods and their reporting standards. Findings are discussed in terms of their likely impacts on evidence-based decision-making and potential suggestions for areas of development. Methods A systematic literature review of English and German language peer-reviewed as well as grey literature (2004–2015) was conducted to identify Austrian economic analyses. The databases MEDLINE, EMBASE, SSCI, EconLit, NHS EED and Scopus were searched. Publication and study characteristics, costing methods, reporting standards and valuation sources were systematically synthesised and assessed. Results A total of 93 studies were included. 87% were journal articles, 13% were reports. 41% of all studies were full economic evaluations, mostly cost-effectiveness analyses. Based on relevant standards the most commonly observed limitations were that 60% of the studies did not clearly state an analytical perspective, 25% of the studies did not provide the year of costing, 27% did not comprehensively list all valuation sources, and 38% did not report all applied unit costs. Conclusion There are substantial inconsistencies in the costing methods and reporting standards in economic analyses in Austria, which may contribute to a low acceptance and lack of interest in economic evaluation-informed decision making. To improve comparability and quality of future studies, national costing guidelines should be updated with more specific methodological guidance and a national reference cost library should be set up to allow harmonisation of valuation methods. PMID:28806728
Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd
2007-01-01
Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the new double row equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double row equivalent method demonstrated increased survival as well as ultimate strength at 415 N compared to the other test groups, with contact area and pressure equivalent to standard double row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.
Hansen, William B; Derzon, James H; Reese, Eric L
2014-06-01
We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.
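One way to read "standardized proximities having values between 0 and 1" is a range-scaled distance over the six summary measures; the sketch below is a single hypothetical variant, not one of the paper's five methods specifically:

```python
# Hedged sketch: proximity of an index group to each archived evaluation.
import numpy as np

def standardized_proximity(a, b, mins, maxs):
    """a, b: vectors of six pretest measures for two evaluation groups."""
    scaled = np.abs(a - b) / (maxs - mins)        # per-measure distance in [0, 1]
    return 1.0 - scaled.mean()                    # 1 = identical, 0 = maximally far

rng = np.random.default_rng(2)
db = rng.uniform(0, 1, size=(802, 6))             # stand-in for the 802 evaluations
index = rng.uniform(0, 1, size=6)                 # a new local evaluation
prox = np.array([standardized_proximity(index, row, 0.0, 1.0) for row in db])
print((prox > 0.9).sum(), "candidate comparator groups")  # adjustable limit
```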
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
A Learner-Centered Grading Method Focused on Reaching Proficiency with Course Learning Outcomes
ERIC Educational Resources Information Center
Toledo, Santiago; Dubas, Justin M.
2017-01-01
Getting students to use grading feedback as a tool for learning is a continual challenge for educators. This work proposes a method for evaluating student performance that provides feedback to students based on standards of learning dictated by clearly delineated course learning outcomes. This method combines elements of standards-based grading…
[Sampling methods for PM2.5 from stationary sources: a review].
Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming
2014-05-01
The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emission from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and relevant international standards were reviewed in this study, including methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. The advantages and disadvantages of these sampling methods were discussed. For environmental management, in-flue-gas sampling methods such as the impactor and virtual impactor were suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling total PM2.5 should be established.
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
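A hedged sketch of LT standardization for a logistic model, together with the relevant-range variant the authors advocate; the OE computation is omitted, and the data are simulated:

```python
# Latent-theoretical (LT) standardization for logistic regression:
# beta_std = beta * scale(x) / SD(y*), with var(y*) = var(Xb) + pi^2/3.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)            # Gaussian predictor
x2 = rng.exponential(size=n)       # skewed (non-Gaussian) predictor
eta = 0.8 * x1 - 0.5 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.Logit(y, X).fit(disp=False)
b = fit.params[1:]                 # slopes for x1, x2

lin_pred = X @ fit.params
sd_latent = np.sqrt(lin_pred.var() + np.pi**2 / 3)   # logit link
beta_lt = b * np.array([x1.std(), x2.std()]) / sd_latent

# Relevant-range alternative: replace SDs with substantively chosen ranges,
# argued to be less biased when predictors are non-Gaussian; observed ranges
# stand in here as placeholders.
beta_range = b * np.array([np.ptp(x1), np.ptp(x2)]) / sd_latent
print(beta_lt, beta_range)
```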
ERIC Educational Resources Information Center
Lewy, Colleen; Sells, C. Wayne; Gilhooly, Jennifer; McKelvey, Robert
2009-01-01
Objective: The authors aim to determine whether pediatric residents used DSM-IV criteria to diagnose major depressive disorder and how this related to residents' confidence in diagnosis and treatment skills before and after clinical training with depressed adolescents. Methods: Pediatric residents evaluated two different standardized patients…
Care management program evaluation: constituents, conflicts, and moves toward standardization.
Long, D Adam; Perry, Theodore L; Pelletier, Kenneth R; Lehman, Gregg O
2006-06-01
Care management program evaluations bring together constituents from finance, medicine, and social sciences. The differing assumptions and scientific philosophies that these constituents bring to the task often lead to frustrations and even contentions. Given the forms and variations of care management programs, the difficulty associated with program outcomes measurement should not be surprising. It is no wonder then that methods for clinical and economic evaluations of program efficacy continue to be debated and have yet to be standardized. We describe these somewhat hidden processes, examine where the industry stands, and provide recommendations for steps to standardize evaluation methodology.
Measures of fish behavior as indicators of sublethal toxicosis during standard toxicity tests
Little, E.E.; DeLonay, A.J.
1996-01-01
Behavioral functions essential for growth and survival can be dramatically altered by sublethal exposure to toxicants. Measures of these behavioral responses are effective in detecting adverse effects of sublethal contaminant exposure. Behavioral responses of fishes can be qualitatively and quantitatively evaluated during routine toxicity tests. At selected intervals of exposure, qualitative evaluations are accomplished through direct observations, whereas video recordings are used for quantitative evaluations. Standardized procedures for behavioral evaluation are readily applicable to different fish species and provide rapid, sensitive, and ecologically relevant assessments of sublethal exposure. The methods are readily applied to standardized test protocols.
CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.
Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng
2017-01-01
Standardized terminology is the prerequisite of data exchange in analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidence model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mappings to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping result and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results mapping terminologies.
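One simple reading of such a confidence model is vote-share aggregation over crowdsourced mappings; the sketch below is illustrative only, with hypothetical terms and codes:

```python
# Illustrative aggregation: score each candidate mapping by its vote share.
from collections import Counter

def aggregate_mappings(submissions):
    """submissions: list of (local_term, icd10_code) pairs from organizations."""
    by_term = {}
    for term, code in submissions:
        by_term.setdefault(term, Counter())[code] += 1
    result = {}
    for term, votes in by_term.items():
        code, n = votes.most_common(1)[0]
        result[term] = (code, n / sum(votes.values()))  # mapping + confidence
    return result

subs = [("heart attack", "I21"), ("heart attack", "I21"), ("heart attack", "I25")]
print(aggregate_mappings(subs))  # {'heart attack': ('I21', 0.666...)}
```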
Orthoclinostatic test as one of the methods for evaluating the human functional state
NASA Technical Reports Server (NTRS)
Doskin, V. A.; Gissen, L. D.; Bomshteyn, O. Z.; Merkin, E. N.; Sarychev, S. B.
1980-01-01
The possible use of different methods to evaluate autonomic regulation in hygienic studies was examined. The simplest and most objective tests were selected. It is shown that the use of the optimized standards not only makes it possible to detect unfavorable shifts earlier, but also permits a quantitative characterization of the degree of impairment in the state of the organism. Precise interpretation of the observed shifts is possible. Results indicate that the standards can serve as one of the criteria for evaluating the state and can be widely used in hygienic practice.
Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then evaluated by testi...
NASA Astrophysics Data System (ADS)
da Silva Oliveira, C. I.; Martinez-Martinez, D.; Al-Rjoub, A.; Rebouta, L.; Menezes, R.; Cunha, L.
2018-04-01
In this paper, we present a statistical method for evaluating the degree of transparency of a thin film. To do so, the color coordinates are measured on different substrates and their standard deviation is evaluated. For low values, the color depends on the film and not on the substrate, and intrinsic colors are obtained. In contrast, transparent films lead to high values of the standard deviation, since the color coordinates depend on the substrate. Between both extremes, colored films with a certain degree of transparency can be found. This method allows an objective and simple evaluation of the transparency of any film, improving on subjective visual inspection and avoiding the thickness problems related to evaluation by optical spectroscopy. Zirconium oxynitride films deposited on three different substrates (Si, steel and glass) are used to test the validity of this method, whose results have been validated with optical spectroscopy and agree with the visual impression of the samples.
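A minimal sketch of the criterion: measure the same film's color coordinates (e.g., CIELAB) on several substrates and threshold their spread; the numbers and the threshold below are made-up placeholders, not the paper's values:

```python
# Spread of color coordinates across substrates as a transparency indicator.
import numpy as np

lab = np.array([                 # rows: Si, steel, glass (hypothetical L*a*b*)
    [52.1, -3.4, 12.0],
    [51.8, -3.1, 11.6],
    [52.4, -3.6, 12.3],
])
spread = lab.std(axis=0).mean()  # low spread -> color is intrinsic to the film
print("intrinsic color" if spread < 1.0 else "substrate-dependent (transparent)")
```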
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
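Once the NGS estimation step has produced, for each method, a slope and a noise standard deviation relating estimates to the (unknown) true values, ranking by the noise-to-slope ratio is straightforward; the values below are hypothetical:

```python
# Sketch under a strong assumption: (slope b, noise SD s) per method have
# already been estimated by the NGS procedure; NSR = s / b ranks precision.
methods = {"OSEM": (0.95, 0.12), "MAP": (0.90, 0.08)}   # hypothetical (b, s)
nsr = {m: s / b for m, (b, s) in methods.items()}
best = min(nsr, key=nsr.get)     # smaller NSR = better precision
print(nsr, "->", best)
```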
24 CFR 35.1335 - Standard treatments.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint Hazard Evaluation and Hazard Reduction Activities § 35.1335 Standard treatments. Standard... § 35.1330, unless it is found not to be a soil-lead hazard in accordance with § 35.1320(b). (e) Safe...
[Detection of KRAS mutation in colorectal cancer patients' cfDNA with droplet digital PCR].
Luo, Yuwen; Li, Yao
2018-03-25
This study aims to develop a new method for the detection of KRAS mutations related to colorectal cancer in cfDNA and to evaluate the sensitivity and accuracy of the detection. We designed a method for cfDNA-based KRAS detection by droplet digital PCR (ddPCR). The theoretical performance of the method was evaluated with reference standards and compared to the ARMS-PCR method. Two methods, ddPCR and qPCR, were successfully established to detect KRAS wild type and 7 mutants. Both methods were validated using plasmid standards and actual samples, and the results were evaluated by false-positive rate, linearity, and limit of detection (LOD). Finally, 52 plasma cfDNA samples from patients and 20 samples from healthy people were tested; the clinical sensitivity was 97.64% and the clinical specificity 81.43%. The ddPCR method showed higher performance than qPCR: its LOD reached single-digit cfDNA copy numbers, allowing detection of mutation abundances as low as 0.01% to 0.04%.
Towards standardized assessment of endoscope optical performance: geometric distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua
2013-12-01
Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
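A hedged example of the reported finding that a second-degree polynomial describes the radial distortion curve: fit distorted versus true radii from a grid target, then undistort by inverting the fit (the data here are synthetic):

```python
# Fit r_measured(r_true) with a 2nd-degree polynomial, then invert it.
import numpy as np

r_true = np.linspace(0, 1, 20)                 # normalized field positions
r_meas = r_true + 0.15 * r_true**2             # synthetic barrel-type distortion
coeffs = np.polyfit(r_true, r_meas, 2)         # [a, b, c] for a*r^2 + b*r + c

def undistort(rm):
    """Recover the true radius for a measured radius by root-finding."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - rm])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= 1.2)][0]  # keep the physical root

print(undistort(r_meas[10]), r_true[10])        # should agree closely
```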
Aurumskjöld, Marie-Louise; Söderberg, Marcus; Stålhammar, Fredrik; von Steyern, Kristina Vult; Tingberg, Anders; Ydström, Kristina
2018-06-01
Background In pediatric patients, computed tomography (CT) is important in the medical chain of diagnosing and monitoring various diseases. Because children are more radiosensitive than adults, they require minimal radiation exposure. One way to achieve this goal is to implement new technical solutions, like iterative reconstruction. Purpose To evaluate the potential of a new iterative model-based reconstruction method (IMR) for pediatric abdominal CT at a low radiation dose and to determine whether it maintains or improves image quality, compared to the current reconstruction method. Material and Methods Forty pediatric patients underwent abdominal CT. Twenty patients were examined with the standard dose settings and 20 patients were examined with a 32% lower radiation dose. Images from the standard examinations were reconstructed with a hybrid iterative reconstruction method (iDose4), and images from the low-dose examinations were reconstructed with both iDose4 and IMR. Image quality was evaluated subjectively by three observers, according to modified EU image quality criteria, and evaluated objectively based on the noise observed in liver images. Results Visual grading characteristics analyses showed no difference in image quality between the standard dose examination reconstructed with iDose4 and the low dose examination reconstructed with IMR. IMR showed lower image noise in the liver compared to iDose4 images. Inter- and intra-observer variance was low: the intraclass coefficient was 0.66 (95% confidence interval = 0.60-0.71) for the three observers. Conclusion IMR provided image quality equivalent or superior to the standard iDose4 method for evaluating pediatric abdominal CT, even with a 32% dose reduction.
Method and platform standardization in MRM-based quantitative plasma proteomics.
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H
2013-12-16
There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. © 2013.
2016-09-01
Toxicity Test Methods for Marine Water Quality Evaluations, by Alan J Kennedy, Guilherme Lotufo, Jennifer G. Laird, and J. Daniel Farrar. Cites: International Organization for Standardization (ISO). 2015. Water quality - calanoid copepod early-life stage test with Acartia tonsa. ISO 16778:2015. PURPOSE: The ... MPRSA evaluations in some regions. The organisms used in these test methods are not planktonic for most of their life cycles (juveniles and adults ...
1992-06-01
Report sections include criteria to hire civilians, professional qualification standards, classroom observation, and other methods to evaluate instruction. Survey items include: Question 22: Do you use classroom observation to evaluate instruction? (17 affirmative responses). Question 23: What other methods are used to evaluate classroom instruction?
Evaluation of a Proposed Drift Reduction Technology High-Speed Wind Tunnel Testing Protocol
2009-03-01
References the ASTM "Standard Test Method for Determining Liquid Drop Size Characteristics in a Spray Using Optical Nonimaging Light-Scattering Instruments," Annual Book of ASTM Standards.
Test Methodology to Evaluate the Safety of Materials Using Spark Incendivity
NASA Technical Reports Server (NTRS)
Buhler, Charles; Calle, Carlos; Clements, Sid; Ritz, Mindy; Starnes, Jeff
2007-01-01
For many years, scientists and engineers have been searching for the proper test method to evaluate the electrostatic risk of materials used in hazardous environments. A new test standard created by the International Electrotechnical Commission is a promising addition to the conventional test methods used throughout industry. The purpose of this paper is to incorporate this test into a proposed new methodology for the evaluation of materials exposed to flammable environments. However, initial testing using this new standard has uncovered unconventional behavior in materials that conventional test methods were thought to have settled. For example, some materials tested at higher humidities were more susceptible to incendive discharges than at lower humidity, even though their surface resistivity was lower.
DOT National Transportation Integrated Search
2011-10-30
The main aim of this project was to evaluate alternate standard test methods for stress corrosion cracking (SCC) and compare them with results from slow strain rate tests (SSRT) under equivalent environmental conditions. Another important aim of...
NASA Technical Reports Server (NTRS)
2009-01-01
This Interim Standard establishes requirements for evaluation, testing, and selection of materials that are intended for use in space vehicles, associated Ground Support Equipment (GSE), and facilities used during assembly, test, and flight operations. Included are requirements, criteria, and test methods for evaluating the flammability, offgassing, and compatibility of materials.
Reassessing RCTs as the "Gold Standard": Synergy Not Separatism in Evaluation Designs
ERIC Educational Resources Information Center
Hanley, Pam; Chambers, Bette; Haslam, Jonathan
2016-01-01
Randomised controlled trials (RCTs) are increasingly used to evaluate educational interventions in the UK. However, RCTs remain controversial for some elements of the research community. This paper argues that the widespread use of the term "gold standard" to describe RCTs is problematic, as it implies that other research methods are…
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
A Mapmark method of standard setting as implemented for the National Assessment Governing Board.
Schulz, E Matthew; Mitzel, Howard C
2011-01-01
This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.
Standardized methods for photography in procedural dermatology using simple equipment.
Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina
2017-04-01
Photography is an important tool in dermatology. Reproducing the settings of "before" photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips for obtaining photographs, both for clinical practice and for research in procedural dermatology, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.
Kaiser, G M; Wirges, U; Becker, S; Baier, C; Radunz, S; Kraus, H; Paul, A
2014-01-01
A challenge for solid organ transplantation in Germany is the shortage of organs. In an effort to increase donation rates, some federal states mandated hospitals to install transplantation officers to coordinate, evaluate, and enhance the donation and transplantation processes. In 2009 the German Foundation for Organ Transplantation (DSO) implemented the In-House Coordination Project, which includes retrospective, quarterly, information technology-based case analyses of all deceased patients with primary or secondary brain injury in regard to the organ donation process in maximum care hospitals. From 2006 to 2008 an analysis of potential organ donors was performed in our hospital using a time-consuming, complex method based on questionnaires, hand-written patient files, and the hospital IT documentation system (standard method). Analyses in the In-House Coordination Project are instead carried out by a proprietary semiautomated IT tool called Transplant Check, which uses easily accessible standard data records of the hospital controlling and accounting unit. The aim of our study was to compare the results of the standard method and Transplant Check in detecting and evaluating potential donors. To do so, the same period of time (2006 to 2008) was re-evaluated using the IT tool. Transplant Check was able to record significantly more patients who fulfilled the criteria for inclusion than the standard method (641 vs 424). The methods displayed a wide overlap, apart from 22 patients who were only recorded by the standard method. In these cases, the accompanying brain injury diagnosis was not recorded in the controlling and accounting unit data records due to little relative clinical significance. None of the 22 patients fulfilled the criteria for brain death. In summary, Transplant Check is an easy-to-use, reliable, and valid tool for evaluating donor potential in a maximum care hospital. Therefore, from 2010 on, analyses were performed exclusively with Transplant Check at our university hospital. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kim, W. S.; Seng, G. T.
1982-01-01
A rapid ultraviolet spectrophotometric method for the simultaneous determination of aromatics in middle-distillate fuels was developed and evaluated. In this method, alkylbenzenes, alkylnaphthalenes, alkylanthracenes/phenanthrenes, and total aromatics were determined from the ultraviolet spectra of the fuels. Accuracy and precision were determined using simulated standard fuels of known composition. The accuracy for the total aromatics fraction was 5% for a Jet A type fuel and 0.6% for a broadened-properties jet turbine fuel. Precision, expressed as relative standard deviation, ranged from 2.9% for the alkylanthracenes/phenanthrenes to 15.3% for the alkylbenzenes. The accuracy, however, was lower for actual fuel samples when compared with the results obtained by a mass spectrometric method. In addition, the ASTM D-1840 method for naphthalenes by ultraviolet spectroscopy was evaluated.
Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?
Rocha, José C; Passalia, Felipe; Matos, Felipe D; Maserati, Marc P; Alves, Mayra F; Almeida, Tamie G de; Cardoso, Bruna L; Basso, Andrea C; Nogueira, Marcelo F G
2016-08-01
Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied in assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect the overall morphological quality of the embryo in cattle or, more relevant in human embryo classification, the quality of the individual embryonic structures. This assessment is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it is not a method capable of giving reliable and trustworthy results. The latest approaches to improving quality assessment include the use of data from cellular metabolism, a new morphological grading system, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, ion release by the embryo cells, and so forth. There is now a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great importance to embryo evaluation by embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this sense, one being the use of digital images of the embryo as the basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... development of BG1Luc ER TA test method performance standards. ICCVAM assigned the activities a high priority... Vitro Test Methods for Detecting Potential Endocrine Disruptors. Research Triangle Park, NC: National...Final.pdf . ICCVAM. 2003a. ICCVAM Evaluation of In Vitro Test Methods For Detecting Potential Endocrine...
Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F
The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and to identify its normality limits. The sample consisted of 90 radiographic examinations of children's spines in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, the Pitman-Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the standard deviations of the differences were ±5.9° and ±6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than those of the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than those of the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
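The reproducibility statistics used here (SEM and MDC derived from the ICC) follow standard definitions; a minimal sketch with hypothetical values, not the study's data:

```python
import numpy as np

def sem_and_mdc(icc, sd, z=1.96):
    """Standard error of measurement and minimal detectable change:
    SEM = SD * sqrt(1 - ICC); MDC95 = SEM * z * sqrt(2)."""
    sem = sd * np.sqrt(1.0 - icc)
    return sem, sem * z * np.sqrt(2.0)

# Hypothetical values: ICC = 0.90, between-subject SD = 8 degrees.
sem, mdc = sem_and_mdc(icc=0.90, sd=8.0)
print(f"SEM = {sem:.2f} deg, MDC95 = {mdc:.2f} deg")
```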
Evaluating Sleep Disturbance: A Review of Methods
NASA Technical Reports Server (NTRS)
Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.
Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension
Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.
2009-01-01
Objective The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one method as compared with the other. Conclusion Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of an individual as having hypertension or normal blood pressure may vary. PMID:19433980
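The kappa agreement statistic reported here can be computed directly from paired classifications; a sketch with hypothetical labels (1 = hypertensive):

```python
import numpy as np

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two raters or methods."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                        # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)  # chance agreement
             for c in cats)
    return (po - pe) / (1.0 - pe)

# Hypothetical classifications of the same ABPM recordings by the
# original and the modified reference standards.
old = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
new = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
print(f"kappa = {cohen_kappa(old, new):.2f}")
```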
Ultrawide-field Fluorescein Angiography for Evaluation of Diabetic Retinopathy
Kong, Mingui; Lee, Mee Yon
2012-01-01
Purpose To investigate the advantages of ultrawide-field fluorescein angiography (FA) over the standard fundus examination in the evaluation of diabetic retinopathy (DR). Methods Ultrawide-field FAs were obtained in 118 eyes of 59 diabetic patients; 11 eyes with no DR, 71 eyes with nonproliferative diabetic retinopathy (NPDR), and 36 eyes with proliferative diabetic retinopathy (PDR), diagnosed by the standard method. The presence of peripheral abnormal lesions beyond the standard seven fields was examined. Results Ultrawide-field FA images demonstrated peripheral microaneurysms in six (54.5%) of 11 eyes with no DR and all eyes with moderate to severe NPDR and PDR. Peripheral retinal neovascularizations were detected in three (4.2%) of 71 eyes with NPDR and in 13 (36.1%) of 36 eyes with PDR. Peripheral vascular nonperfusion and vascular leakage were found in two-thirds of eyes with severe NPDR and PDR. Conclusions Ultrawide-field FA demonstrates peripheral lesions beyond standard fields, which can allow early detection and a close evaluation of DR. PMID:23204797
[Thinking about vertigo effectiveness evaluation methods in clinical research of Chinese medicine].
Liu, Hong-mei; Li, Tao
2014-10-01
Vertigo is a subjective sensation reported by patients. The severity of vertigo is closely related to many factors, but we lack a well-accepted quantitative evaluation method capable of accurately and comprehensively evaluating vertigo in the clinic. Reducing the onset of vertigo, enhancing the recovery of equilibrium function, and improving the quality of life of vertigo patients should be taken as the focus of evaluating therapeutic effects. As for establishing a Chinese medical effectiveness evaluation system for vertigo, we believe we should first distinguish different "diseases", roughly dividing vertigo into systemic and non-systemic vertigo. For systemic vertigo, efficacy could be comprehensively evaluated by the UCLA vertigo questionnaire or the dizziness handicap inventory combined with equilibrium function testing indices. For non-systemic vertigo, efficacy could be comprehensively evaluated by taking the UCLA vertigo questionnaire or the dizziness handicap inventory as the main efficacy indices. Secondly, we should analyze the different causes of vertigo, choose symptoms and signs in line with vertigo features as well as with Chinese medical theories, and formulate corresponding syndrome effectiveness standards for different diseases. We should not simply take syndrome diagnosis standards as efficacy evaluation standards.
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
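Computing that sensitivity, the fraction of reference-identified failures that the candidate method also catches, is straightforward; a sketch with hypothetical subject IDs (the 0.95 threshold is the ANSI criterion named above):

```python
def fit_test_sensitivity(candidate_fails, reference_fails):
    """Sensitivity of a candidate fit-test: the fraction of fits judged
    unacceptable by the reference method that the candidate also fails.
    ANSI Z88.10 asks for sensitivity >= 0.95 against the reference."""
    ref = set(reference_fails)
    return len(ref & set(candidate_fails)) / len(ref) if ref else float("nan")

# Hypothetical subject IDs failed by each method over 75 tests.
cnp_fails      = {3, 7, 9, 12, 15, 21, 30, 41}  # controlled negative pressure
particle_fails = {3, 9, 12, 21, 30}             # particle counting
print(fit_test_sensitivity(particle_fails, cnp_fails))  # 0.625
```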
Establishment of a bioassay for the toxicity evaluation and quality control of Aconitum herbs.
Qin, Yi; Wang, Jia-bo; Zhao, Yan-ling; Shan, Li-mei; Li, Bao-cai; Fang, Fang; Jin, Cheng; Xiao, Xiao-he
2012-01-15
Currently, no bioassay is available for evaluating the toxicity of Aconitum herbs, which are well known for their lethal cardiotoxicity and neurotoxicity. In this study, we established a bioassay to evaluate the toxicity of Aconitum herbs. Test sample and standard solutions were administered to rats by intravenous infusion to determine their minimum lethal doses (MLD). Toxic potency was calculated by comparing the MLD. The experimental conditions of the method were optimized and standardized to ensure the precision and reliability of the bioassay. The application of the standardized bioassay was then tested by analyzing 18 samples of Aconitum herbs. Additionally, three major toxic alkaloids (aconitine, mesaconitine, and hypaconitine) in Aconitum herbs were analyzed using a liquid chromatographic method, which is the current method of choice for evaluating the toxicity of Aconitum herbs. We found that for all Aconitum herbs, the total toxicity of the extract was greater than the toxicity of the three alkaloids. Therefore, these three alkaloids failed to account for the total toxicity of Aconitum herbs. Compared with individual chemical analysis methods, the chief advantage of the bioassay is that it characterizes the total toxicity of Aconitum herbs. An incorrect toxicity evaluation caused by quantitative analysis of the three alkaloids might be effectively avoided by performing this bioassay. This study revealed that the bioassay is a powerful method for the safety assessment of Aconitum herbs. Copyright © 2011 Elsevier B.V. All rights reserved.
ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES
LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.
2008-01-01
Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
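For context, a common baseline for combining per-source reliabilities into a single interaction confidence is the noisy-OR rule; this is a generic illustration, not the combination method the paper proposes:

```python
def combine_noisy_or(reliabilities):
    """Noisy-OR combination: the probability that an interaction is real,
    given independent sources with reliabilities r_i that each report it:
    p = 1 - prod(1 - r_i)."""
    p_false = 1.0
    for r in reliabilities:
        p_false *= (1.0 - r)
    return 1.0 - p_false

# Two independent sources with reliabilities 0.5 and 0.7.
print(combine_noisy_or([0.5, 0.7]))  # 0.85
```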
A Preliminary Rubric Design to Evaluate Mixed Methods Research
ERIC Educational Resources Information Center
Burrows, Timothy J.
2013-01-01
With the increase in the frequency of the use of mixed methods, both in research publications and in externally funded grants, there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…
Duct Leakage Repeatability Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain; Sherman, Max
2014-08-01
The purpose of this report is to evaluate the repeatability of the three most significant measurement techniques for duct leakage using data from the literature and recently obtained field data. We will also briefly discuss the first two factors. The main question to be answered by this study is whether differences in the repeatability of these test methods are sufficient to indicate that any of these methods is so poor that it should be excluded from consideration as an allowed procedure in codes and standards. The three duct leak measurement methods assessed in this report are the two duct pressurization methods that are commonly used by many practitioners and the DeltaQ technique. These are methods B, C and A, respectively, of the ASTM E1554 standard. Although it would be useful to evaluate other duct leak test methods, this study focused on those test methods that are commonly used and are required in various test standards, such as BPI (2010), RESNET (2014), ASHRAE 62.2 (2013), California Title 24 (CEC 2012), DOE Weatherization and many other energy efficiency programs.
ERIC Educational Resources Information Center
Thomas, Sonya C.
2013-01-01
Writing is seldom explicitly taught, most specifically, in academic and scholarly writing. Therefore, this mixed methods correlational phenomenology research study explored the correlation between self-efficacy perception and course room preparation for the comprehensive examination, APA standards in the course room, APA standards evaluation for…
Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca
2017-04-01
The commercial value of virgin olive oils (VOOs) strongly depends on their classification, which is also based on the aroma of the oils, usually evaluated by a panel test. A reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes; and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based upon the use of an external standard method, or of only a single internal standard (ISTD) for data normalization in an internal standard method, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and to improve quantitation of VOO VOCs. As many as 11 ISTDs were used for quantitation of 71 VOCs. For each VOC the most suitable ISTD was selected, and good linearity over a wide calibration range was obtained. For every compound except E-2-hexenal, the linear calibration range obtained without an ISTD, or with an unsuitable ISTD, was narrower than that obtained with a suitable ISTD, confirming the usefulness of multiple internal standard normalization for correct quantitation of the VOC profile in VOOs. The method was validated for 71 VOCs and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis in the evaluation of positive and negative VOO attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
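The core of internal-standard quantitation is converting the analyte/ISTD peak-area ratio to a concentration through a calibration slope; a minimal sketch with illustrative numbers, not the paper's data:

```python
import numpy as np

def calibrate(conc, area_ratio):
    """Least-squares slope of area ratio vs. concentration (through origin)."""
    conc, ratio = np.asarray(conc), np.asarray(area_ratio)
    return np.sum(conc * ratio) / np.sum(conc ** 2)

def quantify(analyte_area, istd_area, slope):
    """Concentration from the analyte/ISTD peak-area ratio."""
    return (analyte_area / istd_area) / slope

# Calibration for one VOC against its best-matched ISTD (illustrative).
slope = calibrate(conc=[0.1, 0.5, 1.0, 5.0],            # mg/kg
                  area_ratio=[0.08, 0.41, 0.83, 4.1])
print(f"{quantify(analyte_area=5.2e5, istd_area=6.4e5, slope=slope):.2f} mg/kg")
```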
Evaluation of 3 dental unit waterline contamination testing methods
Porteous, Nuala; Sun, Yuyu; Schoolfield, John
2015-01-01
Previous studies have found inconsistent results from testing methods used to measure heterotrophic plate count (HPC) bacteria in dental unit waterline (DUWL) samples. This study used 63 samples to compare the results obtained from an in-office chairside method and 2 currently used commercial laboratory HPC methods (Standard Methods 9215C and 9215E). The results suggest that the Standard Method 9215E is not suitable for application to DUWL quality monitoring, due to the detection of limited numbers of heterotrophic organisms at the required 35°C incubation temperature. The results also confirm that while the in-office chairside method is useful for DUWL quality monitoring, the Standard Method 9215C provided the most accurate results. PMID:25574718
ERIC Educational Resources Information Center
Chang, Edward C.; Yu, Tina; Chang, Olivia D.; Jilani, Zunaira
2016-01-01
Objectives: The present study examined perfectionism (viz, evaluative concerns and personal standards) and ethnicity as predictors of body dissatisfaction in female college students. Participants: Participants were 298 female college students sampled by December of 2013. Methods: A self-report survey with measures of body dissatisfaction,…
ERIC Educational Resources Information Center
Nixon, Lisa
2013-01-01
The purpose of this mixed methods study was to determine the key implementation issues of a standards-based teacher evaluation system as perceived by campus administrators. The 80 campus administrators that participated in this study were from six public school districts located in southeastern Texas that serve students in grades Kindergarten…
Evaluation of experimental methods for assessing safety for ultrasound radiation force elastography.
Skurczynski, M J; Duck, F A; Shipley, J A; Bamber, J C; Melodelima, D
2009-08-01
Standard test tools have been evaluated for the assessment of safety associated with a prototype transducer intended for a novel radiation force elastographic imaging system. In particular, safety has been evaluated by direct measurement of temperature rise, using a standard thermal test object, and detection of inertial cavitation from acoustic emission. These direct measurements have been compared with values of the thermal index and mechanical index, calculated from acoustic measurements in water using standard formulae. It is concluded that measurements using a thermal test object can be an effective alternative to the calculation of thermal index for evaluating thermal hazard. Measurement of the threshold for cavitation was subject to considerable variability, and it is concluded that the mechanical index still remains the preferred standard means for assessing cavitation hazard.
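For context, the two indices being compared against direct measurement have simple standard definitions; a sketch that omits the derating and tissue-model details of the full regulatory formulae:

```python
def mechanical_index(p_neg_mpa, f_mhz):
    """MI = derated peak rarefactional pressure (MPa) divided by the
    square root of the centre frequency (MHz)."""
    return p_neg_mpa / f_mhz ** 0.5

def thermal_index(w_acoustic, w_deg):
    """TI = emitted acoustic power over the power needed to raise
    tissue temperature by 1 degree C under the chosen tissue model."""
    return w_acoustic / w_deg

print(mechanical_index(p_neg_mpa=1.5, f_mhz=3.0))  # ~0.87
```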
ERIC Educational Resources Information Center
Blackwell, H. Richard
1963-01-01
An application method for evaluating the visual significance of reflected glare is described, based upon a number of decisions with respect to the relative importance of various aspects of visual performance. A standardized procedure for evaluating the overall effectiveness of lighting from photometric data on materials or installations is needed…
[Research progress on mechanical performance evaluation of artificial intervertebral disc].
Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang
2018-03-01
The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing methods, based on different tools, are involved in the mechanical performance evaluation of AID: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AID are first introduced. The present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device pushout tests, core pushout tests, subsidence tests, and so on is then reviewed. The experimental techniques of the in vitro specimen testing method and the testing results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast. The simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.
Standard Test Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Masters, John E.; Portanova, Marc A.
1996-01-01
Standard testing methods for composite laminates reinforced with continuous networks of braided, woven, or stitched fibers have been evaluated. The microstructure of these 'textile' composite materials differs significantly from that of tape laminates. Consequently, specimen dimensions and loading methods developed for tape type composites may not be applicable to textile composites. To this end, a series of evaluations were made comparing testing practices currently used in the composite industry. Information was gathered from a variety of sources and analyzed to establish a series of recommended test methods for textile composites. The current practices established for laminated composite materials by ASTM and the MIL-HDBK-17 Committee were considered. This document provides recommended test methods for determining both in-plane and out-of-plane properties. Specifically, test methods are suggested for: unnotched tension and compression; open and filled hole tension; open hole compression; bolt bearing; and interlaminar tension. A detailed description of the material architectures evaluated is also provided, as is a recommended instrumentation practice.
A standard telemental health evaluation model: the time is now.
Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A
2012-05-01
The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given where telemental health is at in terms of its stage of scientific progress. We will broadly recommend some elements of what such a standard evaluation model might include for telemental health and suggest a way forward for adopting such a model.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if the reference standard relies on a different procedure than the one to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to support statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standard.
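The gold/silver/plastic rule defined by these criteria is mechanical enough to transcribe directly; a sketch:

```python
def classify_reference_standard(criteria_met):
    """Gold: Criteria 1-3 all satisfied. Silver: Criterion 1 plus
    Criterion 2 or Criterion 3. Plastic: anything else.
    `criteria_met` is a set of criterion numbers from 1 to 5."""
    if {1, 2, 3} <= criteria_met:
        return "gold"
    if 1 in criteria_met and (2 in criteria_met or 3 in criteria_met):
        return "silver"
    return "plastic"

print(classify_reference_standard({1, 3, 4}))  # silver
```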
LABORATORY TOXICITY TESTS FOR EVALUATING POTENTIAL EFFECTS OF ENDOCRINE-DISRUPTING COMPOUNDS
The scope of the Laboratory Testing Work Group was to evaluate methods for testing aquatic and terrestrial invertebrates in the laboratory. Specifically, discussions focused on the following objectives: 1) assess the extent to which consensus-based standard methods and other pub...
10 CFR 963.13 - Preclosure suitability evaluation method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of the structures, systems, components, equipment, and operator actions intended to mitigate or... and the criteria in § 963.14. DOE will consider the performance of the system in terms of the criteria... protection standard. (b) The preclosure safety evaluation method, using preliminary engineering...
Zhang, Hua; Chen, Qing-song; Li, Nan; Hua, Yan; Zeng, Lin; Xu, Guo-yang; Tao, Li-yuan; Zhao, Yi-ming
2013-05-01
To compare the results of noise hazard evaluations based on area sampling and personal sampling in a new thermal power plant, and to analyze the similarities and differences between the two measurement methods. According to Measurement of Physical Agents in the Workplace, Part 8: Noise (GBZ/T 189.8-2007), area sampling was performed at various operating points for noise measurement, while workers in different types of work wore noise dosimeters for personal noise exposure measurement. The two measurement methods were used to evaluate the level of noise hazards in the enterprise according to the corresponding occupational health standards, and the evaluation results were compared. Area sampling was performed at 99 operating points; the mean noise level was 88.9 ± 11.1 dB(A) (range, 51.3-107.0 dB(A)), with an over-standard rate of 75.8%. Personal sampling was performed for 73 person-times, and the mean noise level was 79.3 ± 6.3 dB(A), with an over-standard rate of 6.6% (16/241). There was a statistically significant difference in the over-standard rate between the evaluation results of the two measurement methods (χ² = 53.869, P < 0.001). Because of the characteristics of the work in new thermal power plants, noise hazard evaluation based on area sampling cannot substitute for personal noise exposure measurement among workers. Personal sampling should be used for noise measurement in new thermal power plants.
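The chi-square comparison of over-standard rates is a standard 2x2 contingency test; a sketch using the counts given in the abstract (which are partly inconsistent, 73 person-times vs. 16/241 measurements, so this need not reproduce the reported statistic exactly):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sampling method; columns: [over-standard, compliant].
# Counts from the abstract: 75 of 99 area points and 16 of 241
# personal measurements exceeded the standard.
table = np.array([[75, 99 - 75],
                  [16, 241 - 16]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```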
ERIC Educational Resources Information Center
Budd, Julia M.
2018-01-01
Evaluating cross-disciplinary collaboration has generally been undertaken using disciplinary standards. However, this practice is increasingly being found to be inadequate due to the often contradictory nature of the methods used. It has been suggested that methods that consider the unique integrative nature of these studies be employed. This…
Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana
2016-05-01
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher
2017-01-01
ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...
Matrix effect and recovery terminology issues in regulated drug bioanalysis.
Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard
2012-02-01
Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because it covers ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
NASA Astrophysics Data System (ADS)
shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu
2017-11-01
The result of remanufacturing evaluation is the basis for judging whether a heavy-duty machine tool can be remanufactured at the end-of-life (EOL) stage of the machine tool lifecycle. The objectivity and accuracy of the evaluation depend on the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy-duty machine tool remanufacturing, and the results are modified by the comprehensive adjustment method, which makes the evaluation results accord with the standard of conventional human thinking. The catastrophe progression method is used to establish a quantitative evaluation model for heavy-duty machine tools and to evaluate the remanufacturing of a retired TK6916 CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.
Ashley, Kevin; Brisson, Michael J; Howe, Alan M; Bartley, David L
2009-12-01
A collaborative interlaboratory evaluation of a newly standardized inductively coupled plasma mass spectrometry (ICP-MS) method for determining trace beryllium in workplace air samples was carried out toward fulfillment of method validation requirements for ASTM International voluntary consensus standard test methods. The interlaboratory study (ILS) was performed in accordance with an applicable ASTM International standard practice, ASTM E691, which describes statistical procedures for investigating interlaboratory precision. Uncertainty was also estimated in accordance with ASTM D7440, which applies the International Organization for Standardization Guide to the Expression of Uncertainty in Measurement to air quality measurements. Performance evaluation materials (PEMs) used consisted of 37 mm diameter mixed cellulose ester filters that were spiked with beryllium at levels of 0.025 (low loading), 0.5 (medium loading), and 10 (high loading) µg Be/filter; these spiked filters were prepared by a contract laboratory. Participating laboratories were recruited from a pool of over 50 invitees; ultimately, 20 laboratories from Europe, North America, and Asia submitted ILS results. Triplicates of each PEM (blanks plus the three different loading levels) were conveyed to each volunteer laboratory, along with a copy of the draft standard test method that each participant was asked to follow; spiking levels were unknown to the participants. The laboratories were requested to prepare the PEMs by one of three sample preparation procedures (hotplate or microwave digestion or hotblock extraction) that were described in the draft standard. Participants were then asked to analyze aliquots of the prepared samples by ICP-MS and to report their data in units of µg Be/filter sample. Interlaboratory precision estimates from participating laboratories, computed in accordance with ASTM E691, were 0.165, 0.108, and 0.151 (relative standard deviation) for the PEMs spiked at 0.025, 0.5, and 10 µg Be/filter, respectively. Overall recoveries were 93.2%, 102%, and 80.6% for the low, medium, and high beryllium loadings, respectively. Expanded uncertainty estimates for interlaboratory analysis of low, medium, and high beryllium loadings, calculated in accordance with ASTM D7440, were 18.8%, 19.8%, and 24.4%, respectively. These figures of merit support promulgation of the analytical procedure as an ASTM International standard test method, ASTM D7439.
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are highly necessary, which usually involves the difficult task of establishing cut scores. Contrasting with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based tests interpretations.
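To make the ICC machinery concrete, a sketch with a two-parameter logistic ICC; the inversion shown is one natural ICC-based mapping to a cut score, not necessarily the authors' exact transformation:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def theta_at_probability(a, b, rp=0.67):
    """Invert the ICC: the ability at which the response probability
    reaches rp, one way to anchor a cut score on the curve."""
    return b + np.log(rp / (1.0 - rp)) / a

cut = theta_at_probability(a=1.2, b=0.3)
print(f"theta cut = {cut:.2f}, P(correct) = {icc_2pl(cut, 1.2, 0.3):.2f}")
```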
Endo, Yasushi
2018-01-01
Edible fats and oils are among the basic components of the human diet, along with carbohydrates and proteins, and they are a source of high energy and of essential fatty acids such as linoleic and linolenic acids. Edible fats and oils are used for pan- and deep-frying, and in salad dressings, mayonnaise and processed foods such as chocolates and creams. The physical and chemical properties of edible fats and oils can affect the quality of oil-based foods and hence must be evaluated in detail. The physical characteristics of edible fats and oils include color, specific gravity, refractive index, melting point, congeal point, smoke point, flash point, fire point, and viscosity, while the chemical characteristics include acid value, saponification value, iodine value, fatty acid composition, trans isomers, triacylglycerol composition, unsaponifiable matter (sterols, tocopherols) and minor components (phospholipids, chlorophyll pigments, glycidyl fatty acid esters). Peroxide value, p-anisidine value, carbonyl value, polar compounds and polymerized triacylglycerols are indexes of the deterioration of edible fats and oils. This review describes the analytical methods used to evaluate the quality of edible fats and oils, especially the Standard Methods for the Analysis of Fats, Oils and Related Materials edited by the Japan Oil Chemists' Society (the JOCS standard methods), as well as more advanced methods.
Open-source platform to benchmark fingerprints for ligand-based virtual screening
2013-01-01
Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
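The basic operation such a benchmark repeats, fingerprinting and similarity scoring, can be illustrated with RDKit (named here as an assumption for illustration; the abstract does not specify the toolkit):

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Morgan (ECFP-like) fingerprints and Tanimoto similarity: the
# query/active comparison a 2D-fingerprint benchmark repeats.
query  = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
active = Chem.MolFromSmiles("OC(=O)c1ccccc1O")        # salicylic acid
fp_q = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
fp_a = AllChem.GetMorganFingerprintAsBitVect(active, 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fp_q, fp_a))
```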
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10². Calculations with specified Hurst exponent values of 0.2, 0.3, …, 0.9 show that, using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
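A bare-bones sketch of plain diffusion entropy analysis, the quantity the proposed balanced estimator refines; this omits the correlation-dependent correction that is the paper's actual contribution:

```python
import numpy as np

def dea_exponent(x, windows):
    """Plain diffusion entropy analysis: the entropy of the diffusion
    variable grows as S(t) = A + delta * ln(t); the fitted slope
    estimates the scaling exponent delta."""
    x = np.asarray(x, dtype=float)
    entropies = []
    for t in windows:
        # Overlapping diffusion trajectories of length t.
        y = np.array([x[i:i + t].sum() for i in range(len(x) - t + 1)])
        dens, edges = np.histogram(y, bins=50, density=True)
        widths = np.diff(edges)
        mask = dens > 0
        entropies.append(-np.sum(dens[mask] * widths[mask]
                                 * np.log(dens[mask])))
    return np.polyfit(np.log(list(windows)), entropies, 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)  # uncorrelated noise: delta ~ 0.5
print(f"delta = {dea_exponent(noise, range(5, 60, 5)):.2f}")
```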
A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source
NASA Astrophysics Data System (ADS)
Spina, G.; Lantieri, M.
2014-01-01
In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless f_s factor of the source at the position of the sample is needed. A novel method to evaluate f_s for a 57Co source is proposed. The method uses the standard transmission experimental setup and needs no measurements beyond those that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. Firstly, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick iron metal foil absorber and two Pulse Height Analysis (PHA) spectra with the same live time, setting the maximum velocity of the transducer at the same value as for the MCS spectrum. Secondly, f_s is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.
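For reference, the recoilless fraction being estimated has the standard textbook form (a general definition, not a result of this paper):

```latex
f_s = \exp\!\left(-k^2 \langle x^2 \rangle\right),
\qquad k = \frac{E_\gamma}{\hbar c},
```

where \langle x^2 \rangle is the mean-square displacement of the emitting nucleus along the photon direction and E_\gamma = 14.4 keV for the 57Fe Mössbauer transition.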
Does daily nurse staffing match ward workload variability? Three hospitals' experiences.
Gabbay, Uri; Bukchin, Michael
2009-01-01
Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nursing staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures, dividing observed by expected. Two charts, acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take patient severity into account since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance of, and the need for, process evaluations and monitoring. The methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation using nurse workload capacity statistical process control charts, or for inter-ward evaluation using a standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for evaluating daily staffing appropriateness, which is easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. The method is not limited to crude measures; rather, it can use weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).
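A minimal sketch of the control-chart calculation described, applied to a standardized observed/expected workload measure (the values are hypothetical):

```python
import numpy as np

def spc_limits(daily_values, sigma=3.0):
    """Shewhart-style control limits for a ward's daily workload measure:
    centre line at the mean, limits at +/- sigma standard deviations.
    Standardizing values as observed/expected allows inter-ward
    comparison, as described in the paper."""
    v = np.asarray(daily_values, dtype=float)
    centre, s = v.mean(), v.std(ddof=1)
    return centre - sigma * s, centre, centre + sigma * s

# Hypothetical observed/expected nurse workload capacity over 10 days.
ratios = [1.02, 0.95, 1.10, 0.99, 1.05, 0.88, 1.01, 0.97, 1.12, 0.93]
lcl, centre, ucl = spc_limits(ratios)
in_control = sum(lcl <= r <= ucl for r in ratios)
print(f"LCL={lcl:.2f} centre={centre:.2f} UCL={ucl:.2f}, "
      f"{in_control}/10 days in control")
```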
Berthon, Beatrice; Spezi, Emiliano; Galavis, Paulina; Shepherd, Tony; Apte, Aditya; Hatt, Mathieu; Fayad, Hadi; De Bernardi, Elisabetta; Soffientini, Chiara D; Ross Schmidtlein, C; El Naqa, Issam; Jeraj, Robert; Lu, Wei; Das, Shiva; Zaidi, Habib; Mawlawi, Osama R; Visvikis, Dimitris; Lee, John A; Kirov, Assen S
2017-08-01
The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for evaluation and validation of PET-auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches and the description of available metrics. The benchmark was designed in a way that it could be extended by inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset for the purpose of testing and demonstrating the capabilities of the software as a benchmark platform. A selection of clinical, physical, and simulated phantom data, including "best estimate" reference contours from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics such as the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S) were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the evaluation of the performance of PET-AS algorithms against the reference contours was built. The variation of the metric agreement values with the reference contours across the PET-AS methods evaluated for demonstration was between 0.51 and 0.83, 0.44 and 0.86, and 0.61 and 1.00 for the DSC, PPV, and S metrics, respectively. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state-of-the-art. PETASset provides a platform that allows standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users willing to evaluate their PET-AS methods and to contribute more evaluation datasets. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
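The three agreement metrics named (DSC, PPV, S) are simple functions of voxel-wise overlap counts; a sketch on toy binary masks:

```python
import numpy as np

def segmentation_metrics(seg, ref):
    """Dice similarity coefficient, positive predictive value and
    sensitivity for binary segmentation masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    sens = tp / (tp + fn)
    return dsc, ppv, sens

seg = np.zeros((8, 8), dtype=bool); seg[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True  # 16 voxels
print(segmentation_metrics(seg, ref))  # 9-voxel overlap: all 0.5625
```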
In July 1997, EPA promulgated a new National Ambient Air Quality Standard (NAAQS) for fine particulate matter (PM2.5). This new standard was based on collection of an integrated mass sample on a filter. Field studies have demonstrated that the collection of semivolatile compoun...
A new in vitro method to evaluate radio-opacity of endodontic sealers
Malka, V B; Hochscheidt, G L; Larentis, N L; Grecca, F S; Kopper, P M P
2015-01-01
Objectives: To evaluate a new method for assessing the radio-opacity of endodontic sealers and to compare radio-opacity values with a well-established standard method. Methods: The sealers evaluated in this study were AH Plus® (Dentsply DeTrey GmbH, Konstanz, Germany), Endo CPM Sealer (EGEO SRL, Buenos Aires, Argentina) and MTA Fillapex® (Angelus Dental Products Industry S/A, Londrina, Parana, Brazil). Two methods were used to evaluate radio-opacity: (D) standard discs and (S) a tissue simulator. For (D), ten standard discs were prepared for each sealer and were radiographed using Digora® phosphor storage plates (Soredex; Orion Corporation, Helsinki, Finland), alongside an aluminium stepwedge. For (S), polyethylene tubes filled with sealer (n = 10 for each) were radiographed inside the simulator as described. The digital images were analysed using Adobe Photoshop® software v. 10.0 (Adobe Systems, San Jose, CA). To compare the radio-opacity among the sealers, the data were analysed by ANOVA and Tukey's test, and to compare methods, they were analysed by the Mann–Whitney U test. To compare the data obtained from dentin and sealers in method (S), Student's paired t-test was used (α = 0.05). Results: In both methods, the sealers showed significant differences, in the following decreasing order: AH Plus, MTA Fillapex and Endo CPM. In (D), MTA Fillapex and Endo CPM showed less radio-opacity than aluminium. For all of the materials, the radio-opacity was higher in (S) than in (D). Compared with dentin, all of the materials were more radio-opaque. Conclusions: The comparison of the two assessment methods for sealer radio-opacity testing validated the use of a tissue simulator block. PMID:25651275
Metrics for Offline Evaluation of Prognostic Performance
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2010-01-01
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is due in part to the varied end-user requirements of different applications: time scales, available information, domain dynamics, and other factors. The research community has used a variety of metrics chosen largely on the basis of convenience and their respective requirements, and very little attention has been focused on establishing a standardized approach for comparing different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms, and in addition to quantitative assessment they offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
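As one concrete example of the kind of metric discussed in this line of work, relative accuracy compares a remaining-useful-life (RUL) prediction against the true RUL at a chosen evaluation time. The sketch below is a simplified, point-estimate formulation, not the paper's exact definition (which also accommodates probabilistic predictions).

```python
def relative_accuracy(true_rul, predicted_rul):
    """Relative accuracy of a remaining-useful-life (RUL) prediction at
    a single evaluation time: 1 is perfect; 0 or below means the error
    is at least as large as the true RUL itself."""
    return 1.0 - abs(true_rul - predicted_rul) / true_rul

# Hypothetical example: true RUL is 100 h, the algorithm predicts 88 h.
print(relative_accuracy(100.0, 88.0))  # -> 0.88
```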
ERIC Educational Resources Information Center
Catano, Nancy; Stronge, James H.
2007-01-01
This study used both quantitative and qualitative methods of content analysis to examine principal evaluation instruments and state and professional standards for principals in school districts located in a mid-Atlantic state in the USA. The purposes of this study were to (a) determine the degrees of emphasis that are placed upon leadership and…
Using a normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders through clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs, so standardizing the analysis procedure is fundamental and important to improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. The standard 3D brain model, which shows well-defined brain regions, then replaced the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation scores, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the traditional procedure of manually drawing ROIs slice by slice on structural medical images; it not only provides precise analysis results but also improves the processing rate for large volumes of medical images in clinical practice.
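The mutual information registration step mentioned above maximizes a histogram-based similarity measure over the transform parameters. A minimal sketch of the measure itself, assuming two same-shape NumPy images, is given below; a full registration pipeline would wrap this in an optimizer over rigid-transform parameters.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two images of the
    same shape, the similarity measure maximized in MI registration."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image B
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image is maximally informative about itself, far less so about noise:
rng = np.random.default_rng(4)
a = rng.random((64, 64))
print(mutual_information(a, a), mutual_information(a, rng.random((64, 64))))
```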
Lin, Long-Ze; Harnly, James M
2008-11-12
A screening method using LC-DAD-ESI/MS was developed for the identification of common hydroxycinnamoylquinic acids based on direct comparison with standards. A complete standard set for mono-, di-, and tricaffeoylquinic isomers was assembled from commercially available standards, positively identified compounds in common plants (artichokes, asparagus, coffee bean, honeysuckle flowers, sweet potato, and Vernonia amygdalina leaves) and chemically modified standards. Four C18 reversed phase columns were tested using the standardized profiling method (based on LC-DAD-ESI/MS) for 30 phenolic compounds, and their elution order and retention times were evaluated. Using only two columns under standardized LC condition and the collected phenolic compound database, it was possible to separate all of the hydroxycinnamoylquinic acid conjugates and to identify 28 and 18 hydroxycinnamoylquinic acids in arnica flowers (Arnica montana L.) and burdock roots (Arctium lappa L.), respectively. Of these, 22 are reported for the first time.
Sert, Şenol
2013-07-01
A comparison of methods for the determination (without sample pre-concentration) of uranium in ore by inductively coupled plasma optical emission spectrometry (ICP-OES) was performed. The experiments were conducted using three procedures, matrix matching, plasma optimization, and internal standardization, for three emission lines of uranium. Three wavelengths of Sm were tested as internal standard for the internal standardization method. The robust conditions were evaluated by varying the applied radiofrequency power, nebulizer argon gas flow rate, and sample uptake flow rate, and by considering the intensity ratio of the Mg(II) 280.270 nm and Mg(I) 285.213 nm lines. Analytical characterization of the method was assessed by limit of detection and relative standard deviation values. The certified reference soil sample IAEA S-8 was analyzed, and uranium determination at 367.007 nm with internal standardization using Sm at 359.260 nm was shown to improve accuracy compared with the other methods. The developed method was used for real uranium ore sample analysis.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
Reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of the reference standard during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility of obtaining, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority, as the second choice. The determination results verified the evaluation ability of this model. The AHP allowed comprehensive consideration of the benefits and risks of the alternatives and proved an effective and practical tool for optimizing reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
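In standard AHP analysis, the priority of each alternative or criterion is derived from the principal eigenvector of a pairwise-comparison matrix. A minimal sketch with a hypothetical 3-criterion matrix (the paper's actual matrices and criteria weights are not reproduced here):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical matrix: criterion 1 is judged 3x as important as
# criterion 2 and 5x as important as criterion 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_priorities(A))  # e.g. ~[0.65, 0.23, 0.12]
```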
Guo-Qiang, Zhang; Yan, Huang; Licong, Cui
2017-01-01
We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes were constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% changes with 26.4% precision with a G-score of 0.373. Even though this G-score is significantly lower in comparison to those in information retrieval, it breaks new ground in that such evaluations have never performed before in the highly discovery-oriented setting of OQA. PMID:29854262
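The G-score quoted here is the geometric mean of precision and recall, which can be checked directly against the reported figures (precision 26.4%, recall 52.8%):

```python
from math import sqrt

def g_score(precision, recall):
    """Geometric mean of precision and recall (the G-measure)."""
    return sqrt(precision * recall)

# Non-lattice figures from the abstract: 52.8% of IS-A changes covered
# (recall) at 26.4% precision -> G ~= 0.373, matching the reported value.
print(round(g_score(0.264, 0.528), 3))
```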
Evaluation of Alternative Difference-in-Differences Methods
ERIC Educational Resources Information Center
Yu, Bing
2013-01-01
Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
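For reference, the canonical 2x2 DID estimator that the standard method rests on subtracts the control group's pre-post change from the treated group's pre-post change. A toy sketch with hypothetical mean test scores; the abstract's point is precisely that this simple estimator can be biased when concurrent events affect groups differently.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimate: the change in
    the treated group minus the change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean scores before/after a policy change:
print(did_estimate(70.0, 78.0, 71.0, 74.0))  # estimated effect = 5.0
```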
The U.S. Environmental Protection Agency (EPA), Research Triangle Park, North Carolina, has a program to evaluate and standardize source testing methods for hazardous pollutants in support of current and future air quality regulations. ccasionally, questions arise concerning an e...
Reform of the Method for Evaluating the Teaching of Medical Linguistics to Medical Students
ERIC Educational Resources Information Center
Zhang, Hongkui; Wang, Bo; Zhang, Longlu
2014-01-01
Exploring reform of the teaching evaluation method for vocational competency-based education (CBE) curricula for medical students is a very important process in following international medical education standards, intensifying education and teaching reforms, enhancing teaching management, and improving the quality of medical education. This…
Evaluation of isolation methods for bacterial RNA quantitation in Dickeya dadantii
USDA-ARS?s Scientific Manuscript database
Dickeya dadantii is a difficult source for RNA of a sufficient quality for real-time qRT-PCR analysis of gene expression. Three RNA isolation methods were evaluated for their ability to produce high-quality RNA from this bacterium. Bacterial lysis with Trizol using standard protocols consistently ga...
ERIC Educational Resources Information Center
Wootton-Gorges, Sandra L.; Stein-Wexler, Rebecca; Walton, John W.; Rosas, Angela J.; Coulter, Kevin P.; Rogers, Kristen K.
2008-01-01
Purpose: Chest radiographs (CXR) are the standard method for evaluating rib fractures in abused infants. Computed tomography (CT) is a sensitive method to detect rib fractures. The purpose of this study was to compare CT and CXR in the evaluation of rib fractures in abused infants. Methods: This retrospective study included all 12 abused infants…
Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis
1999-01-01
To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...
NASA Astrophysics Data System (ADS)
Smallwood, Jeremy; Swenson, David E.
2011-06-01
Evaluation of the electrostatic performance of footwear and flooring in combination is necessary in applications such as electrostatic discharge (ESD) control in electronics manufacture, evaluation of equipment for avoidance of factory process electrostatic ignition risks, and avoidance of electrostatic shocks to personnel in working environments. Typical standards use a walking test in which the voltage produced on a subject is evaluated by identifying and measuring the magnitude of the 5 highest "peaks" and "valleys" of the recorded voltage waveform. This method does not lend itself to effective analysis of the risk that the voltage will exceed a hazard threshold. This paper shows the advantages of voltage probability analysis and recommends that the method be adopted in future standards.
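A probability-based view of the recorded body voltage can be as simple as the empirical fraction of samples whose magnitude exceeds a hazard threshold, in contrast to reporting only the five highest peaks. A hedged sketch with a hypothetical synthetic waveform (the paper's actual analysis may use richer distribution fitting):

```python
import numpy as np

def exceedance_probability(voltage, threshold):
    """Fraction of walking-test voltage samples whose magnitude exceeds
    a hazard threshold -- an empirical exceedance probability."""
    v = np.abs(np.asarray(voltage, dtype=float))
    return float((v > threshold).mean())

# Hypothetical body-voltage recording (volts) and a 100 V threshold:
rng = np.random.default_rng(0)
waveform = 60 * rng.standard_normal(50_000)
print(exceedance_probability(waveform, 100.0))
```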
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared with the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared with the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems with the evaluated test strip lots complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, demonstrating that the applied comparison method/system and lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
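As commonly summarized, the ISO 15197:2013 system accuracy criterion requires at least 95% of meter results to fall within ±15 mg/dL of the comparison value below 100 mg/dL, or within ±15% at or above it. A sketch of that check follows; note the full standard imposes additional requirements (e.g., consensus error grid analysis) not shown here.

```python
def within_iso_2013(meter, reference):
    """True if one meter reading meets the ISO 15197:2013 accuracy
    limit as commonly summarized: +/-15 mg/dL when the reference is
    below 100 mg/dL, otherwise +/-15% of the reference."""
    if reference < 100.0:
        return abs(meter - reference) <= 15.0
    return abs(meter - reference) <= 0.15 * reference

def passes_iso_2013(pairs):
    """ISO 15197:2013 requires >= 95% of paired results within limits."""
    hits = sum(within_iso_2013(m, r) for m, r in pairs)
    return hits / len(pairs) >= 0.95

# Hypothetical (meter, reference) pairs in mg/dL:
pairs = [(92, 95), (110, 118), (250, 240)]
print(passes_iso_2013(pairs))
```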
NASA Astrophysics Data System (ADS)
Jin, Huang; Ling, Lin; Jun, Guo; Jianguo, Li; Yongzhong, Wang
2017-11-01
Facing an increasingly severe air pollution situation, China is now actively promoting the evaluation of high-efficiency air pollution control equipment and research on the related national standards. This paper describes the significance and effect of formulating technical requirements for high-efficiency precipitator equipment in national assessment standards for the power industry, as well as the research approach and principles behind these standards. It introduces the qualitative and quantitative evaluation requirements for high-efficiency precipitators used in the power industry and the core technical content, such as testing, calculation, and evaluation methods. The implementation of this series of national standards is intended to guide and promote the production and application of high-efficiency precipitator equipment for the prevention of air pollution in the national power industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Gowans, Dakers; Telarico, Chad
The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.
Generation of CAR T cells for adoptive therapy in the context of glioblastoma standard of care.
Riccione, Katherine; Suryadevara, Carter M; Snyder, David; Cui, Xiuyu; Sampson, John H; Sanchez-Perez, Luis
2015-02-16
Adoptive T cell immunotherapy offers a promising strategy for specifically targeting and eliminating malignant gliomas. T cells can be engineered ex vivo to express chimeric antigen receptors specific for glioma antigens (CAR T cells). The expansion and function of adoptively transferred CAR T cells can be potentiated by the lymphodepletive and tumoricidal effects of standard of care chemotherapy and radiotherapy. We describe a method for generating CAR T cells targeting EGFRvIII, a glioma-specific antigen, and evaluating their efficacy when combined with a murine model of glioblastoma standard of care. T cells are engineered by transduction with a retroviral vector containing the anti-EGFRvIII CAR gene. Tumor-bearing animals are subjected to host conditioning by a course of temozolomide and whole brain irradiation at dose regimens designed to model clinical standard of care. CAR T cells are then delivered intravenously to primed hosts. This method can be used to evaluate the antitumor efficacy of CAR T cells in the context of standard of care.
Engelberg, Jesse A.; Giberson, Richard T.; Young, Lawrence J.T.; Hubbard, Neil E.
2014-01-01
Microwave methods of fixation can dramatically shorten fixation times while preserving tissue structure; however, it remains unclear if adequate tissue antigenicity is preserved. To assess and validate antigenicity, robust quantitative methods and animal disease models are needed. We used two mouse mammary models of human breast cancer to evaluate microwave-assisted and standard 24-hr formalin fixation. The mouse models expressed four antigens prognostic for breast cancer outcome: estrogen receptor, progesterone receptor, Ki67, and human epidermal growth factor receptor 2. Using pathologist evaluation and novel methods of quantitative image analysis, we measured and compared the quality of antigen preservation, percentage of positive cells, and line plots of cell intensity. Visual evaluations by pathologists established that the amounts and patterns of staining were similar in tissues fixed by the different methods. The results of the quantitative image analysis provided a fine-grained evaluation, demonstrating that tissue antigenicity is preserved in tissues fixed using microwave methods. Evaluation of the results demonstrated that a 1-hr, 150-W fixation is better than a 45-min, 150-W fixation followed by a 15-min, 650-W fixation. The results demonstrated that microwave-assisted formalin fixation can standardize fixation times to 1 hr and produce immunohistochemistry that is in every way commensurate with longer conventional fixation methods. PMID:24682322
Evaluation of ASR potential in Wyoming aggregates.
DOT National Transportation Integrated Search
2013-10-01
A comprehensive study was performed to evaluate the ASR reactivity of eight Wyoming aggregates. State-of-the-art and standardized test : methods were performed and results were used to evaluate these aggregate sources. Of the eight aggregates: four a...
NASA Astrophysics Data System (ADS)
Hudoklin, D.; Šetina, J.; Drnovšek, J.
2012-09-01
The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications, such as the development of new fabrics and construction materials, the semiconductor industry, packaging, and vacuum techniques. The demand for this kind of measurement is growing considerably, and many different methods for measuring the WVPR have been developed and standardized within numerous national and international standards. However, comparison of existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for the development of a corresponding reference measurement standard. The paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes, together with a physical model that tries to account for both dynamic and quasi-static methods, the common types of WVPR measurement referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0% for a WVPR of 6.71 mg·h⁻¹ (corresponding to a permeance of 30.4 mg·m⁻²·day⁻¹·hPa⁻¹).
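Under GUM, uncorrelated standard-uncertainty components combine in quadrature before the coverage factor is applied. A sketch follows; the component values are hypothetical, chosen only to reproduce the 3.0% figure quoted above, and are not the paper's actual budget.

```python
from math import sqrt

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (GUM law of propagation for uncorrelated inputs), then apply the
    coverage factor k (k = 2 gives ~95% coverage)."""
    return k * sqrt(sum(u * u for u in components))

# Hypothetical relative components (%): dew-point sensor, flow,
# temperature, and geometry contributions.
print(expanded_uncertainty([1.0, 0.8, 0.5, 0.6]))  # -> 3.0 (%)
```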
Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension.
Jones, Deborah P; Richey, Phyllis A; Alpert, Bruce S
2009-06-01
The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary.
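A Bland-Altman comparison of two reference standards reduces to the mean difference (bias) and its 95% limits of agreement. A minimal sketch, assuming paired arrays of blood pressure indices from the two standardization methods (values hypothetical):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements
    from two methods, as plotted in a Bland-Altman analysis."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical systolic indices from the original vs modified standards:
print(bland_altman_limits([1.02, 0.97, 1.10, 0.97, 1.05],
                          [1.00, 0.98, 1.07, 0.99, 1.04]))
```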
Evaluating the Rank-Ordering Method for Standard Maintaining
ERIC Educational Resources Information Center
Bramley, Tom; Gill, Tim
2010-01-01
The rank-ordering method for standard maintaining was designed for the purpose of mapping a known cut-score (e.g. a grade boundary mark) on one test to an equivalent point on the test score scale of another test, using holistic expert judgements about the quality of exemplars of examinees' work (scripts). It is a novel application of an old…
Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao
2014-04-01
Near infrared (NIR) spectroscopy was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) over different storage periods, with TVB-N used as the freshness index. By comparing the correlation coefficients and standard errors of the calibration and validation sets of models built with single and combined pretreatment methods, different modeling methods, and different wavelength regions, the best TVB-N models for market-sold ice-stored large yellow croaker were established for rapid freshness prediction. The best-performing model was obtained by using normalization by closure (Ncl) with 1st derivative (Dbl) and normalization to unit length (Nle) with 1st derivative as the pretreatment, partial least squares (PLS) as the modeling method, and the wavelength regions of 5 000-7 144 and 7 404-10 000 cm(-1). The calibration model gave a correlation coefficient of 0.992 with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999 with a standard error of prediction of 0.990. The experiment combined several pretreatment methods and selected the best wavelength region, with good results, and the approach shows good prospects for rapid freshness detection and quality evaluation of large yellow croaker in the market.
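The PLS calibration step can be sketched as below. This is a schematic with synthetic data, using scikit-learn's PLSRegression as a stand-in for the authors' chemometrics software; the spectral pretreatments and wavelength-window selection described above are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# Hypothetical data: rows are spectra restricted to the selected
# wavenumber windows; y is the measured TVB-N value per sample.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 500))           # 60 samples x 500 variables
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(60)

pls = PLSRegression(n_components=5).fit(X, y)  # latent-variable regression
y_hat = pls.predict(X).ravel()
print(r2_score(y, y_hat))                      # calibration fit quality
```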
2011-01-01
Background Verbal autopsy methods are critically important for evaluating the leading causes of death in populations without adequate vital registration systems. With a myriad of analytical and data collection approaches, it is essential to create a high quality validation dataset from different populations to evaluate comparative method performance and make recommendations for future verbal autopsy implementation. This study was undertaken to compile a set of strictly defined gold standard deaths for which verbal autopsies were collected to validate the accuracy of different methods of verbal autopsy cause of death assignment. Methods Data collection was implemented in six sites in four countries: Andhra Pradesh, India; Bohol, Philippines; Dar es Salaam, Tanzania; Mexico City, Mexico; Pemba Island, Tanzania; and Uttar Pradesh, India. The Population Health Metrics Research Consortium (PHMRC) developed stringent diagnostic criteria including laboratory, pathology, and medical imaging findings to identify gold standard deaths in health facilities as well as an enhanced verbal autopsy instrument based on World Health Organization (WHO) standards. A cause list was constructed based on the WHO Global Burden of Disease estimates of the leading causes of death, potential to identify unique signs and symptoms, and the likely existence of sufficient medical technology to ascertain gold standard cases. Blinded verbal autopsies were collected on all gold standard deaths. Results Over 12,000 verbal autopsies on deaths with gold standard diagnoses were collected (7,836 adults, 2,075 children, 1,629 neonates, and 1,002 stillbirths). Difficulties in finding sufficient cases to meet gold standard criteria as well as problems with misclassification for certain causes meant that the target list of causes for analysis was reduced to 34 for adults, 21 for children, and 10 for neonates, excluding stillbirths. To ensure strict independence for the validation of methods and assessment of comparative performance, 500 test-train datasets were created from the universe of cases, covering a range of cause-specific compositions. Conclusions This unique, robust validation dataset will allow scholars to evaluate the performance of different verbal autopsy analytic methods as well as instrument design. This dataset can be used to inform the implementation of verbal autopsies to more reliably ascertain cause of death in national health information systems. PMID:21816095
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances; in the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
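The propagation-of-distributions procedure can be sketched as follows: draw correlated Gaussian inputs (e.g., fixed-point resistances), push them through the measurement model, and read off the output mean and standard uncertainty. All numerical values here are hypothetical, not the paper's data.

```python
import numpy as np

def monte_carlo_propagate(mean, cov, model, n=200_000, seed=0):
    """Propagate a multivariate Gaussian over correlated inputs through
    a measurement model; return the output mean and standard uncertainty."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mean, cov, size=n)  # correlated draws
    y = model(x)
    return y.mean(), y.std(ddof=1)

# Toy model: resistance ratio W = R(t) / R(0.01 C), correlated inputs.
mean = np.array([25.000, 27.500])               # ohms (hypothetical)
cov = np.array([[1e-8, 5e-9],
                [5e-9, 1e-8]])                  # hypothetical covariance
m, u = monte_carlo_propagate(mean, cov, lambda x: x[:, 1] / x[:, 0])
print(m, u)
```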
Evaluating Public Libraries Using Standard Scores: The Library Quotient.
ERIC Educational Resources Information Center
O'Connor, Daniel O.
1982-01-01
Describes a method for assessing the performance of public libraries using a standardized scoring system and provides an analysis of public library data from New Jersey as an example. Library standards and the derivation of measurement ratios are also discussed. A 33-item bibliography and three data tables are included. (JL)
An Empirical Comparison of Variable Standardization Methods in Cluster Analysis.
ERIC Educational Resources Information Center
Schaffer, Catherine M.; Green, Paul E.
1996-01-01
The common marketing research practice of standardizing the columns of a persons-by-variables data matrix prior to clustering the entities corresponding to the rows was evaluated with 10 large-scale data sets. Results indicate that the column standardization practice may be problematic for some kinds of data that marketing researchers used for…
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate the percentage of US producers and milk not currently meeting the proposed bulk tank somatic cell counts (BTSCC) limits. Five different limits of BTSCC were evaluated for compliance: 750K, 600K, 500K, and 400K using the current US methods and 400K using th...
Analytical evaluation of current starch methods used in the international sugar industry: Part I.
Cole, Marsha; Eggleston, Gillian; Triplett, Alexa
2017-08-01
Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and similar solubilization strategies, but they had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. The type of starch standard, solubilization approach, iodometric reagents, and wavelength detection all affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately measured by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were accurately measured only by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, the starch standard, solubilization approach, and wavelength detection had the greatest effect on the sensitivity, accuracy/precision, and limits of detection/quantification of the current industry starch methods. Published by Elsevier Ltd.
A strategy for evaluating pathway analysis methods.
Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques
2017-10-13
Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency of the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method to a sub-dataset of the original dataset. In contrast, discrimination measures specificity-the degree to which the perturbed pathways identified by a particular method to a dataset from one experiment differ from those identifying by the same method to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy. Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
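Both metrics reduce to overlap between sets of identified pathways. The paper's exact formulas may differ; the schematic set-overlap version below, with hypothetical pathway names, illustrates the idea of recall (consistency on sub-sampled data) versus discrimination (specificity across unrelated experiments).

```python
def overlap_fraction(pathways_a, pathways_b):
    """Fraction of pathways from analysis A also found by analysis B."""
    a, b = set(pathways_a), set(pathways_b)
    return len(a & b) / len(a) if a else 0.0

# Recall-style check: full dataset vs a sub-sample of the same data.
recall = overlap_fraction(["p53", "apoptosis", "WNT"], ["p53", "WNT"])

# Discrimination-style check: two unrelated experiments should share
# few pathways, so 1 - overlap is high when the method is specific.
discrimination = 1.0 - overlap_fraction(["p53", "apoptosis"], ["olfaction"])

print(recall, discrimination)
```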
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. In a 2006 report, the recommendation was made that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method specifically designed for low-gradient wadeable streams and rivers that focused more on stream edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions located in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends that would suggest the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method was used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.
Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y
2015-11-01
Body surface area is important in determining the severity of psoriasis; however, an objective, reliable, and practical method for this purpose is still needed. We performed computer image analysis (CIA) of psoriatic area using the ImageJ freeware to determine whether this method could be used for objective evaluation of psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, gold standard). The results of the three methods were analyzed, with reliability and affecting factors evaluated. Both the T- and E-methods correlated strongly with the M-method, with the T-method showing a slightly stronger correlation, and both had good consistency between evaluators. All three methods were able to detect the change in psoriatic area after treatment, although the E-method tended to overestimate. CIA with the ImageJ freeware is reliable and practicable for quantitatively assessing the lesion area of psoriasis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
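At its core, a semi-automatic threshold method like the T-method counts pixels that pass a chosen intensity threshold. A minimal hedged sketch, assuming a grayscale NumPy image of the region of interest (not the authors' ImageJ workflow):

```python
import numpy as np

def lesion_area_fraction(image, threshold):
    """Estimate lesion area as the fraction of ROI pixels whose value
    passes a chosen threshold (threshold-based segmentation)."""
    mask = np.asarray(image, dtype=float) >= threshold
    return mask.sum() / mask.size

# Toy check: uniform noise thresholded at 0.8 covers ~20% of the image.
img = np.random.default_rng(2).random((512, 512))
print(lesion_area_fraction(img, 0.8))
```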
Hayashi, Kuniki; Hoshino, Tadashi; Yanai, Mitsuru; Tsuchiya, Tatsuyuki; Kumasaka, Kazunari; Kawano, Kinya
2004-06-01
It is well known that serious method-related differences exist in serum CA19-9 results, and the necessity of standardization has been pointed out. In this study, differences in serum tumor marker CA19-9 levels obtained by various immunoassay kits (CLEIA, FEIA, LPIA and RIA) were evaluated in sixty-seven clinical samples and five calibrators, and the possibility of improving the inter-method differences was examined, not only for clinical samples but also for calibrators. We defined an assumed standard material based on one of the calibrators and recalculated the serum CA19-9 levels for three different measurement methods against it, approximating the CA19-9 values by this method. The results suggest that CA19-9 values recalculated against the assumed standard material could correct between-method and between-laboratory discrepancies, in particular systematic errors.
Spike shape analysis of electromyography for parkinsonian tremor evaluation.
Marusiak, Jarosław; Andrzejewska, Renata; Świercz, Dominika; Kisiel-Sajewicz, Katarzyna; Jaskólska, Anna; Jaskólski, Artur
2015-12-01
Standard electromyography (EMG) parameters have limited utility for evaluation of Parkinson disease (PD) tremor. Spike shape analysis (SSA) EMG parameters are more sensitive than standard EMG parameters for studying motor control mechanisms in healthy subjects. SSA of EMG has not been used to assess parkinsonian tremor. This study assessed the utility of SSA and standard time and frequency analysis for electromyographic evaluation of PD-related resting tremor. We analyzed 1-s periods of EMG recordings to detect nontremor and tremor signals in relaxed biceps brachii muscle of seven mild to moderate PD patients. SSA revealed higher mean spike amplitude, duration, and slope and lower mean spike frequency in tremor signals than in nontremor signals. Standard EMG parameters (root mean square, median, and mean frequency) did not show differences between the tremor and nontremor signals. SSA of EMG data is a sensitive method for parkinsonian tremor evaluation. © 2015 Wiley Periodicals, Inc.
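For context, a crude version of two spike-shape statistics (mean spike amplitude and spike frequency) can be computed with simple peak detection over a 1-s epoch. The published SSA definitions, which also quantify spike duration and slope, are more involved; this is an illustrative simplification.

```python
import numpy as np
from scipy.signal import find_peaks

def spike_amplitude_and_rate(emg, fs, min_height):
    """Mean peak amplitude and spikes per second in an EMG epoch,
    detecting spikes as local maxima above a height threshold."""
    peaks, props = find_peaks(emg, height=min_height)
    rate = len(peaks) / (len(emg) / fs)
    amp = float(props["peak_heights"].mean()) if len(peaks) else 0.0
    return amp, rate

# Hypothetical 1-s epoch at 2 kHz: noise plus sparse tremor bursts.
rng = np.random.default_rng(3)
t = np.arange(0, 1, 1 / 2000.0)
emg = 0.05 * rng.standard_normal(t.size)
emg += 0.4 * (np.sin(2 * np.pi * 5 * t) > 0.99)
print(spike_amplitude_and_rate(emg, fs=2000.0, min_height=0.2))
```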
Dong, Ren G; Sinsel, Erik W; Welcome, Daniel E; Warren, Christopher; Xu, Xueyan S; McDowell, Thomas W; Wu, John Z
2015-09-01
The hand coordinate systems for measuring vibration exposures and biodynamic responses have been standardized, but they are not actually used in many studies. This contradicts the purpose of the standardization. The objectives of this study were to identify the major sources of this problem, and to help define or identify better coordinate systems for the standardization. This study systematically reviewed the principles and definition methods, and evaluated typical hand coordinate systems. This study confirms that, as accelerometers remain the major technology for vibration measurement, it is reasonable to standardize two types of coordinate systems: a tool-based basicentric (BC) system and an anatomically based biodynamic (BD) system. However, these coordinate systems are not well defined in the current standard. Definition of the standard BC system is confusing, and it can be interpreted differently; as a result, it has been inconsistently applied in various standards and studies. The standard hand BD system is defined using the orientation of the third metacarpal bone. It is neither convenient nor defined based on important biological or biodynamic features. This explains why it is rarely used in practice. To resolve these inconsistencies and deficiencies, we proposed a revised method for defining the realistic handle BC system and an alternative method for defining the hand BD system. A fingertip-based BD system for measuring the principal grip force is also proposed based on an important feature of the grip force confirmed in this study.
Pansharpening on the Narrow VNIR and SWIR Spectral Bands of Sentinel-2
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.; Karantzalos, K.
2016-06-01
In this paper, results from the evaluation of several state-of-the-art pansharpening techniques are presented for the VNIR and SWIR bands of Sentinel-2. A pansharpening procedure is also proposed that aims to respect the closest spectral similarities between the higher and lower resolution bands. The evaluation included 21 different fusion algorithms and three evaluation frameworks based both on standard quantitative image similarity indexes and on qualitative evaluation by remote sensing experts. The overall analysis indicated that the remote sensing experts disagreed with the outcomes and method ranking of the quantitative assessment: the image quality similarity indexes and quantitative evaluation frameworks from the literature, based on both full- and reduced-resolution data, failed to properly evaluate the spatial information that was injected into the lower resolution images. Regarding the SWIR bands, none of the methods managed to deliver significantly better results than a standard bicubic interpolation of the original low resolution bands.
Advancing Resident Assessment in Graduate Medical Education
Swing, Susan R.; Clyman, Stephen G.; Holmboe, Eric S.; Williams, Reed G.
2009-01-01
Background The Outcome Project requires high-quality assessment approaches to provide reliable and valid judgments of the attainment of competencies deemed important for physician practice. Intervention The Accreditation Council for Graduate Medical Education (ACGME) convened the Advisory Committee on Educational Outcome Assessment in 2007–2008 to identify high-quality assessment methods. The assessments selected by this body would form a core set that could be used by all programs in a specialty to assess resident performance and enable initial steps toward establishing national specialty databases of program performance. The committee identified a small set of methods for provisional use and further evaluation. It also developed frameworks and processes to support the ongoing evaluation of methods and the longer-term enhancement of assessment in graduate medical education. Outcome The committee constructed a set of standards, a methodology for applying the standards, and grading rules for their review of assessment method quality. It developed a simple report card for displaying grades on each standard and an overall grade for each method reviewed. It also described an assessment system of factors that influence assessment quality. The committee proposed a coordinated, national-level infrastructure to support enhancements to assessment, including method development and assessor training. It recommended the establishment of a new assessment review group to continue its work of evaluating assessment methods. The committee delivered a report summarizing its activities and 5 related recommendations for implementation to the ACGME Board in September 2008. PMID:21975993
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting their added value over more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance, while Random Forests, Support Vector Machines, and Logistic Regression performed around the mean. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). In summary, this work offers a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning, by providing the data and the protocols.
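The Matthews Correlation Coefficient used as a standardized metric here is computed from the 2x2 confusion matrix. A minimal sketch with hypothetical counts:

```python
from math import sqrt

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from a 2x2 confusion matrix:
    +1 is perfect agreement, 0 is chance level, -1 is total disagreement."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=80, fp=10, tn=90, fn=20))  # ~0.70 for these toy counts
```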
NASA Astrophysics Data System (ADS)
Stare, E.; Beges, G.; Drnovsek, J.
2006-07-01
This paper presents the results of research into the measurement of the resistance of solid insulating materials to tracking. Two types of tracking index were investigated: the proof tracking index (PTI) and the comparative tracking index (CTI). Evaluation of the measurement uncertainty in a case study was performed using a test method in accordance with the IEC 60112 standard; within the scope of the tests performed here, this test method was used to ensure the safety of electrical appliances. According to the EN ISO/IEC 17025 standard, the evaluation of the measurement uncertainty of a test method should be carried out in the process of conformity assessment. In the present article, possible influential parameters that are in accordance with the third and fourth editions of the standard IEC 60112 are discussed. The differences, ambiguities, or lack of guidance in the two editions of the standard are described in the article 'Ambiguities in technical standards—case study IEC 60112—measuring the resistance of solid isolating materials to tracking' (submitted for publication). Several hundred measurements were taken in the present experiments to form the basis for the results and conclusions presented. A specific problem of the test (according to the IEC 60112 standard) is the great variety of influential physical parameters (mechanical, electrical, chemical, etc) that can affect the results; the article therefore ends with a histogram of the contributions to the measurement uncertainty.
Determination of antenna factors using a three-antenna method at open-field test site
NASA Astrophysics Data System (ADS)
Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao
1992-09-01
Recently, NIST has used the three-antenna method for calibration of the antenna factor of antennas used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and it can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
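The core of the three-antenna method is a small linear system: each pairwise measurement yields the sum of two antenna factors (in dB), so three pairs determine all three factors without any standard antenna. The sketch below shows only that linear-system core; in practice each measured term M_ij also folds in site attenuation and frequency-dependent constants defined by the applicable standard, and the input values here are hypothetical.

```python
def three_antenna_factors(m12, m13, m23):
    """Solve the pairwise sums AF_i + AF_j = M_ij (dB) produced by the
    three-antenna method for the individual antenna factors."""
    af1 = (m12 + m13 - m23) / 2.0
    af2 = (m12 + m23 - m13) / 2.0
    af3 = (m13 + m23 - m12) / 2.0
    return af1, af2, af3

# Hypothetical pairwise measurement terms (dB) at a single frequency:
print(three_antenna_factors(30.1, 31.4, 29.7))
```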
Sinigalliano, Christopher D.; Ervin, Jared S.; Van De Werfhorst, Laurie C.; Badgley, Brian D.; Ballesté, Elisenda; Bartkowiak, Jakob; Boehm, Alexandria B.; Byappanahalli, Muruleedhara N.; Goodwin, Kelly D.; Gourmelon, Michèle; Griffith, John; Holden, Patricia A.; Jay, Jenny; Layton, Blythe; Lee, Cheonghoon; Lee, Jiyoung; Meijer, Wim G.; Noble, Rachel; Raith, Meredith; Ryu, Hodon; Sadowsky, Michael J.; Schriewer, Alexander; Wang, Dan; Wanless, David; Whitman, Richard; Wuertz, Stefan; Santo Domingo, Jorge W.
2013-01-01
Here we report results from a multi-laboratory (n = 11) evaluation of four different PCR methods targeting the 16S rRNA gene of Catellicoccus marimammalium originally developed to detect gull fecal contamination in coastal environments. The methods included a conventional end-point PCR method, a SYBR® Green qPCR method, and two TaqMan® qPCR methods. Different techniques for data normalization and analysis were tested. Data analysis methods had a pronounced impact on assay sensitivity and specificity calculations. Across-laboratory standardization of metrics including the lower limit of quantification (LLOQ), target detected but not quantifiable (DNQ), and target not detected (ND) significantly improved results compared to results submitted by individual laboratories prior to definition standardization. The unit of measure used for data normalization also had a pronounced effect on measured assay performance. Data normalization to DNA mass improved quantitative method performance as compared to enterococcus normalization. The MST methods tested here were originally designed for gulls but were found in this study to also detect feces from other birds, particularly feces composited from pigeons. Sequencing efforts showed that some pigeon feces from California contained sequences similar to C. marimammalium found in gull feces. These data suggest that the prevalence, geographic scope, and ecology of C. marimammalium in host birds other than gulls require further investigation. This study represents an important first step in the multi-laboratory assessment of these methods and highlights the need to broaden and standardize additional evaluations, including environmentally relevant target concentrations in ambient waters from diverse geographic regions.
Liang, M H
2000-09-01
Although widely used and reported in research for the evaluation of groups, measures of health status and health-related quality of life have had little application in clinical practice for the assessment of individual patients. One of the principal barriers is the demonstration that these measures add clinically significant information to measures of function or symptoms alone. Here, we review the methods for evaluation of construct validity in longitudinal studies and make recommendations for nomenclature, reporting of study results, and a future research agenda. Analytical review. The terms "sensitivity" and "responsiveness" have been used interchangeably, and there are few studies that evaluate the extent to which health status or health-related quality-of-life measures capture clinically important changes ("responsiveness"). Current methods of evaluating responsiveness are neither standardized nor themselves evaluated. Approaches for the assessment of a clinically significant or meaningful change are described; rather than normative information, however, standardized transition questions are proposed, to be reported routinely and as separate axes of description to capture individual perceptions. Research on methods to assess the subject's evaluation of the importance and magnitude of a measured change is critical if health status and health-related quality-of-life measures are to have an impact on patient care.
Lim, Maria A; Louie, Brenton; Ford, Daniel; Heath, Kyle; Cha, Paulyn; Betts-Lacroix, Joe; Lum, Pek Yee; Robertson, Timothy L; Schaevitz, Laura
2017-01-01
Despite a broad spectrum of anti-arthritic drugs currently on the market, there is a constant demand to develop improved therapeutic agents. Efficient compound screening and rapid evaluation of treatment efficacy in animal models of rheumatoid arthritis (RA) can accelerate the development of clinical candidates. Compound screening by evaluation of disease phenotypes in animal models facilitates preclinical research by enhancing understanding of human pathophysiology; however, there is still a continuous need to improve methods for evaluating disease. Current clinical assessment methods are challenged by the subjective nature of scoring-based methods, time-consuming longitudinal experiments, and the requirement for better functional readouts with relevance to human disease. To address these needs, we developed a low-touch, digital platform for phenotyping preclinical rodent models of disease. As a proof of concept, we utilized the rat collagen-induced arthritis (CIA) model of RA and developed the Digital Arthritis Index (DAI), an objective and automated behavioral metric that does not require human-animal interaction during the measurement and calculation of disease parameters. The DAI detected the development of arthritis similarly to standard in vivo methods, including ankle joint measurements and arthritis scores, and demonstrated a positive correlation with ankle joint histopathology. The DAI also determined responses to multiple standard-of-care (SOC) treatments and nine repurposed compounds predicted by the SMarTR™ Engine to have varying degrees of impact on RA. The disease profiles generated by the DAI complemented those generated by standard methods. The DAI is a highly reproducible and automated approach that can be used in conjunction with standard methods for detecting RA disease progression and conducting phenotypic drug screens.
Kobayashi, Kazuhiro; Hama, Takanori; Murakami, Kasumi; Ogawa, Rei
2016-01-01
Objective: In this study, we evaluated the effect of scalp massage on hair in Japanese males and the effect of stretching forces on human dermal papilla cells in vitro. Methods: Nine healthy men received 4 minutes of standardized scalp massage per day for 24 weeks using a scalp massage device. Total hair number, hair thickness, and hair growth rate were evaluated. The mechanical effect of scalp massage on subcutaneous tissue was analyzed using a finite element method. To evaluate the effect of mechanical forces, human dermal papilla cells were cultured under a 72-hour stretching cycle. Changes in gene expression were analyzed using DNA microarray analysis. In addition, expression of hair cycle-related genes including IL6, NOGGIN, BMP4, and SMAD4 was evaluated using real-time reverse transcription-polymerase chain reaction. Results: Standardized scalp massage resulted in increased hair thickness 24 weeks after initiation of massage (0.085 ± 0.003 mm vs 0.092 ± 0.001 mm). The finite element method showed that scalp massage caused z-direction displacement and von Mises stress on subcutaneous tissue. In vitro, DNA microarray analysis showed significant gene expression changes compared with non-stretched human dermal papilla cells: a total of 2655 genes were upregulated and 2823 genes were downregulated. Real-time reverse transcription-polymerase chain reaction demonstrated increased expression of hair cycle-related genes such as NOGGIN, BMP4, SMAD4, and IL6ST and decreased expression of hair loss-related genes such as IL6. Conclusions: Stretching forces result in changes in gene expression in human dermal papilla cells. Standardized scalp massage is a way to transmit mechanical stress to human dermal papilla cells in subcutaneous tissue. Hair thickness was shown to increase with standardized scalp massage. PMID:26904154
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James
2017-01-01
Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used a tailored QUADAS-2 tool. In the absence of a reference standard, we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However, the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
Hu, Beizhen; Cai, Haijiang; Song, Weihua
2012-09-01
A method was developed for the determination of eight pesticide residues (fipronil, imidacloprid, acetamiprid, buprofezin, triadimefon, triadimenol, profenofos, pyridaben) in tea by liquid chromatography-tandem mass spectrometry. The sample was extracted by accelerated solvent extraction with acetone-dichloromethane (1:1, v/v) as solvent, and the extract was then cleaned up with a Carb/NH2 solid phase extraction (SPE) column. The separation was performed on a Hypersil Gold C18 column (150 mm x 2.1 mm, 5 microm) with gradient elution of acetonitrile and 0.1% formic acid. The eight pesticides were determined in electrospray ionization (ESI) and multiple reaction monitoring (MRM) modes. The analytes were quantified by a matrix-matched internal standard method for imidacloprid and acetamiprid, and by a matrix-matched external standard method for the other pesticides. The calibration curves showed good linearity in the range of 1-100 microg/L for fipronil and 5-200 microg/L for the other pesticides. The limits of quantification (LOQs, S/N > 10) were 2 microg/kg for fipronil and 10 microg/kg for the other pesticides. The average recoveries ranged from 75.5% to 115.0% with relative standard deviations of 2.7%-7.7% at spiked levels of 2, 5, and 50 microg/kg for fipronil and 10, 50, and 100 microg/kg for the other pesticides. The uncertainty of the results was evaluated according to JJF 1059-1999 "Evaluation and Expression of Uncertainty in Measurement". The components of measurement uncertainty, involving the standard solution, weighing of the sample, sample pre-treatment, and the measurement repeatability of the equipment, were evaluated. The results showed that the measurement uncertainty is mainly due to sample pre-treatment, the standard curves, and the measurement repeatability of the equipment. The method developed is suitable for the confirmation and quantification of these pesticides in tea.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age 23.7 years) and older (mean age 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli.
STANDARDIZATION AND VALIDATION OF MICROBIOLOGICAL METHODS FOR EXAMINATION OF BIOSOLIDS
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within a complex matrix. Implications of ...
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Wang, Li-Li; Zhang, Yun-Bin; Sun, Xiao-Ya; Chen, Sui-Qing
2016-05-08
To establish a quantitative analysis of multi-components by the single marker (QAMS) method for quality evaluation, and to validate its feasibility by the simultaneous quantitative assay of four main components in Linderae Reflexae Radix. Four main components, pinostrobin, pinosylvin, pinocembrin, and 3,5-dihydroxy-2-(1-p-menthenyl)-trans-stilbene, were selected as analytes to evaluate the quality by RP-HPLC coupled with a UV detector. The method was evaluated by comparing the quantitative results of the external standard method and QAMS on different HPLC systems. The results showed no significant differences in the quantitative results for the four components of Linderae Reflexae Radix determined by the external standard method and QAMS (RSD <3%). The contents of the four analytes (pinosylvin, pinocembrin, pinostrobin, and Reflexanbene I) in Linderae Reflexae Radix were determined against the single marker pinosylvin. The fingerprints were determined on Shimadzu LC-20AT and Waters e2695 HPLC systems equipped with three different columns.
Visualization of postoperative anterior cruciate ligament reconstruction bone tunnels
2011-01-01
Background and purpose Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans, and a 3-dimensional (3D) virtual reality (VR) approach in visualizing and measuring ACL reconstruction bone tunnel placement. Methods 50 consecutive patients who underwent single-bundle ACL reconstructions were evaluated postoperatively by standard radiographs, CT scans, and 3D VR images. Tibial and femoral tunnel positions were measured by 2 observers using the traditional methods of Amis, Aglietti, Hoser, Stäubli, and the method of Benereau for the VR approach. Results The tunnel was visualized in 50–82% of the standard radiographs and in 100% of the CT scans and 3D VR images. Using the intraclass correlation coefficient (ICC), the inter- and intraobserver agreement was between 0.39 and 0.83 for the standard femoral and tibial radiographs. CT scans showed an ICC range of 0.49–0.76 for the inter- and intraobserver agreement. The agreement in 3D VR was almost perfect, with an ICC of 0.83 for the femur and 0.95 for the tibia. Interpretation CT scans and 3D VR images are more reliable in assessing postoperative bone tunnel placement following ACL reconstruction than standard radiographs. PMID:21999625
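The interobserver agreement statistic used above, the intraclass correlation coefficient, can be computed from a patients-by-observers score matrix. Below is a minimal Python sketch of the two-way random-effects, single-measure form (Shrout-Fleiss ICC(2,1)); the abstract does not state which ICC form was used, so this choice and the example scores are assumptions:

    import numpy as np

    def icc_2_1(X):
        """ICC(2,1): X is an (n targets x k observers) score matrix."""
        X = np.asarray(X, dtype=float)
        n, k = X.shape
        grand = X.mean()
        ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)              # between-targets mean square
        ms_cols = ss_cols / (k - 1)              # between-observers mean square
        ms_err = ss_err / ((n - 1) * (k - 1))    # residual mean square
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Two observers measuring tunnel position (illustrative units) in 5 patients
    scores = [[42.1, 43.0], [38.5, 39.2], [45.0, 44.1], [40.2, 41.0], [36.8, 37.5]]
    print(round(icc_2_1(scores), 2))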
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Frollo, Ivan
2017-12-01
The paper focuses on two methods for evaluating the success of enhancement of speech signals recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The experiments performed confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
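As a rough illustration of the GMM-based classification described above, the sketch below fits one Gaussian mixture per class and assigns a recording to the class with the higher average log-likelihood (Python with scikit-learn; the features and class labels are synthetic stand-ins, not the study's acoustic features):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Hypothetical spectral feature vectors for two classes of recordings
    enhanced = rng.normal(0.0, 1.0, size=(200, 5))     # "enhanced" speech
    unenhanced = rng.normal(1.5, 1.2, size=(200, 5))   # "unenhanced" speech

    # Fit one GMM per class; classify by maximum average log-likelihood
    gmm_enh = GaussianMixture(n_components=4, random_state=0).fit(enhanced)
    gmm_raw = GaussianMixture(n_components=4, random_state=0).fit(unenhanced)

    def classify(x):
        x = np.atleast_2d(x)
        return "enhanced" if gmm_enh.score(x) > gmm_raw.score(x) else "unenhanced"

    print(classify(rng.normal(0.0, 1.0, size=5)))      # expected: enhanced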
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparisons. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information for comparing the analytical performance of the laboratories and for identifying possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of the different analytical methods used in the SRS analyses.
Adaptability and stability of soybean cultivars for grain yield and seed quality.
Silva, K B; Bruzi, A T; Zambiazzi, E V; Soares, I O; Pereira, J L A R; Carvalho, M L M
2017-05-10
This study aimed at verifying the adaptability and stability of soybean cultivars, considering grain yield and seed quality, adopting univariate and multivariate approaches. The experiments were conducted in three environments, in the 2013/2014 and 2014/2015 crop seasons, in the counties of Inconfidentes, Lavras, and Patos de Minas, in Minas Gerais State, Brazil. We evaluated 17 commercial soybean cultivars. For the adaptability and stability evaluations, the Graphic and GGE biplot methods were employed. Previously, a selection index was estimated based on the sum of the standardized variables (Z index). The data for grain yield, thousand-grain mass, the uniformity test (sieve retention), and the germination test were standardized (Zij) per cultivar. With the sum of Zij, we obtained the selection index for the four traits evaluated together. In the Graphic method evaluation, cultivars NA 7200 RR and CD 2737 RR presented the highest values of the selection index Z. By the GGE biplot method, we verified that cultivar NA 7200 RR presented greater stability in both univariate evaluations, for grain yield and for the selection index Z.
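A minimal sketch of the Z-index construction described above, assuming all traits are higher-is-better and using purely illustrative trait values:

    import numpy as np

    # Hypothetical trait means for 4 cultivars: grain yield (kg/ha),
    # thousand-grain mass (g), sieve retention (%), germination (%)
    traits = np.array([
        [3400., 152., 88., 92.],
        [3150., 160., 85., 90.],
        [3600., 148., 90., 94.],
        [3050., 155., 83., 89.],
    ])

    # Z index: standardize each trait across cultivars, then sum per cultivar
    z = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)
    z_index = z.sum(axis=1)
    print(np.argsort(z_index)[::-1])   # cultivar ranking, best to worst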
SETs: stand evaluation tools: II. tree value conversion standards for hardwood sawtimber
Joseph J. Mendel; Paul S. DeBald; Martin E. Dale
1976-01-01
Tree quality index tables are presented for 12 important hardwood species of the oak-hickory forest. From these, tree value conversion standards are developed for each species, log grade, merchantable height, and diameter at breast height. The method of calculating tree value conversion standards and adapting them to different conditions is explained. A computer...
Kolaczinski, Jan; Hanson, Kara
2006-01-01
Background Insecticide-treated nets (ITNs) are an effective and cost-effective means of malaria control. Scaling-up coverage of ITNs is challenging. It requires substantial resources and there are a number of strategies to choose from. Information on the cost of different strategies is still scarce. To guide the choice of a delivery strategy (or combination of strategies), reliable and standardized cost information for the different options is required. Methods The electronic online database PubMed was used for a systematic search of the published English literature on costing and economic evaluations of ITN distribution programmes. The keywords used were: net, bednet, insecticide, treated, ITN, cost, effectiveness, economic and evaluation. Identified papers were analysed to determine and evaluate the costing methods used. Methods were judged against existing standards of cost analysis to arrive at proposed standards for undertaking and presenting cost analyses. Results Cost estimates were often not readily comparable or could not be adjusted to a different context. This resulted from the wide range of methods applied and measures of output chosen. Most common shortcomings were the omission of certain costs and failure to adjust financial costs to generate economic costs. Generalisability was hampered by authors not reporting quantities and prices of resources separately and not examining the sensitivity of their results to variations in underlying assumptions. Conclusion The observed shortcomings have arisen despite the abundance of literature and guidelines on costing of health care interventions. This paper provides ITN specific recommendations in the hope that these will help to standardize future cost estimates. PMID:16681856
Pattison, Kira M.; Brooks, Dina; Cameron, Jill I.
2015-01-01
Background The use of standardized assessment tools is an element of evidence-informed rehabilitation, but physical therapists report administering these tools inconsistently poststroke. An in-depth understanding of physical therapists' approaches to walking assessment is needed to develop strategies to advance assessment practice. Objectives The objective of this study was to explore the methods physical therapists use to evaluate walking poststroke, reasons for selecting these methods, and the use of assessment results in clinical practice. Design A qualitative descriptive study involving semistructured telephone interviews was conducted. Methods Registered physical therapists assessing a minimum of 10 people with stroke per year in Ontario, Canada, were purposively recruited from acute care, rehabilitation, and outpatient settings. Interviews were audiotaped and transcribed verbatim. Transcripts were coded line by line by the interviewer. Credibility was optimized through triangulation of analysts, audit trail, and collection of field notes. Results Study participants worked in acute care (n=8), rehabilitation (n=11), or outpatient (n=9) settings and reported using movement observation and standardized assessment tools to evaluate walking. When selecting methods to evaluate walking, physical therapists described being influenced by a hierarchy of factors. Factors included characteristics of the assessment tool, the therapist, the workplace, and patients, as well as influential individuals or organizations. Familiarity exerted the primary influence on adoption of a tool into a therapist's assessment repertoire, whereas patient factors commonly determined daily use. Participants reported using the results from walking assessments to communicate progress to the patient and health care professionals. Conclusions Multilevel factors influence physical therapists' adoption and daily administration of standardized tools to assess walking. Findings will inform knowledge translation efforts aimed at increasing the standardized assessment of walking poststroke. PMID:25929532
NASA Astrophysics Data System (ADS)
Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.
2018-05-01
This article reports a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and the type of movement on the accuracy of the vehicle position estimate are investigated. The accuracy of the method was evaluated by separating data from a semi-empirical movement model into different types of movement.
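Since the abstract names the standard Kalman filter as its core, the following self-contained Python sketch shows a textbook linear Kalman filter with a constant-velocity model smoothing noisy two-dimensional position fixes; the state model, noise covariances, and measurements are illustrative assumptions, not the authors' configuration:

    import numpy as np

    dt = 1.0                                    # time step (s)
    F = np.array([[1, 0, dt, 0],                # constant-velocity state model
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],                 # we observe x, y position only
                  [0, 1, 0, 0]], float)
    Q = 0.01 * np.eye(4)                        # process noise (tuning parameter)
    R = 25.0 * np.eye(2)                        # measurement noise (tuning parameter)

    x = np.zeros(4)                             # state: [x, y, vx, vy]
    P = 100.0 * np.eye(4)                       # initial state covariance

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with position measurement z = [x_meas, y_meas]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [np.array([10.1, -0.4]), np.array([12.2, 0.3]), np.array([13.8, 0.1])]:
        x, P = kalman_step(x, P, z)
    print(x[:2])                                # filtered position estimate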
Standardized Methods for Electronic Shearography
NASA Technical Reports Server (NTRS)
Lansing, Matthew D.
1997-01-01
Research was conducted in development of operating procedures and standard methods to evaluate fiber reinforced composite materials, bonded or sprayed insulation, coatings, and laminated structures with MSFC electronic shearography systems. Optimal operating procedures were developed for the Pratt and Whitney Electronic Holography/Shearography Inspection System (EH/SIS) operating in shearography mode, as well as the Laser Technology, Inc. (LTI) SC-4000 and Ettemeyer SHS-94 ISTRA shearography systems. Operating practices for exciting the components being inspected were studied, including optimal methods for transient heating with heat lamps and other methods as appropriate to enhance inspection capability.
Sonsmann, F K; Strunk, M; Gediga, K; John, C; Schliemann, S; Seyfarth, F; Elsner, P; Diepgen, T L; Kutz, G; John, S M
2014-05-01
To date, there are no legally binding requirements concerning product testing in cosmetics. This leads to various manufacturer-specific test methods and an absence of transparent information on skin cleansing products. A standardized in vivo test procedure for assessing cleansing efficacy and the corresponding barrier impairment caused by the cleansing process is needed, especially in the occupational context, where repeated hand washing may be performed at short intervals. For the standardization of the cleansing procedure, an Automated Cleansing Device (ACiD) was designed and evaluated. Different smooth washing surfaces for the ACiD equipment (including goat hair, felt, and felt covered with nitrile caps) were evaluated regarding their skin compatibility. ACiD allows an automated, fully standardized skin washing procedure. Felt covered with nitrile as the washing surface of the rotating washing units leads to a homogeneous cleansing result and does not cause detectable skin irritation, either clinically or as assessed by skin bioengineering methods (transepidermal water loss, chromametry). ACiD may be useful for standardized evaluation of cleansing effectiveness and parallel assessment of the corresponding irritancy potential of industrial skin cleansers. This will allow the efficacy and safety of industrial skin cleansers to be objectified, thus enabling market transparency and facilitating a rational choice of products.
Proposed test method for and evaluation of wheelchair seating system (WCSS) crashworthiness.
van Roosmalen, L; Bertocci, G; Ha, D R; Karg, P; Szobota, S
2000-01-01
Safety of motor vehicle seats is of great importance in providing crash protection to the occupant. An increasing number of wheelchair users use their wheelchairs as motor vehicle seats when traveling. A voluntary standard requires that compliant wheelchairs be dynamically sled impact tested. However, testing to evaluate the crashworthiness of add-on wheelchair seating systems (WCSS) independent of their wheelchair frame is not addressed by this standard. To address this need, this study developed a method to evaluate the crashworthiness of WCSS independently of their wheelchair frames. Federal Motor Vehicle Safety Standards (FMVSS) 207 test protocols, used to test the strength of motor vehicle seats, were modified and used to test the strength of three WCSS. Forward and rearward loads were applied at the WCSS center of gravity (CGSS), and a moment was applied at the uppermost point of the seat back. Each of the three tested WCSS met the strength requirements of FMVSS 207. Wheelchair seat-back stiffness was also investigated and compared to motor vehicle seat-back stiffness.
Vavalle, Nicholas A; Jelen, Benjamin C; Moreno, Daniel P; Stitzel, Joel D; Gayzik, F Scott
2013-01-01
Objective evaluation methods of time history signals are used to quantify how well simulated human body responses match experimental data. As the use of simulations grows in the field of biomechanics, there is a need to establish standard approaches for comparisons. There are 2 aims of this study. The first is to apply 3 objective evaluation methods found in the literature to a set of data from a human body finite element model. The second is to compare the results of each method, examining how they are correlated to each other and the relative strengths and weaknesses of the algorithms. In this study, the methods proposed by Sprague and Geers (magnitude and phase error, SGM and SGP), Rhule et al. (cumulative standard deviation, CSD), and Gehre et al. (CORrelation and Analysis, or CORA: size, phase, shape, corridor) were compared. A 40 kph frontal sled test presented by Shaw et al. was simulated using the Global Human Body Models Consortium midsized male full-body finite element model (v. 3.5). Mean and standard deviation experimental data (n = 5) from Shaw et al. were used as the benchmark. Simulated data were output from the model at the appropriate anatomical locations for kinematic comparison. Force data were output at the seat belts, seat pan, knee, and foot restraints. Objective comparisons from 53 time history data channels were compared to the experimental results. To compare the different methods, all objective comparison metrics were cross-plotted and linear regressions were calculated. The following ratings were found to be statistically significantly correlated (P < .01): SGM and CORA size, R2 = 0.73; SGP and CORA shape, R2 = 0.82; and CSD and CORA's corridor factor, R2 = 0.59. Relative strengths of the correlated ratings were then investigated. For example, though correlated to CORA size, SGM carries a sign to indicate whether the simulated response is greater than or less than the benchmark signal. A further analysis of the advantages and drawbacks of each method is discussed. The results demonstrate that a single metric is insufficient to provide a complete assessment of how well the simulated results match the experiments. The CORA method provided the most comprehensive evaluation of the signal. Regardless of the method selected, one primary recommendation of this work is that for any comparison, the results should be reported to provide separate assessments of a signal's match to experimental variance, magnitude, phase, and shape. Future work planned includes implementing any forthcoming International Organization for Standardization standards for objective evaluations.
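For reference, the Sprague and Geers magnitude and phase errors named above can be computed directly from two equally sampled time histories. The Python sketch below follows the standard published formulas; the test signals are synthetic:

    import numpy as np

    def sprague_geers(benchmark, simulated):
        """Sprague & Geers magnitude (M), phase (P), and combined (C) errors
        for two equally sampled time histories."""
        m, s = np.asarray(benchmark, float), np.asarray(simulated, float)
        psi_mm = np.mean(m * m)
        psi_ss = np.mean(s * s)
        psi_ms = np.mean(m * s)
        M = np.sqrt(psi_ss / psi_mm) - 1.0
        P = np.arccos(psi_ms / np.sqrt(psi_mm * psi_ss)) / np.pi
        return M, P, np.hypot(M, P)

    t = np.linspace(0, 0.1, 500)
    exp = np.sin(2 * np.pi * 50 * t)                  # benchmark signal
    sim = 1.1 * np.sin(2 * np.pi * 50 * t + 0.2)      # larger and phase-shifted
    print([round(v, 3) for v in sprague_geers(exp, sim)])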
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging systems have been applied widely in both military and civilian fields. Since infrared imagers come in various types with different parameters, there is, for system manufacturers and customers alike, great demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imagers were developed, the standard method to assess performance has been the MRTD or related improved methods, which are not perfectly suited to current linear-scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangle orientation discrimination (TOD) metric, which is considered an effective and emerging method for evaluating the overall performance of EO systems. To realize the evaluation in field tests, an experimental instrument was developed. Considering the importance of the operational environment, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experiment setup, the paper presents the experimental results. The target range performance is analyzed and discussed. In the data analysis part, the article gives the range prediction values obtained from the TOD method, the MRTD method, and practical experiments, with analysis and discussion of the results. The experimental results prove the effectiveness of this evaluation tool, and it can be used as a platform to give a uniform performance prediction reference.
Meyer, Michael T.; Loftin, Keith A.; Lee, Edward A.; Hinshaw, Gary H.; Dietze, Julie E.; Scribner, Elisabeth A.
2009-01-01
The U.S. Geological Survey method (O-2141-09) presented is approved for the determination of glyphosate, its degradation product aminomethylphosphonic acid (AMPA), and glufosinate in water. It was validated to demonstrate the method detection levels (MDL), compare isotope dilution to standard addition, and evaluate method and compound stability. The original USGS analytical method, O-2136-01, was developed using liquid chromatography/mass spectrometry and quantitation by standard addition. Lower method detection levels and increased specificity were achieved in the modified method, O-2141-09, by using liquid chromatography/tandem mass spectrometry (LC/MS/MS). The use of isotope dilution for glyphosate and AMPA and pseudo isotope dilution for glufosinate in place of standard addition was evaluated. Stable-isotope-labeled AMPA and glyphosate were used as the isotope dilution standards. In addition, the stability of glyphosate and AMPA was studied in raw filtered and derivatized water samples. The stable-isotope-labeled glyphosate and AMPA standards were added to each water sample, and the samples were then derivatized with 9-fluorenylmethylchloroformate. After derivatization, samples were concentrated using automated online solid-phase extraction (SPE) followed by elution in-line with the LC mobile phase; the compounds were separated and then analyzed by LC/MS/MS using electrospray ionization in negative-ion mode with multiple-reaction monitoring. The deprotonated derivatized parent molecule and two daughter-ion transition pairs were identified and optimized for glyphosate, AMPA, glufosinate, and the glyphosate and AMPA stable-isotope-labeled internal standards. A quantitative comparison between standard addition and isotope dilution was conducted using 473 samples analyzed between April 2004 and June 2006. The mean percent difference and relative standard deviation between the two quantitation methods were 7.6 plus or minus 6.30 for glyphosate (n = 179), 9.6 plus or minus 8.35 for AMPA (n = 206), and 9.3 plus or minus 9.16 for glufosinate (n = 16). The analytical variation of the method, the comparison of quantitation by isotope dilution and multipoint linear regressed standard curves, and the method detection levels were evaluated by analyzing six sets of distilled-water, groundwater, and surface-water samples spiked in duplicate at 0.0, 0.05, 0.10, and 0.50 microgram per liter and analyzed on 6 different days during 1 month. The grand means of the normalized concentration percentage recovery for glyphosate, AMPA, and glufosinate among all three matrices and spiked concentrations ranged from 99 to 114 plus or minus 2 to 7 percent of the expected spiked concentration. The grand mean of the percentage difference between concentrations calculated by standard addition and linear regressed multipoint standard curves ranged from 8 to 15 plus or minus 2 to 9 percent for the three compounds. The method reporting levels calculated from all the 0.05-microgram-per-liter spiked samples were 0.02 microgram per liter for all three compounds. Compound stability experiments were conducted on 10 samples derivatized four times over periods of 136 to 269 days. The glyphosate and AMPA concentrations remained relatively constant in samples held up to 136 days before derivatization. The half-life of glyphosate varied from 169 to 223 days in the underivatized samples. Derivatized samples were analyzed the day after derivatization, and again 54 and 64 days after derivatization.
The derivatized samples analyzed at days 54 and 64 were within 20 percent of the concentrations of the derivatized samples analyzed the day after derivatization.
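A minimal sketch of the kind of paired-method summary reported above (mean plus or minus standard deviation of per-sample percent differences); the concentrations are invented, and the relative-percent-difference formula against the pair mean is an assumed convention:

    import numpy as np

    # Hypothetical paired concentrations (micrograms per liter) for one analyte
    isotope_dilution = np.array([0.12, 0.48, 0.95, 0.33, 2.10])
    standard_addition = np.array([0.13, 0.44, 1.02, 0.36, 1.95])

    # Per-sample percent difference relative to the mean of the two results
    pair_mean = (isotope_dilution + standard_addition) / 2.0
    pct_diff = 100.0 * np.abs(isotope_dilution - standard_addition) / pair_mean
    print(f"{pct_diff.mean():.1f} +/- {pct_diff.std(ddof=1):.2f} percent (n = {pct_diff.size})")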
New Method for Evaluation of Virucidal Activity of Antiseptics and Disinfectants
Papageorgiou, Georgios T.; Mocé-Llivina, Laura; Jofre, Juan
2001-01-01
Counting culturable viruses adsorbed to cellulose nitrate filters (the VIRADEN method) is proposed as a simple procedure for the evaluation of the virucidal activity of antiseptics and disinfectants. The virucidal activities of two different doses of iodine, chlorine, glutaraldehyde, and chlorhexidine digluconate on poliovirus 1 were tested with a standardized procedure and with the VIRADEN method. The two procedures assayed provided similar results. PMID:11722944
Evaluation of a direct blood culture disk diffusion antimicrobial susceptibility test.
Doern, G V; Scott, D R; Rashad, A L; Kim, K S
1981-01-01
A total of 556 unique blood culture isolates of nonfastidious aerobic and facultatively anaerobic bacteria were examined by direct and standardized disk susceptibility test methods (4,234 antibiotic-organism comparisons). When discrepancies that could be accounted for by the variability inherent in disk diffusion susceptibility tests were excluded, the direct method demonstrated 96.8% overall agreement with the standardized method. A total of 1.6% minor, 1.5% major, and 0.1% very major discrepancies were noted. PMID:7325634
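A minimal sketch of tallying agreement and discrepancy categories between a direct and a standardized susceptibility method; the S/I/R pairs are invented, and the minor/major/very major definitions follow common antimicrobial susceptibility testing conventions rather than the paper's exact criteria:

    # Hypothetical paired S/I/R interpretations: (direct, standardized).
    # Assumed conventions: minor = any disagreement involving I;
    # major = direct R / standard S; very major = direct S / standard R.
    pairs = [("S", "S")] * 93 + [("R", "R")] * 3 + [("S", "I")] * 2 + \
            [("R", "S")] * 1 + [("S", "R")] * 1

    def tally(pairs):
        counts = {"agree": 0, "minor": 0, "major": 0, "very_major": 0}
        for direct, standard in pairs:
            if direct == standard:
                counts["agree"] += 1
            elif "I" in (direct, standard):
                counts["minor"] += 1
            elif (direct, standard) == ("R", "S"):
                counts["major"] += 1
            else:  # ("S", "R")
                counts["very_major"] += 1
        n = len(pairs)
        return {k: round(100.0 * v / n, 1) for k, v in counts.items()}

    print(tally(pairs))   # percentages of agreement and each discrepancy class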
Preoperative breast marking in reduction mammaplasty.
Gasperoni, C; Salgarello, M
1987-10-01
A simple method of preoperative marking for reduction mammaplasty is described. This method may be used in macromastia when the chosen technique implies a postoperative scar in the shape of an inverted T. The marking sequence follows standard steps, but the drawing is always different because it follows the shape of the breast. This marking method reduces the chance of mistakes due to overly subjective judgment or to the use of standard drawing patterns that may not be suitable for all breast shapes.
Labrador, Mirian; Rota, María C; Pérez, Consuelo; Herrera, Antonio; Bayarri, Susana
2018-05-01
The food industry is in need of rapid, reliable methodologies for the detection of Listeria monocytogenes in ready-to-eat products, as an alternative to the International Organization for Standardization (ISO) 11290-1 reference method. The aim of this study was to evaluate impedanciometry combined with chromogenic agar culture for the detection of L. monocytogenes in dry-cured ham. The experimental setup consisted of assaying four strains of L. monocytogenes and two strains of Listeria innocua in pure culture. The method was evaluated according to the ISO 16140:2003 standard through a comparative study with the ISO reference method on 119 samples of dry-cured ham. Significant determination coefficients (R2 of up to 0.99) were obtained for all strains assayed in pure culture. The comparative study results showed 100% accuracy, 100% specificity, and 100% sensitivity. Impedanciometry followed by chromogenic agar culture was capable of detecting 1 CFU/25 g of food. L. monocytogenes was not detected in the 65 commercial samples tested. The method evaluated herein represents a promising alternative for the food industry in its efforts to control L. monocytogenes. Overall analysis time is shorter, and the method permits straightforward analysis of a large number of samples with reliable results.
Aims: This study developed and systematically evaluated performance and limit of detection of an off-the-slide genotyping procedure for both Cryptosporidium oocysts and Giardia cysts. Methods and Results: Slide standards containing flow sorted (oo)cysts were used to e...
Li, H; Zhang, L
2017-03-20
In recent years, malnutrition in patients with liver cirrhosis has received increasing attention from clinical physicians, and patients' nutritional status is closely associated with prognosis. At present, there are many methods for evaluating nutritional status in patients with liver cirrhosis, but there are still no unified standards. This article reviews the evaluation indices and methods commonly used in clinical practice in China and other countries, in order to provide a basis for accurately evaluating nutritional status and guiding nutritional therapy in patients with liver cirrhosis.
[Evaluation of inflammatory cells (tumor infiltrating lymphocytes - TIL) in malignant melanoma].
Dundr, Pavel; Němejcová, Kristýna; Bártů, Michaela; Tichá, Ivana; Jakša, Radek
2018-01-01
The evaluation of inflammatory infiltrate (tumor infiltrating lymphocytes - TIL) should be a standard part of biopsy examination for malignant melanoma. Currently, the most commonly used assessment method according to Clark is not optimal and there have been attempts to find an alternative system. Here we present an overview of possible approaches involving five different evaluation methods based on hematoxylin-eosin staining, including the recent suggestion of unified TIL evaluation method for all solid tumors. The issue of methodology, prognostic and predictive significance of TIL determination as well as the importance of immunohistochemical subtyping of inflammatory infiltrate is discussed.
Evaluation on determination of iodine in coal by energy dispersive X-ray fluorescence
Wang, B.; Jackson, J.C.; Palmer, C.; Zheng, B.; Finkelman, R.B.
2005-01-01
A quick and inexpensive method for determining relatively high iodine concentrations in coal samples was evaluated. Energy dispersive X-ray fluorescence (EDXRF) provided a detection limit of about 14 ppm (three times the standard deviation of the blank sample) without any complex sample preparation. An analytical relative standard deviation of 16% was readily attainable for coal samples. Under optimum conditions, coal samples with iodine concentrations higher than 5 ppm can be determined using this EDXRF method. For the time being, because the iodine concentrations of most coal samples are below 5 ppm, except for some high-iodine coals, this method cannot be used effectively for routine iodine determination. More work is needed for this method to meet the requirements of iodine determination in coal samples.
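The 3-sigma blank criterion mentioned above translates into code directly; in this sketch the blank intensities and calibration slope are invented, and the conversion from counts to ppm via a calibration sensitivity is an assumption:

    import numpy as np

    # Hypothetical replicate EDXRF intensities of a blank (counts) and an
    # assumed calibration sensitivity (counts per ppm iodine) from standards.
    blank = np.array([102., 98., 105., 99., 101., 97., 104., 100., 103., 96.])
    slope = 0.65   # counts per ppm (assumed)

    lod_ppm = 3.0 * blank.std(ddof=1) / slope   # 3-sigma detection limit
    print(f"detection limit ~ {lod_ppm:.1f} ppm")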
Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki
2010-09-15
Neutron activation analysis (NAA) coupled with an internal standard method was applied for the determination of As in certified reference material (CRM) arsenobetaine (AB) standard solutions to verify their certified values. Gold was used as an internal standard to compensate for differences in neutron exposure within an irradiation capsule and to improve sample-to-sample repeatability. Application of the internal standard method also significantly improved the linearity of the calibration curve up to 1 microg of As. The analytical reliability of the proposed method was evaluated by k(0)-standardization NAA. The analytical results for As in the AB standard solutions BCR-626 and NMIJ CRM 7901-a were (499+/-55) mg kg(-1) (k=2) and (10.16+/-0.15) mg kg(-1) (k=2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, whereas that of NMIJ CRM 7901-a was almost negligible.
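A minimal sketch of internal-standard normalization as described above: analyte counts are divided by the Au internal-standard counts before calibration, compensating for sample-to-sample differences in neutron exposure. All count values are illustrative:

    import numpy as np

    # Hypothetical gamma-ray peak counts for As standards spiked with a fixed
    # amount of Au internal standard (values are invented).
    as_mass_ug = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
    as_counts = np.array([1980., 5120., 9890., 15200., 20100.])
    au_counts = np.array([51000., 52500., 49800., 51800., 50300.])

    # Normalizing to Au compensates for neutron-flux differences between samples
    ratio = as_counts / au_counts
    slope, intercept = np.polyfit(as_mass_ug, ratio, 1)

    # Quantify an unknown from its As/Au count ratio
    unknown_ratio = 19850. / 50900.
    print((unknown_ratio - intercept) / slope, "ug As (approx.)")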
Near-infrared fluorescence image quality test methods for standardized performance evaluation
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua
2017-03-01
Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion, and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Toward this goal, we developed target-based test methods and investigated best practices for key NIRF imaging system performance characteristics, including spatial resolution, depth of field, and sensitivity. Fluorescence properties were characterized by generating excitation-emission matrices of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.
Rishikesh, N.; Quélennec, G.
1983-01-01
Vector resistance and other constraints have necessitated consideration of the use of alternative materials and methods in an integrated approach to vector control. Bacillus thuringiensis serotype H-14 is a promising biological control agent which acts as a conventional larvicide through its delta-endotoxin (active ingredient) and which now has to be suitably formulated for application in vector breeding habitats. The active ingredient in the formulations has so far not been chemically characterized or quantified and therefore recourse has to be taken to a bioassay method. Drawing on past experience and through the assistance mainly of various collaborating centres, the World Health Organization has standardized a bioassay method (described in the Annex), which gives consistent and reproducible results. The method permits the determination of the potency of a B.t. H-14 preparation through comparison with a standard powder. The universal adoption of the standardized bioassay method will ensure comparability of the results of different investigators. PMID:6601545
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
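As a simplified illustration of propagation of error through a standard curve, the sketch below inverts a linear calibration and propagates the variances of the signal, slope, and intercept by the delta method; ELISA standard curves are usually nonlinear (e.g. four-parameter logistic), so the linear form and all numbers here are simplifying assumptions:

    import numpy as np

    # Delta-method sketch: predict concentration from a fitted linear standard
    # curve y = m*x + b and propagate uncertainty in y, m, and b (covariance
    # between m and b ignored here for brevity).
    def invert_with_error(y, sy, m, sm, b, sb):
        x = (y - b) / m
        dx_dy = 1.0 / m
        dx_db = -1.0 / m
        dx_dm = -(y - b) / m**2
        var_x = (dx_dy * sy)**2 + (dx_db * sb)**2 + (dx_dm * sm)**2
        return x, np.sqrt(var_x)

    # Illustrative fit: slope 0.8 signal units per (ng/mL), intercept 0.05
    conc, conc_sd = invert_with_error(y=0.62, sy=0.03, m=0.80, sm=0.02,
                                      b=0.05, sb=0.01)
    print(f"predicted concentration: {conc:.3f} +/- {conc_sd:.3f} ng/mL")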
Chekri, Rachida; Noël, Laurent; Vastel, Christelle; Millour, Sandrine; Kadar, Ali; Guérin, Thierry
2010-01-01
This paper describes a validation process in compliance with the NF EN ISO/IEC 17025 standard for the determination of the macrominerals calcium, magnesium, sodium, and potassium in foodstuffs by microsampling with flame atomic absorption spectrometry after closed-vessel microwave digestion. The French Standards Commission (Agence Française de Normalisation) standards NF V03-110, NF EN V03-115, and XP T-90-210 were used to evaluate this method. The method was validated in the context of an analysis of the 1322 food samples of the second French Total Diet Study (TDS). Several performance criteria (linearity, LOQ, specificity, trueness, precision under repeatability conditions, and intermediate precision/reproducibility) were evaluated. Furthermore, the method was monitored by several internal quality controls. The LOQ values obtained (25, 5, 8.3, and 8.3 mg/kg for Ca, Mg, Na, and K, respectively) were in compliance with the needs of the TDS. The method provided accurate results, as demonstrated by a repeatability CV (CVr) of < 7% and a reproducibility CV (CVR) of < 12% for all the elements. Therefore, the results indicated that this method could be used in the laboratory for the routine determination of these four elements in foodstuffs with acceptable analytical performance.
Trends in Teacher Evaluation: What Every Special Education Teacher Should Know
ERIC Educational Resources Information Center
Benedict, Amber E; Thomas, Rachel A.; Kimerling, Jenna; Leko, Christopher
2013-01-01
The article reflects on current methods of teacher evaluation within the context of recent accountability policy, specifically No Child Left Behind. An overview is given of the most common forms of teacher evaluation, including performance evaluations, checklists, peer review, portfolios, the CEC and InTASC standards, the Charlotte Danielson…
Evaluating an alternative method for rapid urinary creatinine determination
Creatinine (CR) is an endogenously produced chemical routinely assayed in urine specimens to assess kidney function and sample dilution. The industry-standard method for CR determination, known as the kinetic Jaffe (KJ) method, relies on an exponential rate of a colorimetric change,...
Ohno, Yoshiharu; Koyama, Hisanobu; Yoshikawa, Takeshi; Kishida, Yuji; Seki, Shinichiro; Takenaka, Daisuke; Yui, Masao; Miyazaki, Mitsue; Sugimura, Kazuro
2017-08-01
Purpose To compare the capability of pulmonary thin-section magnetic resonance (MR) imaging with ultrashort echo time (UTE) with that of standard- and reduced-dose thin-section computed tomography (CT) in nodule detection and evaluation of nodule type. Materials and Methods The institutional review board approved this study, and written informed consent was obtained from each patient. Standard- and reduced-dose chest CT (60 and 250 mA) and MR imaging with UTE were used to examine 52 patients; 29 were men (mean age, 66.4 years ± 7.3 [standard deviation]; age range, 48-79 years) and 23 were women (mean age, 64.8 years ± 10.1; age range, 42-83 years). Probability of nodule presence was assessed for all methods with a five-point visual scoring system. All nodules were then classified as missed, ground-glass, part-solid, or solid nodules. To compare nodule detection capability of the three methods, consensus for performances was rated by using jackknife free-response receiver operating characteristic analysis, and κ analysis was used to compare intermethod agreement for nodule type classification. Results There was no significant difference (F = 0.70, P = .59) in figure of merit between methods (standard-dose CT, 0.86; reduced-dose CT, 0.84; MR imaging with UTE, 0.86). There was no significant difference in sensitivity between methods (standard-dose CT vs reduced-dose CT, P = .50; standard-dose CT vs MR imaging with UTE, P = .50; reduced-dose CT vs MR imaging with UTE, P >.99). Intermethod agreement was excellent (standard-dose CT vs reduced-dose CT, κ = 0.98, P < .001; standard-dose CT vs MR imaging with UTE, κ = 0.98, P < .001; reduced-dose CT vs MR imaging with UTE, κ = 0.99, P < .001). Conclusion Pulmonary thin-section MR imaging with UTE was useful in nodule detection and evaluation of nodule type, and it is considered at least as efficacious as standard- or reduced-dose thin-section CT. © RSNA, 2017 Online supplemental material is available for this article.
Toward a standard in structural genome annotation for prokaryotes
Tripp, H. James; Sutton, Granger; White, Owen; ...
2015-07-25
In an effort to identify the best practice for finding genes in prokaryotic genomes and propose it as a standard for automated annotation pipelines, we collected 1,004,576 peptides from various publicly available resources, and these were used as a basis to evaluate various gene-calling methods. The peptides came from 45 bacterial replicons with an average GC content from 31% to 74%, biased toward higher-GC-content genomes. Automated, manual, and semi-manual methods were used to tally errors in three widely used gene-calling methods, as evidenced by peptides mapped outside the boundaries of called genes. We found that the consensus set of identical genes predicted by the three methods constitutes only about 70% of the genes predicted by each individual method (with start and stop required to coincide). Peptide data were useful for evaluating some of the differences between gene callers, but not reliable enough to make the results conclusive, due to limitations inherent in any proteogenomic study. A single, unambiguous, unanimous best practice did not emerge from this analysis, since the available proteomics data were not adequate to provide an objective measurement of differences in accuracy between these methods. However, as a result of this study, software, reference data, and procedures have been better matched among participants, representing a step toward a much-needed standard. In the absence of a sufficient amount of experimental data to achieve a universal standard, our recommendation is that any of these methods can be used by the community, as long as a single method is employed across all datasets to be compared.
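A minimal sketch of the consensus computation described above, treating each gene call as a (strand, start, stop) tuple and requiring exact coordinate agreement across all three callers; the calls are invented:

    # Hypothetical gene calls from three gene callers on one replicon, keyed by
    # (strand, start, stop); consensus requires all three coordinates to match.
    caller_a = {("+", 100, 1300), ("+", 1500, 2100), ("-", 2500, 3000), ("+", 3200, 4000)}
    caller_b = {("+", 100, 1300), ("+", 1550, 2100), ("-", 2500, 3000), ("+", 3200, 4000)}
    caller_c = {("+", 100, 1300), ("+", 1500, 2100), ("-", 2500, 3000), ("-", 4200, 4700)}

    consensus = caller_a & caller_b & caller_c   # set intersection
    for calls in (caller_a, caller_b, caller_c):
        share = 100.0 * len(consensus) / len(calls)
        print(f"consensus covers {share:.0f}% of this caller's genes")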
Gao, Xiaoli; Zhang, Qibin; Meng, Da; Issac, Giorgis; Zhao, Rui; Fillmore, Thomas L.; Chu, Rosey K.; Zhou, Jianying; Tang, Keqi; Hu, Zeping; Moore, Ronald J.; Smith, Richard D.; Katze, Michael G.; Metz, Thomas O.
2012-01-01
Lipidomics is a critical part of metabolomics and aims to study all the lipids within a living system. We present here the development and evaluation of a sensitive capillary UPLC-MS method for comprehensive top-down/bottom-up lipid profiling. Three different stationary phases were evaluated in terms of peak capacity, linearity, reproducibility, and limit of quantification (LOQ) using a mixture of lipid standards representative of the lipidome. The relative standard deviations of the retention times and peak abundances of the lipid standards were 0.29% and 7.7%, respectively, when using the optimized method. The linearity was acceptable at >0.99 over 3 orders of magnitude, and the LOQs were sub-fmol. To demonstrate the performance of the method in the analysis of complex samples, we analyzed lipids extracted from a human cell line, rat plasma, and a model human skin tissue, identifying 446, 444, and 370 unique lipids, respectively. Overall, the method provided either higher coverage of the lipidome, greater measurement sensitivity, or both, when compared to other approaches of global, untargeted lipid profiling based on chromatography coupled with MS. PMID:22354571
Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna
2018-06-05
Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires characterization of the matrix effect, i.e. the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of matrix effect. The CVs (%) of internal-standard-normalized matrix factors recommended by the European Medicines Agency were evaluated against internal-standard-normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV (%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations.
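A minimal sketch of the EMA-style calculation discussed above: matrix factors from several matrix lots are normalized by the internal standard's matrix factor, and their CV (%) is reported. The peak areas are invented; the 15% acceptance threshold reflects the EMA bioanalytical guideline:

    import numpy as np

    # Hypothetical peak areas in post-extraction spiked samples from 6 matrix
    # lots, and in neat solution (means of replicate injections).
    analyte_matrix = np.array([9800., 10150., 9600., 10300., 9900., 10050.])
    analyte_neat = 10000.
    is_matrix = np.array([5030., 5120., 4950., 5200., 5010., 5080.])
    is_neat = 5000.

    mf_analyte = analyte_matrix / analyte_neat    # matrix factor, analyte
    mf_is = is_matrix / is_neat                   # matrix factor, internal standard
    is_norm_mf = mf_analyte / mf_is               # IS-normalized matrix factor

    cv = 100.0 * is_norm_mf.std(ddof=1) / is_norm_mf.mean()
    print(f"CV of IS-normalized MF: {cv:.1f}% (EMA guideline expects <= 15%)")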
ERIC Educational Resources Information Center
Haist, Steven A.; Lineberry, Michelle J.; Griffith, Charles H.; Hoellein, Andrew R.; Talente, Gregg M.; Wilson, John F.
2008-01-01
Background: Sexual history and HIV counseling (SHHIVC) are essential clinical skills. Our project's purpose was to evaluate a standardized patient educational intervention teaching third-year medical students SHHIVC. Methods: A four-hour standardized patient workshop was delivered to one-half of the class each of three consecutive years at one…
ERIC Educational Resources Information Center
Battistone, William A., Jr.
2017-01-01
Problem: There is an existing cycle of questionable grading practices at the K-12 level. As a result, districts continue to search for innovative methods of evaluating and reporting student progress. One result of this effort has been the adoption of a standards-based grading approach. Research concerning standards-based grading implementation has…
Treating Depression during Pregnancy and the Postpartum: A Preliminary Meta-Analysis
ERIC Educational Resources Information Center
Bledsoe, Sarah E.; Grote, Nancy K.
2006-01-01
Objectives: This meta-analysis evaluates treatment effects for nonpsychotic major depression during pregnancy and postpartum comparing interventions by type and timing. Methods: Studies for decreasing depressive severity during pregnancy and postpartum applying treatment trials and standardized measures were included. Standardized mean differences…
Establishment of metrological traceability in porosity measurements by x-ray computed tomography
NASA Astrophysics Data System (ADS)
Hermanek, Petr; Carmignato, Simone
2017-09-01
Internal porosity is a phenomenon inherent in many manufacturing processes, such as casting, additive manufacturing, and others. Since these defects cannot be completely avoided by improving production processes, it is important to have a reliable method to detect and evaluate them accurately. Accurate evaluation becomes even more important given current industrial trends to minimize the size and weight of products on one side, and to enhance their complexity and performance on the other. X-ray computed tomography (CT) has emerged as a promising instrument for holistic porosity measurements, offering several advantages over equivalent methods already established in the detection of internal defects. The main shortcomings of the conventional techniques are that they yield only general information about total porosity content (e.g. the Archimedes method) or are destructive (e.g. microscopy of cross-sections). By contrast, CT is a nondestructive technique providing complete information about the size, shape, and distribution of internal porosity. However, due to the lack of international standards and the fact that it is a relatively new measurement technique, CT as a measurement technology has not yet reached maturity. This study proposes a procedure for the establishment of measurement traceability in porosity measurements by CT, including the necessary evaluation of measurement uncertainty. The traceability transfer is carried out through a novel reference standard calibrated by optical and tactile coordinate measuring systems. The measurement uncertainty is calculated following international standards and guidelines. In addition, the accuracy of porosity measurements by CT with the associated measurement uncertainty is evaluated using the reference standard.
Endoscope field of view measurement.
Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen
2017-03-01
The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device-to-device variation of two models of colonoscopes by measuring several hundred devices. Our results showed that the FOV EP method was more accurate than the FOV WS method and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard.
Water Quality Evaluation of the Yellow River Basin Based on Gray Clustering Method
NASA Astrophysics Data System (ADS)
Fu, X. Q.; Zou, Z. H.
2018-03-01
We evaluated the water quality of 12 monitoring sections in the Yellow River Basin comprehensively by the gray clustering method, based on water quality monitoring data from the Ministry of Environmental Protection of China for May 2016 and the environmental quality standard for surface water. The results reflect the water quality of the Yellow River Basin objectively. Furthermore, the evaluation results are essentially the same as those obtained with the fuzzy comprehensive evaluation method. The results also show that the overall water quality of the Yellow River Basin is good, which is consistent with the actual situation of the basin. Overall, the gray clustering method is reasonable and feasible for water quality evaluation, and it is also convenient to calculate.
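The gray clustering calculation itself is compact: each indicator value is passed through per-grade whitening functions, and the weighted memberships are summed per grade. A minimal Python sketch with invented breakpoints, weights, and concentrations (not the surface-water standard's actual limits):

    import numpy as np

    def tri(x, a, b, c):
        """Triangular whitening function with support (a, c) and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Whitening-function breakpoints (a, b, c) per indicator for grades I-III; illustrative only.
    breaks = {
        "COD":  [(0.0, 2.0, 4.0), (2.0, 4.0, 6.0), (4.0, 6.0, 8.0)],
        "NH3N": [(0.0, 0.15, 0.5), (0.15, 0.5, 1.0), (0.5, 1.0, 1.5)],
    }
    weights = {"COD": 0.6, "NH3N": 0.4}   # indicator weights (assumed)
    sample  = {"COD": 3.5, "NH3N": 0.4}   # measured concentrations (invented)

    # Clustering coefficient per grade: weighted sum of whitened memberships.
    sigma = [sum(weights[j] * tri(sample[j], *breaks[j][k]) for j in sample) for k in range(3)]
    print("grade coefficients:", [round(s, 3) for s in sigma])
    print("assigned grade:", "I II III".split()[int(np.argmax(sigma))])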
Huang, Biao; Zhao, Yongcun
2014-01-01
Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km² area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
Opening the black box of ethics policy work: evaluating a covert practice.
Frolic, Andrea; Drolet, Katherine; Bryanton, Kim; Caron, Carole; Cupido, Cynthia; Flaherty, Barb; Fung, Sylvia; McCall, Lori
2012-01-01
Hospital ethics committees (HECs) and ethicists generally describe themselves as engaged in four domains of practice: case consultation, research, education, and policy work. Despite the increasing attention to quality indicators, practice standards, and evaluation methods for the other domains, comparatively little is known or published about the policy work of HECs or ethicists. This article attempts to open the "black box" of this health care ethics practice by providing two detailed case examples of ethics policy reviews. We also describe the development and application of an evaluation strategy to assess the quality of ethics policy review work, and to enable continuous improvement of ethics policy review processes. Given the potential for policy work to impact entire patient populations and organizational systems, it is imperative that HECs and ethicists develop clearer roles, responsibilities, procedural standards, and evaluation methods to ensure the delivery of consistent, relevant, and high-quality ethics policy reviews.
A protocol for lifetime energy and environmental impact assessment of building insulation materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Som S., E-mail: shresthass@ornl.gov; Biswas, Kaushik; Desjarlais, Andre O.
This article describes a proposed protocol that is intended to provide a comprehensive list of factors to be considered in evaluating the direct and indirect environmental impacts of building insulation materials, as well as detailed descriptions of standardized calculation methodologies to determine those impacts. The energy and environmental impacts of insulation materials can generally be divided into two categories: (1) direct impact due to the embodied energy of the insulation materials and other factors and (2) indirect or environmental impacts avoided as a result of reduced building energy use due to the addition of insulation. Standards and product category rules exist, which provide guidelines about the life cycle assessment (LCA) of materials, including building insulation products. However, critical reviews have suggested that these standards fail to provide complete guidance to LCA studies and suffer from ambiguities regarding the determination of the environmental impacts of building insulation and other products. The focus of the assessment protocol described here is to identify all factors that contribute to the total energy and environmental impacts of different building insulation products and, more importantly, to provide standardized determination methods that will allow comparison of different insulation material types. Further, the intent is not to replace current LCA standards but to provide a well-defined, easy-to-use comparison method for insulation materials using existing LCA guidelines. - Highlights: • We proposed a protocol to evaluate the environmental impacts of insulation materials. • The protocol considers all life cycle stages of an insulation material. • Both the direct environmental impacts and the indirect impacts are defined. • Standardized calculation methods for the 'avoided operational energy' are defined. • Standardized calculation methods for the 'avoided environmental impact' are defined.
Development of delineator testing standard.
DOT National Transportation Integrated Search
2015-02-01
The objective of this project was to develop a new test method for evaluating the impact performance : of delineators for given applications. The researchers focused on developing a test method that was : reproducible and attempted to reproduce failu...
Campbell, Brittany E; Miller, Dini M
2017-03-15
Standard toxicity evaluations of insecticides against insect pests are primarily conducted on adult insects. Evaluations are based on a dose-response or concentration-response curve, where mortality increases as the dose or concentration of an insecticide is increased. Standard lethal concentration (LC50) and lethal dose (LD50) tests that result in 50% mortality of a test population can be challenging for evaluating toxicity of insecticides against non-adult insect life stages, such as eggs and early instar or nymphal stages. However, this information is essential for understanding insecticide efficacy in all bed bug life stages, which affects control and treatment efforts. This protocol uses a standard dipping bioassay modified for bed bug eggs and a contact insecticidal assay for treating nymphal first instars. These assays produce a concentration-response curve to further quantify LC50 values for insecticide evaluations.
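The concentration-response curve described here is commonly summarized by fitting a sigmoid and reading off the concentration at 50% mortality. A hedged Python sketch using a two-parameter log-logistic model on invented data (one of several standard model choices, not necessarily the protocol's own analysis):

    import numpy as np
    from scipy.optimize import curve_fit

    conc      = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])   # insecticide concentration (assumed units)
    mortality = np.array([0.05, 0.12, 0.33, 0.62, 0.88, 0.97])  # observed proportion dead (invented)

    def loglogistic(c, lc50, slope):
        # Equals 0.5 exactly when c == lc50.
        return 1.0 / (1.0 + (lc50 / c) ** slope)

    (lc50, slope), _ = curve_fit(loglogistic, conc, mortality, p0=[0.2, 1.0])
    print(f"LC50 ~ {lc50:.3f}, slope ~ {slope:.2f}")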
Kolaczinski, Jan; Hanson, Kara
2006-05-08
Insecticide-treated nets (ITNs) are an effective and cost-effective means of malaria control. Scaling-up coverage of ITNs is challenging. It requires substantial resources and there are a number of strategies to choose from. Information on the cost of different strategies is still scarce. To guide the choice of a delivery strategy (or combination of strategies), reliable and standardized cost information for the different options is required. The electronic online database PubMed was used for a systematic search of the published English literature on costing and economic evaluations of ITN distribution programmes. The keywords used were: net, bednet, insecticide, treated, ITN, cost, effectiveness, economic and evaluation. Identified papers were analysed to determine and evaluate the costing methods used. Methods were judged against existing standards of cost analysis to arrive at proposed standards for undertaking and presenting cost analyses. Cost estimates were often not readily comparable or could not be adjusted to a different context. This resulted from the wide range of methods applied and measures of output chosen. Most common shortcomings were the omission of certain costs and failure to adjust financial costs to generate economic costs. Generalisability was hampered by authors not reporting quantities and prices of resources separately and not examining the sensitivity of their results to variations in underlying assumptions. The observed shortcomings have arisen despite the abundance of literature and guidelines on costing of health care interventions. This paper provides ITN specific recommendations in the hope that these will help to standardize future cost estimates.
Marijnissen, A C A; Vincken, K L; Vos, P A J M; Saris, D B F; Viergever, M A; Bijlsma, J W J; Bartels, L W; Lafeber, F P J G
2008-02-01
Radiography is still the gold standard for imaging features of osteoarthritis (OA), such as joint space narrowing, subchondral sclerosis, and osteophyte formation. Objective assessment, however, remains difficult. The goal of the present study was to evaluate a novel digital method to analyse standard knee radiographs. Standardized radiographs of 20 healthy and 55 OA knees were taken in general practice according to the semi-flexed method by Buckland-Wright. Joint Space Width (JSW), osteophyte area, subchondral bone density, joint angle, and tibial eminence height were measured as continuous variables using newly developed Knee Images Digital Analysis (KIDA) software on a standard PC. Two observers evaluated the radiographs twice, each on two different occasions. The observers were blinded to the source of the radiographs and to their previous measurements. Statistical analysis to compare measurements within and between observers was performed according to Bland and Altman. Correlations between KIDA data and Kellgren & Lawrence (K&L) grade were calculated, and data of healthy knees were compared to those of OA knees. Intra- and inter-observer variations for measurement of JSW, subchondral bone density, osteophytes, tibial eminence, and joint angle were small. Significant correlations were found between KIDA parameters and K&L grade. Furthermore, significant differences were found between healthy and OA knees. In addition to JSW measurement, objective evaluation of osteophyte formation and subchondral bone density is possible on standard radiographs. The measured differences between OA and healthy individuals suggest that KIDA allows detection of changes in time, although sensitivity to change has to be demonstrated in long-term follow-up studies.
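The Bland and Altman analysis used here reduces to a mean difference (bias) and 95% limits of agreement. A minimal Python sketch on invented repeated readings of a single KIDA parameter:

    import numpy as np

    # Two repeated readings of the same parameter (e.g., JSW in mm); values are illustrative.
    reading1 = np.array([4.1, 3.8, 5.0, 2.9, 4.4, 3.5, 4.8])
    reading2 = np.array([4.0, 3.9, 5.2, 2.8, 4.5, 3.3, 4.9])

    diff = reading1 - reading2
    bias = diff.mean()
    sd   = diff.std(ddof=1)
    loa  = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    print(f"bias = {bias:.3f}, limits of agreement = [{loa[0]:.3f}, {loa[1]:.3f}]")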
A human reliability based usability evaluation method for safety-critical software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, R. L.; Tran, T. Q.; Gertman, D. I.
2006-07-01
Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability.
Building an Evaluation Scale using Item Response Theory.
Lalor, John P; Wu, Hao; Yu, Hong
2016-11-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
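For concreteness, a sketch of the two-parameter logistic (2PL) IRT model underlying this approach: given calibrated item parameters, a system's ability theta is the maximizer of the response-pattern likelihood. The item parameters and responses below are invented:

    import numpy as np

    a = np.array([1.2, 0.8, 1.5, 0.6, 2.0])    # item discrimination (assumed values)
    b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # item difficulty (assumed values)
    responses = np.array([1, 1, 1, 0, 0])      # 1 = correct entailment judgment (invented)

    def p_correct(theta):
        # 2PL: probability of a correct response to each item at ability theta.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    thetas = np.linspace(-4, 4, 801)
    loglik = [np.sum(responses * np.log(p_correct(t)) +
                     (1 - responses) * np.log(1 - p_correct(t))) for t in thetas]
    print("estimated ability theta ~", round(thetas[int(np.argmax(loglik))], 2))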
Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) User's Guide
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
This report is a User's Guide for the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES). ProDiMES is a standard benchmarking problem and a set of evaluation metrics to enable the comparison of candidate aircraft engine gas path diagnostic methods. This Matlab-based software tool (The MathWorks, Inc.) enables users to independently develop and evaluate diagnostic methods. Additionally, a set of blind test case data is also distributed as part of the software. This will enable the side-by-side comparison of diagnostic approaches developed by multiple users. The User's Guide describes the various components of ProDiMES and provides instructions for the installation and operation of the tool.
[Quantitative method for simultaneous assay of four coumarins with one marker in Fraxini Cortex].
Feng, Weihong; Wang, Zhimin; Zhang, Qiwei; Liu, Limei; Wang, Jinyu; Yang, Fei
2011-07-01
To establish a new quantitative method for the simultaneous determination of multiple coumarins in Fraxini Cortex using one chemical reference substance, and to validate its feasibility. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with Fraxini Cortex. Four main coumarins were selected as analytes to evaluate the quality, and their relative correction factors (RCF) were determined by HPLC-DAD. Within the linear range, the values of the RCF at 340 nm of aesculin to aesculetin, fraxin, and fraxetin were 1.771, 0.799, and 1.409, respectively. The contents of aesculin in samples of Fraxini Cortex were determined by the external standard method, and the contents of the three other coumarins were calculated from their RCF. The contents of these four coumarins in all samples were also determined by the external standard method. Within a certain range, the RCF had good reproducibility (RSD 2.5%-3.9%). Significant differences were not observed between the quantitative results of the two methods. It is feasible and suitable to evaluate the quality of Fraxini Cortex and its Yinpian by QAMS.
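The QAMS arithmetic can be illustrated in a few lines, assuming a linear detector response for each coumarin. All areas and concentrations below are invented, though the aesculin-to-fraxin RCF echoes the 0.799 reported at 340 nm:

    # Calibration with reference substances (needed once to fix the RCFs).
    A_aesculin, C_aesculin = 1200.0, 10.0   # peak area, concentration in ug/mL (invented)
    A_fraxin,   C_fraxin   = 1500.0, 10.0

    # Relative correction factor: RCF = (A_s / C_s) / (A_i / C_i).
    rcf_fraxin = (A_aesculin / C_aesculin) / (A_fraxin / C_fraxin)
    print("RCF(aesculin/fraxin) =", round(rcf_fraxin, 3))

    # Routine analysis: only the aesculin (marker) reference is injected.
    k_aesculin = A_aesculin / C_aesculin      # response factor of the marker
    A_fraxin_sample = 640.0                   # measured fraxin area in a sample (invented)
    C_fraxin_sample = A_fraxin_sample * rcf_fraxin / k_aesculin
    print("fraxin content ~", round(C_fraxin_sample, 2), "ug/mL")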
Measuring acetabular component position on lateral radiographs - ischio-lateral method.
Pulos, Nicholas; Tiberi III, John V; Schmalzried, Thomas P
2011-01-01
The standard method for the evaluation of arthritis and postoperative assessment of arthroplasty treatment is observation and measurement from plain films, using the film edge for orientation. A more recent approach uses an anatomical landmark, the ischial tuberosity, for orientation and is called the ischio-lateral method. In this study, this method was evaluated in what is, to our knowledge, the first report in the literature on acetabular component measurement using a skeletal reference with lateral radiographs. Postoperative radiographs of 52 hips, with at least three true lateral radiographs taken at different time periods, were analyzed. Component position was measured with the historical method (using the film edge for orientation) and with the ischio-lateral method. The mean standard deviation (SD) for the historical approach was 3.7° and for the ischio-lateral method, 2.2° (p < 0.001). With the historical method, 19 (36.5%) hips had an SD greater than ± 4°, compared to six hips (11.5%) with the ischio-lateral method. By using a skeletal reference, the ischio-lateral method provides a more consistent measurement of acetabular component position. The high intra-class correlation coefficients for both intra- and inter-observer reliability indicate that the angle measured with this simple method, which requires no additional technology, time, or cost, is consistent and reproducible for multiple observers.
Evaluating the Risks of Clinical Research: Direct Comparative Analysis
Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David
2014-01-01
Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks. PMID:25210944
Development of duplex real-time PCR for the detection of WSSV and PstDV1 in cultivated shrimp.
Leal, Carlos A G; Carvalho, Alex F; Leite, Rômulo C; Figueiredo, Henrique C P
2014-07-05
The White spot syndrome virus (WSSV) and Penaeus stylirostris penstyldensovirus 1 (previously named Infectious hypodermal and hematopoietic necrosis virus-IHHNV) are two of the most important viral pathogens of penaeid shrimp. Different methods have been applied for diagnosis of these viruses, including Real-time PCR (qPCR) assays. A duplex qPCR method allows the simultaneous detection of two viruses in the same sample, which is more cost-effective than assaying for each virus separately. Currently, an assay for the simultaneous detection of the WSSV and the PstDV1 in shrimp is unavailable. The aim of this study was to develop and standardize a duplex qPCR assay for the simultaneous detection of the WSSV and the PstDV1 in clinical samples of diseased L. vannamei. In addition, to evaluate the performance of two qPCR master mixes with regard to the clinical sensitivity of the qPCR assay, as well as, different methods for qPCR results evaluation. The duplex qPCR assay for detecting WSSV and PstDV1 in clinical samples was successfully standardized. No difference in the amplification of the standard curves was observed between the duplex and singleplex assays. Specificities and sensitivities similar to those of the singleplex assays were obtained using the optimized duplex qPCR. The analytical sensitivities of duplex qPCR were two copies of WSSV control plasmid and 20 copies of PstDV1 control plasmid. The standardized duplex qPCR confirmed the presence of viral DNA in 28 from 43 samples tested. There was no difference for WSSV detection using the two kits and the distinct methods for qPCR results evaluation. High clinical sensitivity for PstDV1 was obtained with TaqMan Universal Master Mix associated with relative threshold evaluation. Three cases of simultaneous infection by the WSSV and the PstDV1 were identified with duplex qPCR. The standardized duplex qPCR was shown to be a robust, highly sensitive, and feasible diagnostic tool for the simultaneous detection of the WSSV and the PstDV1 in whiteleg shrimp. The use of the TaqMan Universal Master Mix and the relative threshold method of data analysis in our duplex qPCR method provided optimal levels of sensitivity and specificity.
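Quantification against the control plasmids follows the usual standard-curve arithmetic: fit Cq against log10(copies), derive amplification efficiency from the slope, and invert the fit for unknowns. A Python sketch with invented Cq values (not the assay's actual calibration):

    import numpy as np

    copies = np.array([2e1, 2e2, 2e3, 2e4, 2e5])      # control plasmid dilution series
    cq     = np.array([33.1, 29.8, 26.4, 23.0, 19.7]) # measured quantification cycles (invented)

    slope, intercept = np.polyfit(np.log10(copies), cq, 1)
    efficiency = 10 ** (-1 / slope) - 1               # ~1.0 means perfect doubling per cycle
    print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

    cq_unknown = 27.5
    est_copies = 10 ** ((cq_unknown - intercept) / slope)
    print(f"estimated copies ~ {est_copies:.0f}")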
PM: RESEARCH METHODS FOR PM TOXIC COMPOUNDS - PARTICLE METHODS EVALUATION AND DEVELOPMENT
The Federal Reference Method (FRM) for Particulate Matter (PM) developed by EPA's National Exposure Research Laboratory (NERL) forms the backbone of the EPA's national monitoring strategy. It is the measurement that defines attainment of the National Ambient Air Quality Standard...
Spray Drift Reduction Evaluations of Spray Nozzles Using a Standardized Testing Protocol
2010-07-01
Citation fragments from the report reference the ASTM standard test method for determining liquid drop size characteristics in a spray using optical nonimaging light-scattering instruments (Annual Book of ASTM Standards, Vol. 14.02) and the AGDISP spray drift model.
Evaluation of pulse-oximetry oxygen saturation taken through skin protective covering
James, Jyotsna; Tiwari, Lokesh; Upadhyay, Pramod; Sreenivas, Vishnubhatla; Bhambhani, Vikas; Puliyel, Jacob M
2006-01-01
Background The hard edges of adult finger clip probes of the pulse oximetry oxygen saturation (POOS) monitor can cause skin damage if used for prolonged periods in a neonate. Covering the skin under the probe with Micropore surgical tape or a gauze piece might prevent such injury. The study was done to see if the protective covering would affect the accuracy of the readings. Methods POOS was studied in 50 full-term neonates in the first week of life. After obtaining consent from their parents the neonates had POOS readings taken directly (standard technique) and through the protective covering. Bland-Altman plots were used to compare the new method with the standard technique. A test of repeatability for each method was also performed. Results The Bland-Altman plots suggest that there is no significant loss of accuracy when readings are taken through the protective covering. The mean difference was 0.06 (SD of 1.39) and 0.04 (SD 1.3) with Micropore and gauze respectively compared to the standard method. The mean difference was 0.22 (SD 0.23) on testing repeatability with the standard method. Conclusion Interposing Micropore or gauze does not significantly affect the accuracy of the POOS reading. The difference between the standard method and the new method was less than the difference seen on testing repeatability of the standard method. PMID:16677394
Estimating Durability of Reinforced Concrete
NASA Astrophysics Data System (ADS)
Varlamov, A. A.; Shapovalov, E. L.; Gavrilov, V. B.
2017-11-01
In this article we propose using the methods of fracture mechanics to evaluate concrete durability. To implement these methods, we developed special techniques for evaluating the crack resistance characteristics of concrete directly in the structure. Various experimental studies were carried out to determine the crack resistance characteristics and the modulus of elasticity of concrete during its service life. The results obtained with the proposed methods were compared with those obtained with the standard methods for determining the crack resistance characteristics of concrete.
Metal pipe coupling study : final report.
DOT National Transportation Integrated Search
1975-11-01
The specific aims of the study were: (1) to establish a standard design for the watertight coupling systems for the various kinds of metal culvert pipe and to evaluate the test method for determining watertight systems, (2) to evaluate seam connectio...
Approximating genomic reliabilities for national genomic evaluation
USDA-ARS?s Scientific Manuscript database
With the introduction of standard methods for approximating effective daughter/data contribution by Interbull in 2001, conventional EDC or reliabilities contributed by daughter phenotypes are directly comparable across countries and used in routine conventional evaluations. In order to make publishe...
Villeval, M; Carayol, M; Lamy, S; Lepage, B; Lang, T
2016-12-01
In the field of health, evidence-based medicine and associated methods like randomised controlled trials (RCTs) have become widely used. The RCT has become the gold standard for evaluating causal links between interventions and health outcomes. Originating in pharmacology, this method has been progressively extended to medical devices, non-pharmacological individual interventions, and collective public health interventions. Its use in these domains has revealed several limitations and has called into question its status as an undisputed gold standard. Some of these limitations (e.g. confounding biases and external validity) are common to these four domains, while others are more specific. This paper describes these limitations, as well as several research avenues. Some are methodological reflections aimed at adapting the RCT to the complexity of the tested interventions and at overcoming some of its limitations. Others are alternative methods. The objective is not to remove the RCT from the range of evaluation methodologies, but to resituate it within this range. The aim is to encourage choosing among different methods according to the features and the level of the intervention to be evaluated, thereby calling for methodological pluralism. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
[Industry regulation and its relationship to the rapid marketing of medical devices].
Matsuoka, Atsuko
2012-01-01
In the market of medical devices, non-Japanese products hold a large part even in Japan. To overcome this situation, the Japanese government has been announcing policies to encourage the medical devices industry, such as the 5-year strategy for medical innovation (June 6, 2012). The Division of Medical Devices has been contributing to rapid marketing of medical devices by working out the standards for approval review and accreditation of medical devices, guidances on evaluation of medical devices with emerging technology, and test methods for biological safety evaluation of medical devices, as a part of practice in the field of regulatory science. The recent outcomes are 822 standards of accreditation for Class II medical devices, 14 guidances on safety evaluation of medical devices with emerging technology, and the revised test methods for biological safety evaluation (MHLW Notification by Director, OMDE, Yakushokuki-hatsu 0301 No. 20 "Basic Principles of Biological Safety Evaluation Required for Application for Approval to Market Medical Devices").
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. To date, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
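As an illustration of the cyclic-Loess idea applied here: for each pair of runs, a loess curve of the log-ratio (M) against the mean log-intensity (A) is fitted and the correction is split between the two runs, cycling over all pairs. A Python sketch on simulated data (parameters such as the loess span and cycle count are assumptions, not taken from the paper):

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(0)
    x = rng.normal(15, 2, size=(500, 3))   # log2 feature intensities, 3 runs
    x[:, 1] = 1.02 * x[:, 1] - 0.1         # inject a mild intensity-dependent bias into run 2

    def cyclic_loess(x, n_cycles=3, frac=0.4):
        x = x.copy()
        n = x.shape[1]
        for _ in range(n_cycles):
            for i in range(n):
                for j in range(i + 1, n):
                    m = x[:, i] - x[:, j]             # log-ratio (M)
                    a = 0.5 * (x[:, i] + x[:, j])     # mean log-intensity (A)
                    fit = lowess(m, a, frac=frac, return_sorted=False)
                    x[:, i] -= fit / 2                # split the correction between the runs
                    x[:, j] += fit / 2
        return x

    x_norm = cyclic_loess(x)
    print("per-run medians before:", np.median(x, axis=0).round(3))
    print("per-run medians after: ", np.median(x_norm, axis=0).round(3))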
Evaluation of Elm and Speck Sensors
Particulate matter (PM) is a pollutant of high public interest regulated by national ambient air quality standards (NAAQS) using Federal Reference Method (FRM) and Federal Equivalent Method (FEM) instrumentation identified for environmental monitoring. The US EPA has been evaluat...
Place, Benjamin J
2017-05-01
To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula based on milk and whey protein concentrates with isolated soy protein called SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods that were based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than the values measured by methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.
Building a gold standard to construct search filters: a case study with biomarkers for oral cancer.
Frazier, John J; Stein, Corey D; Tseytlin, Eugene; Bekhuis, Tanja
2015-01-01
To support clinical researchers, librarians and informationists may need search filters for particular tasks. Development of filters typically depends on a "gold standard" dataset. This paper describes generalizable methods for creating a gold standard to support future filter development and evaluation using oral squamous cell carcinoma (OSCC) as a case study. OSCC is the most common malignancy affecting the oral cavity. Investigation of biomarkers with potential prognostic utility is an active area of research in OSCC. The methods discussed here should be useful for designing quality search filters in similar domains. The authors searched MEDLINE for prognostic studies of OSCC, developed annotation guidelines for screeners, ran three calibration trials before annotating the remaining body of citations, and measured inter-annotator agreement (IAA). We retrieved 1,818 citations. After calibration, we screened the remaining citations (n = 1,767; 97.2%); IAA was substantial (kappa = 0.76). The dataset has 497 (27.3%) citations representing OSCC studies of potential prognostic biomarkers. The gold standard dataset is likely to be high quality and useful for future development and evaluation of filters for OSCC studies of potential prognostic biomarkers. The methodology we used is generalizable to other domains requiring a reference standard to evaluate the performance of search filters. A gold standard is essential because the labels regarding relevance enable computation of diagnostic metrics, such as sensitivity and specificity. Librarians and informationists with data analysis skills could contribute to developing gold standard datasets and subsequent filters tuned for their patrons' domains of interest.
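The inter-annotator agreement reported here is Cohen's kappa, which corrects observed agreement for chance agreement. A minimal Python sketch on invented include/exclude screening labels:

    from collections import Counter

    rater1 = ["inc", "exc", "exc", "inc", "exc", "inc", "exc", "exc", "inc", "exc"]
    rater2 = ["inc", "exc", "inc", "inc", "exc", "inc", "exc", "exc", "exc", "exc"]

    def cohens_kappa(r1, r2):
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum((c1[l] / n) * (c2[l] / n) for l in set(r1) | set(r2))  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")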
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics, as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
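Once calls are compared against a trusted truth set, the core performance metrics reduce to set arithmetic over (chrom, pos, ref, alt) keys. A minimal Python sketch with invented variant sets (real comparisons must also handle representation differences such as variant normalization):

    truth  = {("chr", 101, "A", "G"), ("chr", 250, "C", "T"),
              ("chr", 410, "G", "A"), ("chr", 777, "T", "C")}
    called = {("chr", 101, "A", "G"), ("chr", 250, "C", "T"), ("chr", 600, "G", "T")}

    tp = len(called & truth)   # true positives: called and in the truth set
    fp = len(called - truth)   # false positives: called but absent from the truth set
    fn = len(truth - called)   # false negatives: in the truth set but missed
    sensitivity = tp / (tp + fn)
    precision   = tp / (tp + fp)
    print(f"TP={tp} FP={fp} FN={fn}  sensitivity={sensitivity:.2f} precision={precision:.2f}")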
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-06-01
The bibliography contains citations concerning standards and standard tests for water quality in drinking water sources, reservoirs, and distribution systems. Standards from domestic and international sources are presented. Glossaries and vocabularies that concern water quality analysis, testing, and evaluation are included. Standard test methods for individual elements, selected chemicals, sensory properties, radioactivity, and other chemical and physical properties are described. Discussions for proposed standards on new pollutant materials are briefly considered. (Contains a minimum of 203 citations and includes a subject term index and title list.)
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
Grant, Aileen; Dreischulte, Tobias; Treweek, Shaun; Guthrie, Bruce
2012-08-28
Trials of complex interventions are criticized for being 'black box', so the UK Medical Research Council recommends carrying out a process evaluation to explain the trial findings. We believe it is good practice to pre-specify and publish process evaluation protocols to set standards and minimize bias. Unlike protocols for trials, little guidance or standards exist for the reporting of process evaluations. This paper presents the mixed-method process evaluation protocol of a cluster randomized trial, drawing on a framework designed by the authors. This mixed-method evaluation is based on four research questions and maps data collection to a logic model of how the data-driven quality improvement in primary care (DQIP) intervention is expected to work. Data collection will be predominantly by qualitative case studies in eight to ten of the trial practices, focus groups with patients affected by the intervention, and quantitative analysis of routine practice data, trial outcome and questionnaire data, and data from the DQIP intervention. We believe that pre-specifying the intentions of a process evaluation can help to minimize bias arising from potentially misleading post-hoc analysis. We recognize it is also important to retain flexibility to examine the unexpected and the unintended. From that perspective, a mixed-methods evaluation allows the combination of exploratory and flexible qualitative work, and more pre-specified quantitative analysis, with each method contributing to the design, implementation and interpretation of the other. As well as strengthening the study, the authors hope to stimulate discussion among their academic colleagues about publishing protocols for evaluations of randomized trials of complex interventions. Trial registration: Data-Driven Quality Improvement in Primary Care (DQIP), ClinicalTrials.gov NCT01425502.
Aguiar, Lorena Andrade de; Melo, Lauro; de Lacerda de Oliveira, Lívia
2018-04-03
A major drawback of conventional descriptive profiling (CDP) in sensory evaluation is the long time spent on panel training. The use of rapid descriptive methods (RDM) has increased significantly, and some of them have been compared with CDP for validation. In the health sciences, systematic reviews (SR) are performed to evaluate the validation of diagnostic tests against a gold standard method. SR follow a well-defined protocol to summarize research evidence and to evaluate the quality of the studies against determined criteria. We adapted the SR protocol to evaluate the validation of RDM against CDP as satisfactory procedures for obtaining food characterizations. We used the "Population, Intervention, Comparison, Outcome, Study (PICOS)" framework to design the research, in which "Population" was food/beverages; "Intervention" was RDM; "Comparison" was CDP as the gold standard; "Outcome" was the ability of RDM to generate descriptive profiles similar to those of CDP; and "Studies" were sensory descriptive analyses. The proportion of studies concluding similarity of the RDM with CDP ranged from 0% to 100%. Low and moderate risk of bias was reached by 87% and 13% of the studies, respectively, supporting the conclusions of the SR. RDM with semi-trained assessors and evaluation of individual attributes presented higher percentages of concordance with CDP.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, the Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
Evaluation of reference crop evapotranspiration methods in arid, semi-arid and humid regions
USDA-ARS?s Scientific Manuscript database
It is necessary to find a simpler method in different climatic regions to calculate reference crop evapotranspiration (ETo) since the application of the FAO-56 Penman-Monteith method is often restricted due to unavailability of a full weather data set. Seven ETo methods, the de facto standard FAO-56...
Development of a Methodology for Assessing Aircrew Workloads.
1981-11-01
[Contents fragments: Workload Feasibility Study; Subjects; Equipment; Data Analysis.] Keywords: analysis; simulation; standard time systems; switching synthetic time systems; task activities; task interference; time study; tracking; workload; work sampling. ...standard data systems, information content analysis, work sampling and job evaluation. Conventional methods were found to be deficient in accounting...
77 FR 20217 - Secondary National Ambient Air Quality Standards for Oxides of Nitrogen and Sulfur
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-03
... Planning and Standards (OAQPS), U.S. Environmental Protection Agency, Mail Code C504-06, Research Triangle... [Contents fragments:] ...of Research; 3. Implementation Challenges; 4. Monitoring Plan Development and Stakeholder Participation; B. Summary of Proposed Evaluation of Monitoring Methods; C. Comments on Field Pilot Program and...
Discovering Morocco: Using the Five Fundamental Themes of Geography in Order to Discover Morocco
ERIC Educational Resources Information Center
Fitzhugh, William P.
2005-01-01
This curriculum unit, intended to be used with elementary school students, provides information about a North African, Moslem, Arab country: Morocco. The unit presents objectives, strategies, materials needed, background notes, evaluation methods, and assignments that fulfill National Social Studies Standards and National Geography Standards. It…
Modified Delphi Investigation of Lesson Planning Concepts for Physical Education Teacher Education
ERIC Educational Resources Information Center
Sager, Jack W.
2012-01-01
Improving the methods of instructing future educators, through program evaluation and improvement, should be a goal of all teacher education programs. In physical education, the National Association for Sport & Physical Education created standards for initial preparation of physical education teachers. The six standards for preparation include…
Evaluation of a new automated instrument for pretransfusion testing.
Morelati, F; Revelli, N; Maffei, L M; Poretti, M; Santoro, C; Parravicini, A; Rebulla, P; Cole, R; Sirchia, G
1998-10-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated a fully automated device based on column agglutination technology (AutoVue System, Ortho, Raritan, NJ). Some 6747 tests including forward and reverse ABO group, Rh type and phenotype, antibody screen, autocontrol, and crossmatch were performed on random samples from 1069 blood donors, 2063 patients, and 98 newborns and cord blood. Also tested were samples from 168 immunized patients and 53 donors expressing weak or variant A and D antigens. Test results and technician times required for their performance were compared with those obtained by standard methods (manual column agglutination technology, slide, semiautomatic handler). No erroneous conclusions were found in regard to the 5028 ABO group and Rh type or phenotype determinations carried out with the device. The device rejected 1.53 percent of tests for sample inadequacy. Of the remaining 18 tests with discrepant results found with the device and not confirmed with the standard methods, 6 gave such results because of mixed-field reactions, 10 gave negative results with A2 RBCs in reverse ABO grouping, and 2 gave very weak positive reactions in antibody screening and crossmatching. In the samples from immunized patients, the device missed one weak anti-K, whereas standard methods missed five weak antibodies. In addition, 48, 34, and 31 of the 53 weak or variant antigens were detected by the device, the slide method, and the semiautomated handler, respectively. Technician time with the standard methods was 1.6 to 7 times higher than that with the device. The technical performance of the device compared favorably with that of standard methods, with a number of advantages, including in particular the saving of technician time. Sample inadequacy was the most common cause of discrepancy, which suggests that standardization of sample collection can further improve the performance of the device.
Unmanned aircraft system sense and avoid integrity and continuity
NASA Astrophysics Data System (ADS)
Jamoom, Michael B.
This thesis describes new methods to guarantee the safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation uses a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement with the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
EVALUATION OF CRYPTOSPORIDIUM OOCYSTS AND GIARDIA CYSTS IN A WATERSHED RESERVOIR
This investigation evaluated the occurrence of Cryptosporidium oocysts and Giardia cysts at 17 sampling locations in Lake Texoma reservoir using Method 1623 with standard Envirocheck™ capsule filters. The watershed serves rural agricultural communities active in cattle ranching,...
Flores-Montero, J; Sanoja-Flores, L; Paiva, B; Puig, N; García-Sánchez, O; Böttcher, S; van der Velden, V H J; Pérez-Morán, J-J; Vidriales, M-B; García-Sanz, R; Jimenez, C; González, M; Martínez-López, J; Corral-Mateos, A; Grigore, G-E; Fluxá, R; Pontes, R; Caetano, J; Sedek, L; Del Cañizo, M-C; Bladé, J; Lahuerta, J-J; Aguilar, C; Bárez, A; García-Mateo, A; Labrador, J; Leoz, P; Aguilera-Sanz, C; San-Miguel, J; Mateos, M-V; Durie, B; van Dongen, J J M; Orfao, A
2017-10-01
Flow cytometry has become a highly valuable method to monitor minimal residual disease (MRD) and evaluate the depth of complete response (CR) in bone marrow (BM) of multiple myeloma (MM) after therapy. However, current flow-MRD has lower sensitivity than molecular methods and lacks standardization. Here we report on a novel next generation flow (NGF) approach for highly sensitive and standardized MRD detection in MM. An optimized 2-tube 8-color antibody panel was constructed in five cycles of design-evaluation-redesign. In addition, a bulk-lysis procedure was established for acquisition of ≥10⁷ cells/sample, and novel software tools were constructed for automatic plasma cell gating. Multicenter evaluation of 110 follow-up BM samples from MM patients in very good partial response (VGPR) or CR showed a higher sensitivity for NGF-MRD vs conventional 8-color flow-MRD (MRD-positive rate of 47% vs 34%; P=0.003). Thus, 25% of patients classified as MRD-negative by conventional 8-color flow were MRD-positive by NGF, translating into a significantly longer progression-free survival for MRD-negative vs MRD-positive CR patients by NGF (75% progression-free survival not reached vs 7 months; P=0.02). This study establishes EuroFlow-based NGF as a highly sensitive, fully standardized approach for MRD detection in MM which overcomes the major limitations of conventional flow-MRD methods and is ready for implementation in routine diagnostics.
Evaluation of Field-deployed Low Cost PM Sensors
Background Particulate matter (PM) is a pollutant of high public interest regulated by national ambient air quality standards (NAAQS) using federal reference method (FRM) and federal equivalent method (FEM) instrumentation identified for environmental monitoring. PM is present i...
NASA Technical Reports Server (NTRS)
Whalen, Robert T.; Napel, Sandy; Yan, Chye H.
1996-01-01
Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'
Evaluation of pulse-oximetry oxygen saturation taken through skin protective covering.
James, Jyotsna; Tiwari, Lokesh; Upadhyay, Pramod; Sreenivas, Vishnubhatla; Bhambhani, Vikas; Puliyel, Jacob M
2006-05-06
The hard edges of adult finger clip probes of the pulse oximetry oxygen saturation (POOS) monitor can cause skin damage if used for prolonged periods in a neonate. Covering the skin under the probe with Micropore surgical tape or a gauze piece might prevent such injury. The study was done to see if the protective covering would affect the accuracy of the readings. POOS was studied in 50 full-term neonates in the first week of life. After consent was obtained from their parents, the neonates had POOS readings taken directly (standard technique) and through the protective covering. Bland-Altman plots were used to compare the new method with the standard technique. A test of repeatability for each method was also performed. The Bland-Altman plots suggest that there is no significant loss of accuracy when readings are taken through the protective covering. The mean difference was 0.06 (SD 1.39) with Micropore and 0.04 (SD 1.3) with gauze, compared to the standard method. The mean difference was 0.22 (SD 0.23) on testing repeatability with the standard method. Interposing Micropore or gauze does not significantly affect the accuracy of the POOS reading. The difference between the standard method and the new method was less than the difference seen on testing repeatability of the standard method.
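For readers unfamiliar with the agreement analysis used here, the following is a minimal sketch of the Bland-Altman statistics (bias and 95% limits of agreement) for paired readings; the arrays are illustrative placeholders, not study data.

```python
import numpy as np

def bland_altman(new_method, standard):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods, as summarized in a Bland-Altman plot."""
    diff = np.asarray(new_method, float) - np.asarray(standard, float)
    bias = diff.mean()                     # systematic offset
    sd = diff.std(ddof=1)                  # SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SpO2 readings (%), through-tape vs direct probe
through_tape = [96, 97, 95, 98, 96, 99]
direct       = [96, 96, 95, 99, 97, 99]
print(bland_altman(through_tape, direct))
```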
Havlicek, Martin; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.
2015-01-01
Increasing interest in understanding dynamic interactions of brain neural networks has led to the formulation of sophisticated connectivity analysis methods. Recent studies have applied Granger causality based on standard multivariate autoregressive (MAR) modeling to assess brain connectivity. Nevertheless, one important flaw of this commonly proposed method is that it requires the analyzed time series to be stationary, whereas this assumption is usually violated owing to the weakly nonstationary nature of functional magnetic resonance imaging (fMRI) time series. Therefore, we propose an approach to dynamic Granger causality in the frequency domain for evaluating functional network connectivity in fMRI data. The effectiveness and robustness of the dynamic approach was significantly improved by combining a forward and backward Kalman filter that improved estimates compared to the standard time-invariant MAR modeling. In our method, the functional networks were first detected by independent component analysis (ICA), a computational method for separating a multivariate signal into maximally independent components. Then the measure of Granger causality was evaluated using generalized partial directed coherence, which is suitable for bivariate as well as multivariate data. Moreover, this metric provides identification of causal relations in the frequency domain, which allows one to distinguish the frequency components related to the experimental paradigm. The procedure of evaluating Granger causality via dynamic MAR was demonstrated on simulated time series as well as on two sets of group fMRI data collected during an auditory sensorimotor (SM) task or an auditory oddball discrimination (AOD) task. Finally, a comparison with the results obtained from a standard time-invariant MAR model was provided. PMID:20561919
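The time-invariant baseline against which the dynamic approach is compared can be illustrated with a simple bivariate Granger test: past values of x are judged to "Granger-cause" y if adding them to y's own autoregression significantly reduces the residual sum of squares. This sketch is a generic textbook version (time domain, fixed coefficients), not the Kalman-filter, frequency-domain method proposed in the paper.

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic testing whether p lags of x improve prediction of y beyond
    y's own p lags (standard time-invariant AR Granger test)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[p:]
    own  = np.column_stack([y[p - k : n - k] for k in range(1, p + 1)])
    both = np.column_stack([own] + [x[p - k : n - k] for k in range(1, p + 1)])
    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])   # add intercept
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    rss_restricted, rss_full = rss(own), rss(both)
    df1, df2 = p, len(Y) - 2 * p - 1
    return ((rss_restricted - rss_full) / df1) / (rss_full / df2)

# Toy example: y is driven by lagged x, so the F-statistic should be large
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
print(granger_f(x, y, p=2))
```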
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and for their 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
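As a flavor of the changepoint idea that won out, the sketch below computes a posterior over single changepoint locations in a zero-mean signal, assuming Gaussian segments with plug-in variances and a uniform prior. It is a deliberately simplified stand-in for the Bayesian changepoint algorithms evaluated in the study (no p0 prior parameter, single changepoint only).

```python
import numpy as np

def changepoint_posterior(x, margin=5):
    """Posterior over single changepoint locations in a zero-mean signal,
    assuming independent Gaussian segments with plug-in variances and a
    uniform prior over admissible locations."""
    x = np.asarray(x, float)
    n = len(x)
    loglik = np.full(n, -np.inf)
    for t in range(margin, n - margin):
        v1 = x[:t].var() + 1e-12          # pre-onset segment variance
        v2 = x[t:].var() + 1e-12          # post-onset segment variance
        loglik[t] = -0.5 * (t * np.log(v1) + (n - t) * np.log(v2))
    post = np.exp(loglik - loglik.max())  # normalize in a numerically stable way
    return post / post.sum()

# Simulated EMG-like trace: quiet baseline, then a burst at sample 300
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 5, 200)])
posterior = changepoint_posterior(signal)
print(posterior.argmax())                 # onset estimate, near 300
```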
ESTABLISH AND STANDARDIZE METHODOLOGY FOR ...
Research is conducted to develop and standardize methods to detect and measure occurrence of human enteric viruses that cause waterborne disease. The viruses of concern include the emerging pathogens--hepatitis E virus and group B rotaviruses. Also of concern are the coxsackieviruses and echoviruses--two members of the Office of Water's Contaminant Candidate List (CCL). Under this task, indicators of fecal pollution are also being evaluated as to their importance in evaluating microbial water quality. Another focus of the research is to address the standardization, evaluation and promulgation of detection methods for bacterial viruses. Objectives: develop sensitive techniques to detect and identify emerging human waterborne pathogenic viruses and viruses on the CCL; determine the effectiveness of viral indicators to measure microbial quality in water matrices. Support activities: (a) culture and distribution of mammalian cells for Agency and scientific community research needs, (b) provide operator expertise for research requiring confocal and electron microscopy, (c) glassware cleaning, sterilization and biological waste disposal for the Cincinnati EPA facility, (d) operation of infectious pathogenic suite, (e) maintenance of walk-in constant temperature rooms and (f) provide Giardia cysts.
Peano, Clelia; Samson, Maria Cristina; Palmieri, Luisa; Gulli, Mariolina; Marmiroli, Nelson
2004-11-17
The presence of DNA in foodstuffs derived from or containing genetically modified organisms (GMO) is the basic requirement for labeling of GMO foods in Council Directive 2001/18/CE (Off. J. Eur. Communities 2001, L 106/2). In this work, four different methods for DNA extraction were evaluated and compared. To rank the different methods, the quality and quantity of DNA extracted from standards containing known percentages of GMO material, and from different food products, were considered. The food products analyzed derived from both soybean and maize and were chosen on the basis of the mechanical, technological, and chemical treatment they had been subjected to during processing. Degree of DNA degradation at various stages of food production was evaluated through the amplification of different DNA fragments belonging to the endogenous genes of both maize and soybean. Genomic DNA was extracted from Roundup Ready soybean and maize MON810 standard flours, according to four different methods, and quantified by real-time Polymerase Chain Reaction (PCR), with the aim of determining the influence of the extraction methods on the DNA quantification through real-time PCR.
Validity and reliability of a method for assessment of cervical vertebral maturation.
Zhao, Xiao-Guang; Lin, Jiuxiang; Jiang, Jiu-Hui; Wang, Qingzhu; Ng, Sut Hong
2012-03-01
To evaluate the validity and reliability of the cervical vertebral maturation (CVM) method with a longitudinal sample. Eighty-six cephalograms from 18 subjects (5 males and 13 females) were selected from the longitudinal database. Total mandibular length was measured on each film; its rate of increase served as the gold standard for examining the validity of the CVM method. Eleven orthodontists, after receiving intensive training in the CVM method, evaluated all films twice. Kendall's W and the weighted kappa statistic were employed. Kendall's W values were higher than 0.8 at both times, indicating strong interobserver reproducibility, but interobserver agreement was documented twice at less than 50%. A wide range of intraobserver agreement was noted (40.7%-79.1%), and substantial intraobserver reproducibility was demonstrated by kappa values (0.53-0.86). With regard to validity, moderate agreement was found between the gold standard and observer staging at the initial time (kappa values 0.44-0.61). However, agreement seemed to be unacceptable for clinical use, especially in cervical stage 3 (26.8%). Even though the validity and reliability of the CVM method proved statistically acceptable, we suggest that many other growth indicators should be taken into consideration in evaluating adolescent skeletal maturation.
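The weighted kappa used here credits near-miss stagings more than gross disagreements. Below is a minimal sketch for two raters assigning ordinal stages (0-indexed), with linear or quadratic weights; the rating arrays are illustrative, not study data.

```python
import numpy as np

def weighted_kappa(r1, r2, n_stages, weights="linear"):
    """Weighted kappa for two raters assigning ordinal stages 0..n_stages-1."""
    O = np.zeros((n_stages, n_stages))
    for a, b in zip(r1, r2):
        O[a, b] += 1                                 # observed joint frequencies
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))       # expected under independence
    i, j = np.indices((n_stages, n_stages))
    d = np.abs(i - j) / (n_stages - 1)
    W = d if weights == "linear" else d ** 2         # disagreement weights
    return 1 - (W * O).sum() / (W * E).sum()

# Two raters staging 8 films into CVM stages 1-6 (shifted to 0-5)
rater1 = [0, 1, 2, 2, 3, 4, 5, 5]
rater2 = [0, 1, 1, 2, 3, 5, 5, 4]
print(weighted_kappa(rater1, rater2, 6))
```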
Wu, Jing; Philip, Ana-Maria; Podkowinski, Dominika; Gerendas, Bianca S; Langs, Georg; Simader, Christian; Waldstein, Sebastian M; Schmidt-Erfurth, Ursula M
2016-01-01
Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited by the scarcity of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies, making them difficult to compare. Thus we present and evaluate a multiple expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used, which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprising 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows a good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.
Evaluating the risks of clinical research: direct comparative analysis.
Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David
2014-09-01
Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis and an illustrative example. Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.
Evaluating Metal Probe Meters for Soil Testing.
ERIC Educational Resources Information Center
Hershey, David R.
1992-01-01
Inexpensive metal probe meters that are sold by garden stores can be evaluated by students for their accuracy in measuring soil pH, moisture, fertility, and salinity. The author concludes that the meters are inaccurate and cannot be calibrated in standard units. However, the student evaluations are useful in learning the methods of soil analysis…
Development, standardization, and validation of analytical methods provide state-of-the-science techniques to evaluate the presence, or absence, of select PPCPs in biosolids. This research provides the approaches, methods, and tools to assess the exposures and redu...
Many PCR-based methods for microbial source tracking (MST) have been developed and validated within individual research laboratories. Inter-laboratory validation of these methods, however, has been minimal, and the effects of protocol standardization regimes have not been thor...
A CRITICAL EVALUATION OF A FLOW CYTOMETER USED FOR DETECTING ENTEROCOCCI IN RECREATIONAL WATERS
The current U. S. Environmental Protection Agency-approved method for enterococci (Method 1600) in recreational water is a membrane filter (MF) method that takes 24 hours to obtain results. If the recreational water is not in compliance with the standard, the risk of exposure to...
Analytical evaluation of current starch methods used in the international sugar industry: Part I
USDA-ARS?s Scientific Manuscript database
Several analytical starch methods currently exist in the international sugar industry that are used to prevent or mitigate starch-related processing challenges as well as assess the quality of traded end-products. These methods use simple iodometric chemistry, mostly potato starch standards, and uti...
Evaluating the Effects of Gamma-Irradiation for Decontamination of Medicinal Cannabis.
Hazekamp, Arno
2016-01-01
In several countries with a national medicinal cannabis program, pharmaceutical regulations specify that herbal cannabis products must adhere to strict safety standards regarding microbial contamination. Treatment by gamma irradiation currently seems the only method available to meet these requirements. We evaluated the effects of irradiation treatment of four different cannabis varieties covering different chemical compositions. Samples were compared before and after standard gamma-irradiation treatment by performing quantitative UPLC analysis of major cannabinoids, as well as qualitative GC analysis of full cannabinoid and terpene profiles. In addition, the water content and microscopic appearance of the cannabis flowers were evaluated. This study found that treatment did not cause changes in the content of THC and CBD, generally considered the most important therapeutically active components of medicinal cannabis. Likewise, the water content and the microscopic structure of the dried cannabis flowers were not altered by the standard irradiation protocol in the cannabis varieties studied. The effect of gamma-irradiation was limited to a reduction of some terpenes present in the cannabis, while keeping the terpene profile qualitatively the same. Based on the results presented in this report, gamma irradiation of herbal cannabis remains the recommended method of decontamination, at least until other more generally accepted methods have been developed and validated.
Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2014-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize event, MIR162. We first prepared a standard plasmid for MIR162 quantification. The conversion factor (Cf) required to calculate the genetically modified organism (GMO) amount was empirically determined for two real-time PCR instruments, the Applied Biosystems 7900HT (ABI7900) and the Applied Biosystems 7500 (ABI7500), for which the determined Cf values were 0.697 and 0.635, respectively. To validate the developed method, a blind test was carried out in an interlaboratory study. The trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDr), respectively. The determined biases were less than 25% and the RSDr values were less than 20% at all evaluated concentrations. These results suggested that the limit of quantitation of the method was 0.5%, and that the developed method would thus be suitable for practical analyses for the detection and quantification of MIR162.
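In event-specific GMO quantification of this kind, GM content is typically computed from the ratio of event-specific to endogenous-gene copy numbers, scaled by the instrument-specific conversion factor. The sketch below assumes that standard formula and uses made-up copy numbers, with the ABI7900 Cf value from the abstract.

```python
def gmo_percent(event_copies, endogenous_copies, cf):
    """GM content (%) from real-time PCR copy numbers: the event/endogenous
    copy-number ratio divided by the conversion factor Cf (the ratio expected
    for 100% GM material), times 100."""
    return (event_copies / endogenous_copies) / cf * 100.0

# Illustrative copy numbers with the ABI7900 Cf from the abstract (0.697):
# 35 MIR162 copies per 10,000 maize endogenous-gene copies
print(gmo_percent(35.0, 10000.0, 0.697))   # ~0.50%, at the reported LOQ
```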
Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina
2017-01-01
A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods could differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.
Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro
2009-06-01
A validation study of an in vitro skin irritation testing method using a reconstructed human skin model has been conducted by the European Centre for the Validation of Alternative Methods (ECVAM), and a protocol using EpiSkin (SkinEthic, France) has been approved. The structural and performance criteria of skin models for testing are defined in the ECVAM Performance Standards announced along with the approval. We have performed several evaluations of the new reconstructed human epidermal model LabCyte EPI-MODEL, and confirmed that it is applicable to skin irritation testing as defined in the ECVAM Performance Standards. We selected 19 materials (nine irritants and ten non-irritants) available in Japan as test chemicals among the 20 reference chemicals described in the ECVAM Performance Standards. A test chemical was applied to the surface of the LabCyte EPI-MODEL for 15 min, after which it was completely removed and the model then post-incubated for 42 hr. Cell viability was measured by MTT assay and the skin irritancy of the test chemical was evaluated. In addition, interleukin-1 alpha (IL-1alpha) concentration in the culture supernatant after post-incubation was measured to provide a complementary evaluation of skin irritation. Evaluation of the 19 test chemicals resulted in 79% accuracy, 78% sensitivity and 80% specificity, confirming that the in vitro skin irritancy of the LabCyte EPI-MODEL correlates highly with in vivo skin irritation. These results suggest that LabCyte EPI-MODEL is applicable to the skin irritation testing protocol set out in the ECVAM Performance Standards.
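The reported 79% accuracy, 78% sensitivity and 80% specificity follow directly from the 2×2 classification of the 19 chemicals (9 irritants, 10 non-irritants); a quick sketch, with the confusion-matrix counts inferred for illustration (7 irritants and 8 non-irritants classified correctly).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a 2x2 classification table."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)     # correctly identified irritants
    specificity = tn / (tn + fp)     # correctly identified non-irritants
    return accuracy, sensitivity, specificity

# 19 chemicals (9 irritants, 10 non-irritants); the reported 79%/78%/80%
# corresponds to 7 irritants and 8 non-irritants classified correctly
print(diagnostic_metrics(tp=7, fp=2, tn=8, fn=2))   # (0.789, 0.778, 0.8)
```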
Evaluation of new aquatic toxicity test methods for oil dispersants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, C.B.; Clark, J.R.; Bragin, G.E.
1994-12-31
Current aquatic toxicity test methods used for dispersant registration do not address real world exposure scenarios. Current test methods require 48 or 96 hour constant exposure conditions. In contrast, environmentally realistic exposures can be described as a pulse in which the initial concentration declines over time. Recent research using a specially designed testing apparatus (the California system) has demonstrated that exposure to Corexit 9527® under pulsed exposure conditions may be 3 to 22 times less toxic compared to continuous exposure scenarios. The objectives of this study were to compare results of toxicity tests using the California test system to results from standardized tests, evaluate sensitivity of regional (Holmesimysis costata and Atherinops affinis) vs. standard test species (Mysidopsis bahia and Menidia beryllina) and determine if tests using the California test system and method are reproducible. All tests were conducted using Corexit 9527® as the test material. Standard toxicity tests conducted with M. bahia and H. costata resulted in LC50s similar to those from tests using the California apparatus. LC50s from tests conducted in the authors' laboratory with the California system and standard test species were within a factor of 2 to 6 of data previously reported for west coast species. Results of tests conducted with H. costata in the laboratory compared favorably to data reported by Singer et al. 1991.
Veronese, Paola; Bogana, Gianna; Cerutti, Alessia; Yeo, Lami; Romero, Roberto; Gervasi, Maria Teresa
2016-01-01
Objective To evaluate the performance of Fetal Intelligent Navigation Echocardiography (FINE) applied to spatiotemporal image correlation (STIC) volume datasets of the normal fetal heart in generating standard fetal echocardiography views. Methods In this prospective cohort study of patients with normal fetal hearts (19-30 gestational weeks), one or more STIC volume datasets were obtained of the apical four-chamber view. Each STIC volume successfully obtained was evaluated by STICLoop™ to determine its appropriateness before applying the FINE method. Visualization rates for standard fetal echocardiography views using diagnostic planes and/or Virtual Intelligent Sonographer Assistance (VIS-Assistance®) were calculated. Results One or more STIC volumes (n=463 total) were obtained in 246 patients. A single STIC volume per patient was analyzed using the FINE method. In normal cases, FINE was able to generate nine fetal echocardiography views using: 1) diagnostic planes in 76-100% of cases; 2) VIS-Assistance® in 96-100% of cases; and 3) a combination of diagnostic planes and/or VIS-Assistance® in 96-100% of cases. Conclusion FINE applied to STIC volumes can successfully generate nine standard fetal echocardiography views in 96-100% of cases in the second and third trimesters. This suggests that the technology can be used as a method to screen for congenital heart disease. PMID:27309391
[Automated procedure for volumetric measurement of metastases: estimation of tumor burden].
Fabel, M; Bolte, H
2008-09-01
Cancer is a common disease and its incidence is increasing worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation of the manual diameter estimations. Volumetric analysis of, e.g., lung, liver and lymph node metastases promises to be a more accurate, precise and objective method for tumor burden estimation.
Evaluation of a standard test method for screening fuels in soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, S.S.; Schabron, J.F.
1996-12-31
A new screening method for fuel contamination in soils was recently developed as American Society for Testing and Materials (ASTM) Method D-5831-95, Standard Test Method for Screening Fuels in Soils. This method uses low-toxicity chemicals and can be used to screen organic-rich soils, as well as being fast, easy, and inexpensive to perform. Fuels containing aromatic compounds, such as diesel fuel and gasoline, as well as other aromatic-containing hydrocarbon materials, such as motor oil, crude oil, and coal oil, can be determined. The screening method for fuels in soils was evaluated by conducting a collaborative study on the method. In the collaborative study, a sand and an organic soil spiked with various concentrations of diesel fuel were tested. Data from the collaborative study were used to determine the reproducibility (between participants) and repeatability (within participants) precision of the method for screening the test materials. The collaborative study data also provide information on the performance of portable field equipment (patent pending) versus laboratory equipment for performing the screening method and a comparison of diesel concentration values determined using the screening method versus a laboratory method.
Evaluation of methods for measuring particulate matter emissions from gas turbines.
Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David
2011-04-15
The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operation procedures for particulate matter measurement in the aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass, while the remaining 30% were attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by at most 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online and filter-based methods was found, which is attributed to sampling effects. CPC-based instruments proved highly reproducible for number concentration measurements, with a maximum interinstrument standard deviation of 7.5%.
Hailu, Tadesse; Abera, Bayeh
2015-07-01
The parasite load within the sample and the amount of sample taken during examination greatly compromise the sensitivity of direct saline stool microscopy. A cross-sectional study was conducted in March 2011 in Bahir Dar city among 778 fresh single stool samples to evaluate the performance of the direct saline (DS), Kato Katz (KK) and formol ether concentration (FEC) methods against the 'gold' standard. Among 778 stool samples from school age children, the highest prevalence of intestinal parasites was recorded by FEC (55.1%). The sensitivities of DS, FEC and KK were 61.1%, 92.3% and 58.7%, respectively. FEC is more sensitive than DS and KK; hence, its use is preferred. © The Author(s) 2015.
Screening and Evaluation Tool (SET) Users Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincock, Layne
This document is the user's guide for the Screening and Evaluation Tool (SET). SET is a tool for comparing multiple fuel cycle options against a common set of criteria and metrics. It does this using standard multi-attribute utility decision analysis methods.
Berlinger, Balazs; Harper, Martin
2018-02-01
There is interest in the bioaccessible metal components of aerosols, but this has been minimally studied because standardized sampling and analytical methods have not yet been developed. An interlaboratory study (ILS) has been carried out to evaluate a method for determining the water-soluble component of realistic welding fume (WF) air samples. Replicate samples were generated in the laboratory and distributed to participating laboratories to be analyzed according to a standardized procedure. Within-laboratory precision of replicate sample analysis (repeatability) was very good. Reproducibility between laboratories was not as good, but within limits of acceptability for the analysis of typical aerosol samples. These results can be used to support the development of a standardized test method.
Comprehensive analysis of translational osteochondral repair: Focus on the histological assessment.
Orth, Patrick; Peifer, Carolin; Goebel, Lars; Cucchiarini, Magali; Madry, Henning
2015-10-01
Articular cartilage guarantees the optimal functioning of diarthrodial joints by providing a gliding surface for smooth articulation, weight distribution, and shock absorption, while the subchondral bone plays a crucial role in its biomechanical and nutritive support. Both tissues together form the osteochondral unit. The structural assessment of the osteochondral unit is now considered the key standard procedure for evaluating articular cartilage repair in translational animal models. The aim of this review is to give a detailed overview of the different methods for a comprehensive evaluation of osteochondral repair. The main focus is on the histological assessment as the gold standard, together with immunohistochemistry and polarized light microscopy. Additionally, standards of macroscopic, non-destructive imaging such as high resolution MRI and micro-CT, and of biochemical and molecular biological evaluations are addressed. Potential pitfalls of analysis are outlined. A second focus is to suggest recommendations for osteochondral evaluation. Copyright © 2015 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Grova, C.; Jannin, P.; Biraben, A.; Buvat, I.; Benali, H.; Bernard, A. M.; Scarabin, J. M.; Gibaud, B.
2003-12-01
Quantitative evaluation of brain MRI/SPECT fusion methods for normal and in particular pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists in generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration method accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. Inter-hemispheric asymmetry coefficients measured on simulated data were found within the range of asymmetry coefficients measured on corresponding real data. The features of the proposed approach are compared with those of other methods previously described to obtain datasets appropriate for the assessment of fusion methods.
Cimetiere, Nicolas; Soutrel, Isabelle; Lemasle, Marguerite; Laplanche, Alain; Crocq, André
2013-01-01
The study of the occurrence and fate of pharmaceutical compounds in drinking or waste water processes has become very popular in recent years. Liquid chromatography with tandem mass spectrometry is a powerful analytical tool often used to determine pharmaceutical residues at trace level in water. However, many steps may disrupt the analytical procedure and bias the results. A list of 27 environmentally relevant molecules, including various therapeutic classes and (cardiovascular, veterinary and human antibiotics, neuroleptics, non-steroidal anti-inflammatory drugs, hormones and other miscellaneous pharmaceutical compounds), was selected. In this work, a method was developed using ultra performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) and solid-phase extraction to determine the concentration of the 27 targeted pharmaceutical compounds at the nanogram per litre level. The matrix effect was evaluated from water sampled at different treatment stages. Conventional methods with external calibration and internal standard correction were compared with the standard addition method (SAM). An accurate determination of pharmaceutical compounds in drinking water was obtained by the SAM associated with UPLC-MS/MS. The developed method was used to evaluate the occurrence and fate of pharmaceutical compounds in some drinking water treatment plants in the west of France.
Emerson, Jane F; Emerson, Scott S
2005-01-01
A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.
Study Methods to Standardize Thermography NDE
NASA Technical Reports Server (NTRS)
Walker, James L.; Workman, Gary L.
1998-01-01
The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include various graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.
Grahn, Anna; Bråve, Andreas; Tolfvenstam, Thomas; Studahl, Marie
2018-06-01
Nosocomial transmission of Lassa virus (LASV) is reported to be low when care for the index patient includes proper barrier nursing methods. We investigated whether asymptomatic LASV infection occurred in healthcare workers who used standard barrier nursing methods during the first 15 days of caring for a patient with Lassa fever in Sweden. Of 76 persons who were defined as having been potentially exposed to LASV, 53 provided blood samples for detection of LASV IgG. These persons also responded to a detailed questionnaire to evaluate exposure to different body fluids from the index patient. LASV-specific IgG was not detected in any of the 53 persons. Five of 53 persons had not been using proper barrier nursing methods. Our results strengthen the argument for a low risk of secondary transmission of LASV in humans when standard barrier nursing methods are used and the patient has only mild symptoms.
Duff, Kevin
2012-01-01
Repeated assessments are a relatively common occurrence in clinical neuropsychology. The current paper will review some of the relevant concepts (e.g., reliability, practice effects, alternate forms) and methods (e.g., reliable change index, standardized regression-based change) that are used in repeated neuropsychological evaluations. The focus will be on the understanding and application of these concepts and methods in the evaluation of the individual patient through examples. Finally, some future directions for assessing change will be described. PMID:22382384
NASA Astrophysics Data System (ADS)
Lewis, C. H.; Griffin, M. J.
1998-08-01
There are three current standards that might be used to assess the vibration and shock transmitted by a vehicle seat with respect to possible effects on human health: ISO 2631/1 (1985), BS 6841 (1987) and ISO 2631-1 (1997). Evaluations have been performed on the seat accelerations measured in nine different transport environments (bus, car, mobile crane, fork-lift truck, tank, ambulance, power boat, inflatable boat, mountain bike) in conditions that might be considered severe. For each environment, limiting daily exposure durations were estimated by comparing the frequency weighted root mean square (i.e., r.m.s.) accelerations and the vibration dose values (i.e., VDV), calculated according to each standard, with the relevant exposure limits, action level and health guidance caution zones. Very different estimates of the limiting daily exposure duration can be obtained using the methods described in the three standards. Differences were observed due to variations in the shapes of the frequency weightings, the phase responses of the frequency weighting filters, the method of combining multi-axis vibration, the averaging method, and the assessment method. With the evaluated motions, differences in the shapes of the weighting filters result in up to about 31% difference in r.m.s. acceleration between the “old” and the “new” ISO standard and up to about 14% difference between BS 6841 and the “new” ISO 2631. There were correspondingly greater differences in the estimates of safe daily exposure durations. With three of the more severe motions there was a difference of more than 250% between estimated safe daily exposure durations based on r.m.s. acceleration and those based on fourth power vibration dose values. The vibration dose values provided the more cautious assessments of the limiting daily exposure duration.
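The two evaluation quantities at issue can be stated compactly: r.m.s. acceleration is (mean of a²)^(1/2), while the vibration dose value is VDV = (∫a⁴ dt)^(1/4), so a VDV-based criterion weights rare high peaks far more heavily. A minimal sketch for a sampled, already frequency-weighted acceleration signal; the signal and sampling rate are illustrative.

```python
import numpy as np

def rms_acceleration(a):
    """Root-mean-square of a frequency-weighted acceleration signal (m/s^2)."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(np.mean(a ** 2))

def vibration_dose_value(a, fs):
    """Fourth-power vibration dose value (m/s^1.75): (integral of a^4 dt)^(1/4)."""
    a = np.asarray(a, dtype=float)
    return (np.sum(a ** 4) / fs) ** 0.25

# Under an equal-energy r.m.s. rule, allowable exposure time scales as a**-2;
# under the VDV rule it scales as a**-4, which is one reason the standards
# disagree most strongly for the severest motions.
fs = 512.0                                   # Hz, illustrative sampling rate
t = np.arange(0, 60, 1 / fs)
a = 0.8 * np.sin(2 * np.pi * 4 * t)          # toy 4 Hz weighted acceleration
print(rms_acceleration(a), vibration_dose_value(a, fs))
```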
Comparative hazard evaluation of near-infrared diode lasers.
Marshall, W J
1994-05-01
Hazard evaluation methods from various laser protection standards differ when applied to extended-source, near-infrared lasers. By way of example, various hazard analyses are applied to laser training systems, which incorporate diode lasers, specifically those that assist in training military or law enforcement personnel in the proper use of weapons by simulating actual firing by the substitution of a beam of near-infrared energy for bullets. A correct hazard evaluation of these lasers is necessary since simulators are designed to be directed toward personnel during normal use. The differences among laser standards are most apparent when determining the hazard class of a laser. Hazard classification is based on a comparison of the potential exposures with the maximum permissible exposures in the 1986 and 1993 versions of the American National Standard for the Safe Use of Lasers, Z136.1, and the accessible emission limits of the federal laser product performance standard. Necessary safety design features of a particular system depend on the hazard class. The ANSI Z136.1-1993 standard provides a simpler and more accurate hazard assessment of low-power, near-infrared, diode laser systems than the 1986 ANSI standard. Although a specific system is evaluated, the techniques described can be readily applied to other near-infrared lasers or laser training systems.
An ecological method to understand agricultural standardization in peach orchard ecosystems
Wan, Nian-Feng; Zhang, Ming-Yi; Jiang, Jie-Xian; Ji, Xiang-Yun; Hao-Zhang
2016-01-01
While the worldwide standardization of agricultural production has been advocated and recommended, relatively little research has focused on the ecological significance of such a shift. The ecological concerns stemming from the standardization of agricultural production may require new methodology. In this study, we concentrated on how ecological two-sidedness and ecological processes affect the standardization of agricultural production, which was divided into three phases (pre-, mid- and post-production), considering both the positive and negative effects of agricultural processes. We constructed evaluation indicator systems for the pre-, mid- and post-production phases and here we presented a Standardization of Green Production Index (SGPI) based on the Full Permutation Polygon Synthetic Indicator (FPPSI) method which we used to assess the superiority of three methods of standardized production for peaches. The values of SGPI for pre-, mid- and post-production were 0.121 (Level IV, “Excellent” standard), 0.379 (Level III, “Good” standard), and 0.769 × 10−2 (Level IV, “Excellent” standard), respectively. Here we aimed to explore the integrated application of ecological two-sidedness and ecological process in agricultural production. Our results are of use to decision-makers and ecologists focusing on eco-agriculture and those farmers who hope to implement standardized agricultural production practices. PMID:26899360
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1969-07-01
The Fifth International Conference on Nondestructive Testing was held in Montreal, Canada, for the purpose of promoting international collaboration in all matters related to the development and use of nondestructive test methods. A total of 82 papers were selected for presentation. Session titles included: evaluation of material quality; ultrasonics - identification and measurements; thermal methods; testing of welds; visual aids in nondestructive testing; measurements of stress and elastic properties; magnetic and eddy-current methods; surface methods and neutron radiography; standardization - general; ultrasonics at elevated temperatures; applications; x-ray techniques; radiography; ultrasonic standardization; training and qualification; and, correlation of weld defects.
A Tool for Estimating Variability in Wood Preservative Treatment Retention
Patricia K. Lebow; Adam M. Taylor; Timothy M. Young
2015-01-01
Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...
ERIC Educational Resources Information Center
Porter, Susan G.; Koch, Steven P.; Henderson, Andrew
2010-01-01
Background: There is a lack of consistent, comprehensible data collection and analysis methods for evaluating a teacher preparation program's coverage of required standards for accreditation. Of particular concern is adequate coverage of the standards and competencies that address the teaching of English learners and teachers of students from…
42 CFR Appendix A to Part 75 - Standards for Accreditation of Educational Programs for Radiographers
Code of Federal Regulations, 2010 CFR
2010-10-01
Appendix A to 42 CFR Part 75 (Public Health Service, Department of Health...) sets standards for accreditation of educational programs for radiographers. The prescribed curriculum includes, among other topics: ... film evaluation; (k) methods of patient care; (l) pathology; (m) radiologic physics; and (n) radiation...
A Standardized Mean Difference Effect Size for Single Case Designs
ERIC Educational Resources Information Center
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.
2012-01-01
Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on…
ERIC Educational Resources Information Center
Lanier, Paul; Kohl, Patrica L.; Benz, Joan; Swinger, Dawn; Moussette, Pam; Drake, Brett
2011-01-01
Objectives: The purpose of this study was to evaluate Parent-Child Interaction Therapy (PCIT) deployed in a community setting comparing in-home with the standard office-based intervention. Child behavior, parent stress, parent functioning, and attrition were examined. Methods: Using a quasi-experimental design, standardized measures at three time…
Seeking a Valid Gold Standard for an Innovative, Dialect-Neutral Language Test
ERIC Educational Resources Information Center
Pearson, Barbara Zurer; Jackson, Janice E.; Wu, Haotian
2014-01-01
Purpose: In this study, the authors explored alternative gold standards to validate an innovative, dialect-neutral language assessment. Method: Participants were 78 African American children, ages 5;0 (years;months) to 6;11. Twenty participants had previously been identified as having language impairment. The Diagnostic Evaluation of Language…
USDA-ARS?s Scientific Manuscript database
The soybean cyst nematode (SCN), Heterodera glycines Ichinohe, is distributed throughout the soybean (Glycine max [L.] Merr.) production areas of the United States and Canada. SCN remains the most economically important pathogen of soybean in North America; the most recent estimate of soybean yield...
La Barbera, Luigi; Galbusera, Fabio; Wilke, Hans-Joachim; Villa, Tomaso
2016-09-01
To discuss whether the available standard methods for preclinical evaluation of posterior spine stabilization devices can represent basic everyday life activities, and how to compare the results obtained with different procedures. A comparative finite element study compared the ASTM F1717 and ISO 12189 standards to validated instrumented L2-L4 segments undergoing standing, upper body flexion and extension. The internal loads on the spinal rod and the maximum stress on the implant are analysed. The ISO-recommended anterior support stiffness and force allow for reproducing bending moments measured in vivo on an instrumented physiological segment during upper body flexion. Despite the significance of the ASTM model from an engineering point of view, the overly conservative vertebrectomy model represents an unrealistic worst case scenario. A method is proposed to determine the load to apply on assemblies with different anterior support stiffnesses to guarantee a comparable bending moment and reproduce specific everyday life activities. The study increases our awareness of the use of the current standards to achieve meaningful results that are easy to compare and interpret.
Phillips, Melissa M; Bedner, Mary; Reitz, Manuela; Burdette, Carolyn Q; Nelson, Michael A; Yen, James H; Sander, Lane C; Rimmer, Catherine A
2017-02-01
Two independent analytical approaches, based on liquid chromatography with absorbance detection and liquid chromatography with mass spectrometric detection, have been developed for determination of isoflavones in soy materials. These two methods yield comparable results for a variety of soy-based foods and dietary supplements. Four Standard Reference Materials (SRMs) have been produced by the National Institute of Standards and Technology to assist the food and dietary supplement community in method validation and have been assigned values for isoflavone content using both methods. These SRMs include SRM 3234 Soy Flour, SRM 3236 Soy Protein Isolate, SRM 3237 Soy Protein Concentrate, and SRM 3238 Soy-Containing Solid Oral Dosage Form. A fifth material, SRM 3235 Soy Milk, was evaluated using the methods and found to be inhomogeneous for isoflavones and unsuitable for value assignment. Graphical Abstract: Separation of six isoflavone aglycones and glycosides found in Standard Reference Material (SRM) 3236 Soy Protein Isolate.
Li, Dan; Jiang, Jia; Han, Dandan; Yu, Xinyu; Wang, Kun; Zang, Shuang; Lu, Dayong; Yu, Aimin; Zhang, Ziwei
2016-04-05
A new method is proposed for measuring antioxidant capacity by electron spin resonance (ESR) spectroscopy, based on the loss of the ESR signal after Cu²⁺ is reduced to Cu⁺ by the antioxidant. Cu⁺ was removed by precipitation in the presence of SCN⁻. The remaining Cu²⁺ was coordinated with diethyldithiocarbamate, extracted into n-butanol, and determined by ESR spectrometry. Eight standards widely used in antioxidant capacity determination, including Trolox, ascorbic acid, ferulic acid, rutin, caffeic acid, quercetin, chlorogenic acid, and gallic acid, were investigated. Standard curves for the eight compounds were plotted, and all linear regression correlation coefficients exceeded 0.99. Trolox equivalent antioxidant capacity (TEAC) values for the antioxidant standards were calculated, and a good correlation (r > 0.94) was observed between the values obtained by the present method and by the cupric reducing antioxidant capacity method. The present method was applied to the analysis of real fruit samples and the evaluation of their antioxidant capacity.
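To make the calibration arithmetic concrete, here is a minimal sketch (not from the paper; all concentrations and responses are invented) of fitting standard curves by least squares and expressing an antioxidant's response relative to Trolox:

```python
import numpy as np

def calibration_slope(conc, response):
    """Least-squares slope and Pearson r for a standard curve."""
    slope, _intercept = np.polyfit(conc, response, 1)
    r = np.corrcoef(conc, response)[0, 1]
    return slope, r

# Hypothetical ESR signal-loss responses versus concentration (a.u.)
trolox_conc = np.array([10, 20, 40, 80, 160])       # micromol/L
trolox_resp = np.array([0.12, 0.25, 0.49, 0.97, 1.94])
sample_conc = np.array([10, 20, 40, 80, 160])
sample_resp = np.array([0.15, 0.31, 0.60, 1.22, 2.41])

s_trolox, r_trolox = calibration_slope(trolox_conc, trolox_resp)
s_sample, r_sample = calibration_slope(sample_conc, sample_resp)

# TEAC: molar response of the antioxidant relative to Trolox
teac = s_sample / s_trolox
print(f"r(Trolox)={r_trolox:.3f}, r(sample)={r_sample:.3f}, TEAC={teac:.2f}")
```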
Zhang, Shuai; Li, PeiPei; Yan, Zhongyong; Long, Ju; Zhang, Xiaojun
2017-03-01
An ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry method was developed and validated for the determination of nitrofurazone metabolites. Precolumn derivatization with 2,4-dinitrophenylhydrazine and p-dimethylaminobenzaldehyde as an internal standard was used successfully to determine the biomarker 5-nitro-2-furaldehyde. In negative electrospray ionization mode, the precise molecular weights of the derivatives were 320.0372 for the biomarker and 328.1060 for the internal standard (relative error 1.08 ppm). The matrix effect was evaluated and the analytical characteristics of the method and derivatization reaction conditions were validated. For comparison purposes, spiked samples were tested by both internal and external standard methods. The results show high precision can be obtained with p-dimethylaminobenzaldehyde as an internal standard for the identification and quantification of nitrofurazone metabolites in complex biological samples. Graphical Abstract: A simplified preparation strategy for biological samples.
Comparison of methods for quantitative evaluation of endoscopic distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua
2015-03-01
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high-resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as in patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and with the commercial software, whose results were used to obtain corrected images, and the corrected images from the two approaches were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality assurance, and assurance of performance during clinical use of endoscopic technologies.
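As a rough illustration of a local-magnification analysis (the published ML algorithm is not detailed in the abstract; the function and all values below are assumptions), one can estimate a radial distortion profile from detected grid crossings:

```python
import numpy as np

def radial_distortion(points_px, grid_mm, center_px):
    """Estimate a radial distortion profile from detected grid crossings.

    points_px : (N, 2) grid-crossing pixel coordinates
    grid_mm   : true spacing of the square grid target (mm)
    center_px : (2,) assumed optical-axis position in the image (pixels)
    Returns (radius, distortion_percent) sorted by radius.
    """
    pts = np.asarray(points_px, float)
    # Local pixel spacing: distance from each point to its nearest neighbour
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spacing_px = d.min(axis=1)
    ml = spacing_px / grid_mm                # local magnification (px/mm)
    r = np.linalg.norm(pts - center_px, axis=1)
    ml0 = ml[np.argmin(r)]                   # magnification nearest the center
    distortion = 100.0 * (ml / ml0 - 1.0)    # barrel: negative off-axis
    order = np.argsort(r)
    return r[order], distortion[order]

# Toy 3x3 grid with mild barrel distortion (pixel coordinates, invented)
xx, yy = np.meshgrid([-1, 0, 1], [-1, 0, 1])
pts = 100.0 * np.stack([xx.ravel(), yy.ravel()], axis=1)
pts = pts * (1 - 1e-6 * (pts ** 2).sum(axis=1))[:, None] + 250.0
print(radial_distortion(pts, grid_mm=5.0, center_px=(250.0, 250.0)))
```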
Van De Steene, Jet C; Lambert, Willy E
2008-05-01
When developing an LC-MS/MS method, matrix effects are a major issue. The effect of co-eluting compounds arising from the matrix can result in signal enhancement or suppression, so during method development much attention should be paid to diminishing matrix effects as far as possible. The present work evaluates matrix effects from aqueous environmental samples in the simultaneous analysis of a group of 9 specific pharmaceuticals with HPLC-ESI/MS/MS and UPLC-ESI/MS/MS: flubendazole, propiconazole, pipamperone, cinnarizine, ketoconazole, miconazole, rabeprazole, itraconazole and domperidone. When HPLC-MS/MS is used, matrix effects are substantial and cannot be compensated for with analogue internal standards. Different surface water samples show different matrix effects, so accurate quantification requires the standard addition approach. Owing to the better resolution and narrower peaks in UPLC, analytes co-elute less with interferences during ionisation, so matrix effects could be lower or even eliminated. If matrix effects are eliminated with this technique, the standard addition method for quantification can be omitted and the overall method is simplified. Results show that matrix effects are almost eliminated if internal standards (structural analogues) are used. Instead of the time-consuming and labour-intensive standard addition method, with UPLC internal standardization can be used for quantification and the overall method is substantially simplified.
Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth
2018-03-01
To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety, and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text information (CMR-video/n = 49) or standard text information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the CMR-standard group. Anxiety was evaluated before the procedure, immediately after it, and 1 week later. Five questionnaires were used: the Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression Scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers blinded to the information given. Data were collected between April 2015 and April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the CMR-standard group on the factor Relaxation (p = .039) but not on the factor Anxiety. Anxiety levels were lower during scintigraphic examinations than in the CMR-standard group (p < .001). No difference was found in motion artefacts between CMR-video and CMR-standard. Patients' ability to relax during cardiovascular magnetic resonance imaging increased when video information was added before the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion artefacts. Video information prior to examinations can be an easy and time-effective way to help patients cooperate in imaging procedures. © 2017 John Wiley & Sons Ltd.
Standardized plant disease evaluations will enhance resistance gene discovery
USDA-ARS?s Scientific Manuscript database
Gene discovery and marker development using DNA-based tools require plant populations with well documented phenotypes. If dissimilar phenotype evaluation methods or data scoring techniques are employed with different crops, or at different labs for the same crops, then data mining for genetic marker...
[An integrated model for examination of aphasic patients and evaluation of treatment results].
Ansink, B J; Vanneste, J A; Endtz, L J
1980-02-01
This article reviews the literature on integrated, multidisciplinary examination of aphasic patients, its consequences for treatment, and the evaluation of treatment results; the need for standardized methods of investigation for each language is stressed.
MULTI-SITE FIELD EVALUATION OF CANDIDATE SAMPLERS FOR MEASURING COARSE-MODE PM
In response to expected changes to the National Ambient Air Quality Standards for particulate matter, comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring coarse mode aerosols (i.e. PMc). Five separate PMc sampling approaches w...
Imaging evaluation of non-alcoholic fatty liver disease: focused on quantification.
Lee, Dong Ho
2017-12-01
Non-alcoholic fatty liver disease (NAFLD) has emerged as a major health problem and is the most common cause of chronic liver disease in Western countries. Traditionally, liver biopsy has been the gold standard method for quantification of hepatic steatosis. However, its invasive nature, potential complications, and measurement variability are major problems. Thus, various imaging studies have been used for evaluation of hepatic steatosis. Ultrasonography provides fairly good accuracy in detecting moderate-to-severe hepatic steatosis but limited accuracy for mild steatosis; operator dependency and the subjective/qualitative nature of the examination are other major drawbacks. Computed tomography is unsuitable for evaluation of NAFLD because of the potential risk of radiation exposure and its limited accuracy in detecting mild steatosis. Both magnetic resonance spectroscopy and magnetic resonance imaging using the chemical shift technique provide highly accurate and reproducible diagnostic performance for evaluating NAFLD and have therefore been used in many clinical trials as a non-invasive reference standard method.
Targeted neonatal echocardiography services: need for standardized training and quality assurance.
Finan, Emer; Sehgal, Arvind; Khuffash, Afif El; McNamara, Patrick J
2014-10-01
Targeted neonatal echocardiography refers to a focused assessment of myocardial performance and hemodynamics directed by a specific clinical question. It has become the standard of care in many parts of the world, but practice is variable, and there has been a lack of standardized training and evaluation to date. Targeted neonatal echocardiography was first introduced to Canada in 2006. The purpose of this study was to examine the characteristics of targeted neonatal echocardiography practice and training methods in Canadian neonatal intensive care units (NICUs). A total of 142 Canadian neonatologists were invited to participate in an online survey, which was conducted in September 2010. The survey consisted of questions related to the availability of targeted neonatal echocardiography, clinical indications, benefits and risks, and training methods. The overall survey response rate was 65%. Forty-eight respondents (34%) indicated that targeted neonatal echocardiography was available in their units and that the program had been introduced within the preceding 1 to 5 years. In centers where it was unavailable, lack of on-site echocardiography expertise was cited as the major barrier to implementation. The most common indications for targeted neonatal echocardiography included evaluation of a hemodynamically significant ductus arteriosus, systemic or pulmonary blood flow, and response to cardiovascular treatments. Only 27% of respondents working in centers where targeted neonatal echocardiography existed actually performed the studies themselves; most of these individuals completed 11 to 20 studies per month. Almost half of the respondents said that training was available in their institutions, but methods of training and evaluation were inconsistent. Eighty-seven percent of respondents reported no formalized process for assessment of ongoing competency after the initial training period. Targeted neonatal echocardiography is becoming more widely available and is gaining acceptance in Canadian NICUs. Although training is provided in many institutions, the process is not well established, and formal evaluation is rarely performed. This study emphasizes the need for development of standards for formalized training, evaluation, and quality assurance. © 2014 by the American Institute of Ultrasound in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.
Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.
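The abstract does not give the fit function, but the spirit of varying the convolution parameters and fitting can be sketched as variance extrapolation under a Gaussian-pulse assumption: measure at several gate widths, fit a line, and read the intrinsic temporal width off the intercept. All numbers below are invented.

```python
import numpy as np

# Assuming near-Gaussian pulses, the measured temporal variance is the
# intrinsic beam variance plus the chopper gate variance:
#     sigma_m^2 = sigma_i^2 + sigma_g^2
# Varying the gate variance (via slit width / rotation frequency) and
# fitting a straight line yields sigma_i^2 as the intercept, without
# relying on a single hard-to-measure slit width or beam diameter.
sigma_g2 = np.array([4.0, 9.0, 16.0, 25.0])    # gate variances (us^2), varied
sigma_m2 = np.array([6.1, 11.0, 17.9, 27.2])   # measured variances (us^2)

slope, sigma_i2 = np.polyfit(sigma_g2, sigma_m2, 1)
fwhm_i = 2.3548 * np.sqrt(sigma_i2)            # intrinsic FWHM (us)

# For a fixed flight path the relative time spread approximates the
# relative velocity spread, so the speed ratio scales as t_mean / fwhm_i.
print(f"slope={slope:.2f}, intrinsic FWHM={fwhm_i:.2f} us")
```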
Development of a qualification standard for adhesives used in hybrid microcircuits
NASA Technical Reports Server (NTRS)
Licari, J. J.; Weigand, B. L.; Soykin, C. A.
1981-01-01
Improved qualification standards and test procedures for adhesives used in microelectronic packaging are developed. The test methods in the specification for the selection and use of organic adhesives in hybrid microcircuits are re-evaluated against industry and government requirements. Four electrically insulative and four electrically conductive adhesives used in the assembly of hybrid microcircuits are selected to evaluate the proposed revised test methods. An estimate of the cost of qualification testing of an adhesive to the requirements of the revised specification is also prepared.
NASA Astrophysics Data System (ADS)
Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar
2013-07-01
The aim of this work is to use the mean absolute deviation (MAD) method in the evaluation of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to the advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
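A minimal sketch of the MAD statistic itself (the paper's exact figure of merit for S-boxes is not reproduced here; the per-output scores below are hypothetical):

```python
import numpy as np

def mean_absolute_deviation(x):
    """MAD about the mean: average absolute distance from the sample mean."""
    x = np.asarray(x, float)
    return np.mean(np.abs(x - x.mean()))

# Hypothetical per-output scores for two S-boxes; a lower MAD indicates a
# more uniform (and in this sense more desirable) behaviour across outputs.
sbox_a = [112, 112, 110, 112, 112, 110, 112, 112]
sbox_b = [106, 98, 104, 108, 102, 96, 100, 104]
print(mean_absolute_deviation(sbox_a), mean_absolute_deviation(sbox_b))
```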
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
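A minimal sketch of such a bootstrap-based probabilistic sensitivity analysis, assuming invented patient-level costs and effects rather than the authors' H. pylori model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level costs ($) and effects for two strategies; in
# the paper these inputs would come from the decision-analytic model.
cost_a, eff_a = rng.normal(900, 200, 150), rng.normal(0.70, 0.10, 150)
cost_b, eff_b = rng.normal(1200, 250, 150), rng.normal(0.78, 0.10, 150)

def icer_bootstrap(n_rep=5000):
    """Bootstrap the incremental cost-effectiveness ratio (ICER):
    resample patients with replacement instead of assuming theoretical
    distributions for the model parameters."""
    n = len(cost_a)
    icers = np.empty(n_rep)
    for i in range(n_rep):
        ia = rng.integers(0, n, n)      # paired resample, strategy A
        ib = rng.integers(0, n, n)      # paired resample, strategy B
        d_cost = cost_b[ib].mean() - cost_a[ia].mean()
        d_eff = eff_b[ib].mean() - eff_a[ia].mean()
        icers[i] = d_cost / d_eff
    return np.percentile(icers, [2.5, 50.0, 97.5])

print(icer_bootstrap())  # median ICER with a 95% bootstrap interval
```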
Measurement properties of gingival biotype evaluation methods.
Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga
2018-06-01
There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method over another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating gingival thickness were used: periodontal probe transparency, transgingival probing, photography, and the new CTS method. Intrarater and interrater reliability for the categorical classification of gingival biotype were estimated with Cohen's kappa coefficient, the intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients underwent assessment of gingival thickness. The highest agreement was found between the transgingival and CTS methods (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for identifying a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful for diagnosing a thick biotype. © 2018 Wiley Periodicals, Inc.
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses, and an improved method of evaluating it is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods, using a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point (ICP) registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" ICP technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, the quantitative evaluation of absolute marginal discrepancy was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations, based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and that of crowns fabricated using the digital technique was 110 ±14.3 μm. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). A digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
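As a simplified illustration (the paper measures 2D cross-sectional straight-line distances along the long axis; this sketch reduces that to nearest-point distances between registered margin curves, with invented coordinates):

```python
import numpy as np

def absolute_marginal_discrepancy(margin_crown, margin_prep):
    """Mean nearest-point distance between two registered margin curves.

    margin_crown, margin_prep : (N, 2) or (N, 3) point coordinates taken
    from the crown and the preparation after virtual seating (e.g. ICP).
    """
    a = np.asarray(margin_crown, float)
    b = np.asarray(margin_prep, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Toy margin curves in mm; a realistic pipeline would extract thousands
# of points from the scanned meshes.
crown = np.array([[0.0, 0.00], [1.0, 0.10], [2.0, 0.05]])
prep = np.array([[0.0, 0.10], [1.0, 0.00], [2.0, 0.00]])
print(absolute_marginal_discrepancy(crown, prep))
```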
ERIC Educational Resources Information Center
Kraetzer, Mary C.; And Others
The rationale, strategies, and methods of The Mercy College Self-Study Project are considered, and evaluation instruments are provided. This program of institutional evaluation and planning was initiated in 1980 and consists of: standardized surveys, a 10-year longitudinal (panel) study, and academic department self-studies. Questionnaires…
Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T
1982-01-01
A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.
ERIC Educational Resources Information Center
Ramchandani, Dilip
2011-01-01
Background/Objective: The author analyzed and compared various assessment methods for assessment of medical students; these methods included clinical assessment and the standardized National Board of Medical Education (NBME) subject examination. Method: Students were evaluated on their 6-week clerkship in psychiatry by both their clinical…
The current U. S. Environmental Protection Agency-approved method for Enterococci (Method 1600) in recreational water is a membrane filter (MF) method that takes 24 hours to obtain results. If the recreational water is not in compliance with the standard, the risk of exposure to...
Test methods for optical disk media characteristics (for 356 mm ruggedized magneto-optic media)
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1991-01-01
Standard test methods for computer storage media characteristics are essential and allow for conformance to media interchange standards. The test methods were developed for 356 mm two-sided laminated glass substrate media with a magneto-optic active layer. These test methods may be used for testing other media types, but in each case their applicability must be evaluated. Test methods are included for a series of different media characteristics, including operational, nonoperational, and storage environments; mechanical and physical characteristics; and substrate, recording layer, and preformat characteristics. Tests for environmental qualification and media lifetimes are also included. The test methods include testing conditions, testing procedures, a description of the testing setup, and the required calibration procedures.
Bronas, Ulf G; Hirsch, Alan T; Murphy, Timothy; Badenhop, Dalynn; Collins, Tracie C; Ehrman, Jonathan K; Ershow, Abby G; Lewis, Beth; Treat-Jacobson, Diane J; Walsh, M Eileen; Oldenburg, Niki; Regensteiner, Judith G
2009-11-01
The CLaudication: Exercise Vs Endoluminal Revascularization (CLEVER) study is the first randomized, controlled, clinical, multicenter trial that is evaluating a supervised exercise program compared with revascularization procedures to treat claudication. In this report, the methods and dissemination techniques of the supervised exercise training intervention are described. A total of 217 participants are being recruited and randomized to one of three arms: (1) optimal medical care; (2) aortoiliac revascularization with stent; or (3) supervised exercise training. Of the enrolled patients, 84 will receive supervised exercise therapy. Supervised exercise will be administered according to a protocol designed by a central CLEVER exercise training committee based on validated methods previously used in single center randomized control trials. The protocol will be implemented at each site by an exercise committee member using training methods developed and standardized by the exercise training committee. The exercise training committee reviews progress and compliance with the protocol of each participant weekly. In conclusion, a multicenter approach to disseminate the supervised exercise training technique and to evaluate its efficacy, safety and cost-effectiveness for patients with claudication due to peripheral arterial disease (PAD) is being evaluated for the first time in CLEVER. The CLEVER study will further establish the role of supervised exercise training in the treatment of claudication resulting from PAD and provide standardized methods for use of supervised exercise training in future PAD clinical trials as well as in clinical practice.
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
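For illustration, a minimal IDW estimator of the kind described (the monitor coordinates and concentrations below are invented; the study used SAS and R's gstat rather than this code):

```python
import numpy as np

def idw(xy_monitors, values, xy_targets, power=2.0):
    """Inverse-distance-weighted estimate of a pollutant at target points.

    xy_monitors : (M, 2) monitor coordinates
    values      : (M,) daily concentrations at the monitors
    xy_targets  : (T, 2) simulated residential locations
    """
    d = np.linalg.norm(xy_targets[:, None, :] - xy_monitors[None, :, :],
                       axis=-1)
    d = np.maximum(d, 1e-9)          # guard: target on top of a monitor
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: 4 monitors and 2 homes (coordinates in km, PM10 in ug/m3)
monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pm10 = np.array([40.0, 55.0, 35.0, 60.0])
homes = np.array([[2.0, 3.0], [8.0, 7.0]])
print(idw(monitors, pm10, homes))
```

Unlike IDW, ordinary Kriging additionally models the spatial covariance of the monitors, which is what yields the prediction standard errors the authors cite as the reason to prefer it.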
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step by step procedure also is provided to identify the relevant acceptance standard or procedure on a case by case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part through flaw depth is defined as a function of load and flaw length. For non ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
Film-based delivery quality assurance for robotic radiosurgery: Commissioning and validation.
Blanck, Oliver; Masi, Laura; Damme, Marie-Christin; Hildebrandt, Guido; Dunst, Jürgen; Siebert, Frank-Andre; Poppinga, Daniela; Poppe, Björn
2015-07-01
Robotic radiosurgery demands comprehensive delivery quality assurance (DQA), but guidelines for commissioning of the DQA method are missing. We investigated the stability and sensitivity of our film-based DQA method with various test scenarios and routine patient plans. We also investigated the applicability of tight distance-to-agreement (DTA) Gamma-Index criteria. We used radiochromic films with multichannel film dosimetry and re-calibration, and our analysis was performed in four steps: 1) film-to-plan registration; 2) standard Gamma-Index criteria evaluation (local-pixel-dose-difference ≤2%, distance-to-agreement ≤2 mm, pass-rate ≥90%); 3) dose distribution shift until the maximum pass-rate (Maxγ) was found (shift acceptance <1 mm); and 4) final evaluation with tight DTA criteria (≤1 mm). Test scenarios consisted of purposefully introduced phantom misalignments, dose miscalibrations, and undelivered MU. Initial method evaluation was done on 30 clinical plans. Our method showed sensitivity similar to the standard End-2-End test and incorporated an estimate of global system offsets in the analysis. The simulated errors (phantom shifts, global robot misalignment, undelivered MU) were detected by our method, while standard Gamma-Index criteria often did not reveal these deviations. Dose miscalibration was not detected by film alone, hence simultaneous ion-chamber measurement for film calibration is strongly recommended. 83% of the clinical patient plans were within our tight DTA tolerances. The presented methods provide additional measurements and quality references for film-based DQA, enabling more sensitive error detection. We provide various test scenarios for commissioning of robotic radiosurgery DQA and demonstrate the necessity of using tight DTA criteria. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
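A brute-force sketch of the Gamma-Index evaluation step with the stated criteria (2% local dose difference, 2 mm DTA, 90% pass rate); this is an illustration of the metric, not the commercial or in-house software's algorithm:

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, pixel_mm, dd=0.02, dta_mm=2.0,
                    cutoff=0.10):
    """Brute-force 2D Gamma-Index with a local dose-difference criterion.

    dose_eval, dose_ref : film and plan dose on the same pixel grid
    dd      : local dose-difference criterion (fraction, 0.02 = 2%)
    dta_mm  : distance-to-agreement criterion in mm
    cutoff  : skip reference pixels below this fraction of the maximum
    Returns the pass rate in percent (gamma <= 1).
    """
    ny, nx = dose_ref.shape
    search = int(np.ceil(2 * dta_mm / pixel_mm))   # search radius in pixels
    mask = dose_ref >= cutoff * dose_ref.max()
    gammas = []
    for iy, ix in zip(*np.nonzero(mask)):
        best = np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                jy, jx = iy + dy, ix + dx
                if not (0 <= jy < ny and 0 <= jx < nx):
                    continue
                dist2 = ((dy**2 + dx**2) * pixel_mm**2) / dta_mm**2
                ddiff2 = ((dose_eval[jy, jx] - dose_ref[iy, ix])
                          / (dd * dose_ref[iy, ix])) ** 2
                best = min(best, dist2 + ddiff2)
        gammas.append(np.sqrt(best))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# Toy check: a Gaussian dose blob with 1% multiplicative noise should pass
rng = np.random.default_rng(0)
plan = 2.0 * np.exp(-((np.indices((60, 60)) - 30.0) ** 2).sum(0) / 300.0)
film = plan * (1 + 0.01 * rng.standard_normal(plan.shape))
print(gamma_pass_rate(film, plan, pixel_mm=0.5))
```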
Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia
2018-01-01
Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental for quality control. The purpose of this study was to evaluate whether the zinc (Zn) and copper (Cu) contents of mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H₂SO₄ are statistically equivalent to those from the standard method, hot-plate digestion in concentrated HCl. The Zn and Cu sources commercially marketed in Brazil consisted of oxide, carbonate, and sulfate fertilizers and of by-products such as galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. As determined with the concentrated HCl method, the Zn and Cu contents of these sources ranged from 15 to 82% and 10 to 45%, respectively (Table 1). A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. The 10% HCl extraction was equivalent to the standard method for Zn, and both the USEPA 3051a and 10% HCl methods were equivalent to it for Cu. Therefore, these methods can be considered viable alternatives to the standard method for determining Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
Neuhaus, Philipp; Doods, Justin; Dugas, Martin
2015-01-01
Automatic coding of medical terms is an important but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server; it accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (owing to standardized interfaces such as HTTP and JSON), performant, and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web API is feasible, and the framework can be easily enhanced due to its modular design.
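Calling such a framework might look like the following; the endpoint URL and parameter names here are hypothetical, since the abstract specifies only HTTP transport and JSON results:

```python
import requests  # third-party: pip install requests

# Hypothetical request to the mapping framework; "term" and "strategy"
# are assumed parameter names, not documented ones.
resp = requests.get(
    "http://localhost:8080/mapper",
    params={"term": "myocardial infarction", "strategy": "similarity"},
    timeout=10,
)
resp.raise_for_status()
for candidate in resp.json():   # assumed: a JSON list of UMLS candidates
    print(candidate)
```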
2017-01-01
Chemical standardization, along with morphological and DNA analysis, ensures the authenticity and advances the integrity evaluation of botanical preparations. Achievement of a more comprehensive, metabolomic standardization requires simultaneous quantitation of multiple marker compounds. Employing quantitative ¹H NMR (qHNMR), this study determined the total isoflavone content (TIfCo; 34.5–36.5% w/w) via multimarker standardization and assessed the stability of a 10-year-old isoflavone-enriched red clover extract (RCE). Eleven markers (nine isoflavones, two flavonols) were targeted simultaneously, and outcomes were compared with LC-based standardization. Two advanced quantitative measures in qHNMR were applied to derive quantities from complex and/or overlapping resonances: a quantum mechanical (QM) method (QM-qHNMR) that employs ¹H iterative full spin analysis, and a non-QM method that uses linear peak fitting algorithms (PF-qHNMR). A 10 min UHPLC-UV method provided auxiliary orthogonal quantitation. This is the first systematic evaluation of QM and non-QM deconvolution as qHNMR quantitation measures. It demonstrates that QM-qHNMR can account successfully for the complexity of ¹H NMR spectra of individual analytes and how QM-qHNMR can be built for mixtures such as botanical extracts. The contents of the main bioactive markers were in good agreement with earlier HPLC-UV results, demonstrating the chemical stability of the RCE. QM-qHNMR advances chemical standardization by its inherent QM accuracy and the use of universal calibrants, avoiding the impractical need for identical reference materials. PMID:28067513
The Logic of Summative Confidence
ERIC Educational Resources Information Center
Gugiu, P. Cristian
2007-01-01
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Findings of the District English Program Evaluation Committee.
ERIC Educational Resources Information Center
Price, A. Rae; And Others
In summer 1990, the English Department of the Metropolitan Community College District (MCCD) in Missouri conducted a self-study to determine whether the English program's subject matter, academic standards, and methods of instruction were consistent with objectives. An Evaluation Committee, consisting of three English instructors representing each…
EVALUATION OF FADROZOLE AS AN ENDOCRINE DISRUPTOR IN FATHEAD MINNOW (PIMEPHALES PROMELAS)
The EPA has received a legislative mandate to develop and implement standardized screening and testing methods to identify and assess potential endocrine disrupting chemicals (EDCs). The objective of this research was to evaluate a short-term EDC screening/testing assay which ass...
Heinänen, M; Barbas, C
2001-03-01
A method is described for the separation of ambroxol, trans-4-(2-amino-3,5-dibromobenzylamino)cyclohexanol hydrochloride, and benzoic acid in a syrup pharmaceutical presentation by HPLC with UV detection at 247 nm. Optimal conditions were: Symmetry Shield RPC8 column, 5 μm, 250 x 4.6 mm, and methanol/(H₃PO₄ 8.5 mM/triethylamine, pH 2.8) 40:60 v/v. Validation was performed using standards and the pharmaceutical preparation containing the compounds described above; results from both showed suitable validation parameters. The pharmaceutical-grade substances were exposed to factors that could affect their chemical stability, and the resulting reaction mixtures were analysed to evaluate the capability of the method to separate degradation products. Degradation products did not interfere with the determination of the substances tested by the assay.
Noble, Simon; Pease, Nikki; Sui, Jessica; Davies, James; Lewis, Sarah; Malik, Usman; Alikhan, Raza; Prout, Hayley; Nelson, Annmarie
2016-01-01
Objectives Cancer-associated thrombosis (CAT) is a complex condition, which may present to any healthcare professional and at any point during the cancer journey. As such, patients may be managed by a number of specialties, resulting in inconsistent practice and suboptimal care. We describe the development of a dedicated CAT service and its evaluation. Setting Specialist cancer centre, district general hospital and primary care. Participants Patients with CAT and their referring clinicians. Intervention A cross-specialty team developed a dedicated CAT service, including clear referral pathways, consistent access to medicines, patient information and a specialist clinic. Primary and secondary outcome measures The service was evaluated using mixed methods, including audits of clinical practice, clinical outcomes, staff surveys and qualitative interviews with patients and healthcare professionals. Results Data from 457 consecutive referrals over an 18-month period were evaluated. The CAT service has led to an 88% increase in safe and consistent community prescribing of low-molecular-weight heparin, with improved access to specialist advice and information. Patients reported improved understanding of their condition, enabling better self-management, as well as better access to support and information. Referring clinicians reported better care standards for their patients, with improved access to expertise and appropriate management. Conclusions A dedicated CAT service improves overall standards of care and is viewed positively by patients and clinicians alike. Further health economic evaluation would strengthen the case for establishing this as the standard model of care. PMID:27895068
Veterinary and human vaccine evaluation methods
Knight-Jones, T. J. D.; Edmond, K.; Gubbins, S.; Paton, D. J.
2014-01-01
Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines. PMID:24741009
Day, Sara W.; McKeon, Leslie M.; Garcia, Jose; Wilimas, Judith A.; Carty, Rita M.; de Alarcon, Pedro; Antillon, Federico; Howard, Scott C.
2017-01-01
Background Inadequate nursing care is a major impediment to development of effective programs for treatment of childhood cancer in low-income countries. When the International Outreach Program at St. Jude Children’s Research Hospital established partner sites in low-income countries, few nurses had pediatric oncology skills or experience. A comprehensive nursing program was developed to promote the provision of quality nursing care, and in this manuscript we describe the program’s impact on 20 selected Joint Commission International (JCI) quality standards at the National Pediatric Oncology Unit in Guatemala. We utilized JCI standards to focus the nursing evaluation and implementation of improvements. These standards were developed to assess public hospitals in low-income countries and are recognized as the gold standard of international quality evaluation. Methods We compared the number of JCI standards met before and after the nursing program was implemented using direct observation of nursing care; review of medical records, policies, procedures, and job descriptions; and interviews with staff. Results In 2006, only 1 of the 20 standards was met fully, 2 partially, and 17 not met. In 2009, 16 were met fully, 1 partially, and 3 not met. Several factors contributed to the improvement. The pre-program quality evaluation provided objective and credible findings and an organizational framework for implementing change. The medical, administrative, and nursing staff worked together to improve nursing standards. Conclusion A systematic approach and involvement of all hospital disciplines led to significant improvement in nursing care that was reflected by fully meeting 16 of 20 standards. PMID:23015363
Measuring lip force by oral screens. Part 1: Importance of screen size and individual variability.
Wertsén, Madeleine; Stenberg, Manne
2017-06-01
To reduce drooling and facilitate food transport in the rehabilitation of patients with oral motor dysfunction, lip force can be trained using an oral screen. Longitudinal studies evaluating the effect of training require objective methods. The aims of this study were to evaluate a method for measuring lip strength; to investigate normal values and the fluctuation of lip force in healthy adults on one occasion and over time; to study how the size of the screen affects the force; to evaluate the most appropriate measure of reliability; and to identify the force performed in relation to gender. Three different sizes of oral screens were used to measure the lip force of 24 healthy adults on 3 different occasions over a period of 6 months, using an apparatus based on strain gauges. The maximum lip force evaluated with this method depends on the screen area. By calculating the projected area of the screen, the lip force can be normalized to an oral screen pressure expressed in kPa, which can be used to compare measurements from screens of different sizes. Both the mean value and the standard deviation varied between individuals. The study showed no differences regarding gender and only small variation with age. Normal variation over time (months) may be up to 3 times greater than the standard error of measurement on a given occasion. The lip force increases in relation to the projected area of the screen. No general standard deviation can be assigned to the method, and all measurements should be analyzed individually on the basis of oral screen pressure to compensate for different screen sizes.
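The normalization itself is a one-line pressure calculation, P = F/A; a sketch with invented numbers:

```python
def screen_pressure_kpa(force_n, projected_area_mm2):
    """Normalize lip force to an oral-screen pressure so measurements
    taken with different screen sizes can be compared: P = F / A.
    1 N/mm^2 equals 1 MPa, i.e. 1000 kPa."""
    return force_n / projected_area_mm2 * 1000.0

# e.g. 10 N on a screen with a 2000 mm^2 projected area -> 5.0 kPa
print(screen_pressure_kpa(10.0, 2000.0))
```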
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
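A minimal sketch of a single-changepoint posterior in the spirit of the methods evaluated here, using plug-in Gaussian likelihoods and a flat prior over the change location (a simplification of published Bayesian changepoint methods; the toy signal and threshold logic are invented):

```python
import numpy as np

def onset_posterior(x, min_seg=10):
    """Approximate posterior over a single change location in a signal.

    For each candidate split t, score a two-segment Gaussian model with
    plug-in (maximum-likelihood) variances and a flat prior on t; the
    softmax of the scores approximates the posterior. EMG onset shows up
    as a jump in signal variance when the muscle activates.
    """
    x = np.asarray(x, float)
    n = len(x)
    ts = np.arange(min_seg, n - min_seg)
    ll = np.empty(len(ts))
    for i, t in enumerate(ts):
        v1 = x[:t].var() + 1e-12
        v2 = x[t:].var() + 1e-12
        ll[i] = -0.5 * (t * np.log(v1) + (n - t) * np.log(v2))
    post = np.exp(ll - ll.max())
    return ts, post / post.sum()

# Toy EMG: 200 samples of baseline noise, then a higher-variance burst
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0, 0.05, 200), rng.normal(0, 0.5, 150)])
ts, post = onset_posterior(sig)
# Declaring onset only when enough posterior mass concentrates near the
# peak is analogous in spirit to the study's 60-90% posterior thresholds.
print(ts[np.argmax(post)])
```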
Kapoor, Rupa; Avendaño, Leslie; Sandoval, Maria Antonieta; Cruz, Andrea T; Sampayo, Esther M; Soto, Miguel A; Camp, Elizabeth A; Crouse, Heather L
2017-01-01
Background: Few data exist for referral processes in resource-limited settings. We utilized mixed-methods to evaluate the impact of a standardized algorithm and training module developed for locally identified needs in referral/counter-referral procedures between primary health centers (PHCs) and a Guatemalan referral hospital. Methods: PHC personnel and hospital physicians participated in surveys and focus groups pre-implementation and 3, 6, and 12 months post-implementation to evaluate providers' experience with the system. Referred patient records were reviewed to evaluate system effectiveness. Results: A total of 111 initial focus group participants included 96 (86.5%) from PHCs and 15 from the hospital. Of these participants, 53 PHC physicians and nurses and 15 hospital physicians initially completed written surveys. Convenience samples participated in follow-up. Eighteen focus groups achieved thematic saturation. Four themes emerged: effective communication; provision of timely, quality patient care with adequate resources; educational opportunities; and development of empowerment and relationships. Pre- and post-implementation surveys demonstrated significant improvement at the PHCs (P < .001) and the hospital (P = .02). Chart review included 435 referrals, 98 (22.5%) pre-implementation and 337 (77.5%) post-implementation. There was a trend toward an increased percentage of appropriately referred patients requiring medical intervention (30% vs 40%, P = .08) and of patients requiring intervention who received it prior to transport (55% vs 73%, P = .06). Conclusions: Standardizing a referral/counter-referral system improved communication, education, and trust across different levels of pediatric health care delivery. This model may be used for extension throughout Guatemala or be modified for use in other countries. Mixed-methods research design can evaluate complex systems in resource-limited settings.
Chow, Clara K.; Corsi, Daniel J.; Lock, Karen; Madhavan, Manisha; Mackie, Pam; Li, Wei; Yi, Sun; Wang, Yang; Swaminathan, Sumathi; Lopez-Jaramillo, Patricio; Gomez-Arbelaez, Diego; Avezum, Álvaro; Lear, Scott A.; Dagenais, Gilles; Teo, Koon; McKee, Martin; Yusuf, Salim
2014-01-01
Background Previous research has shown that environments with features that encourage walking are associated with increased physical activity. Existing methods to assess the built environment using geographical information systems (GIS) data, direct audit or large surveys of the residents face constraints, such as data availability and comparability, when used to study communities in countries in diverse parts of the world. The aim of this study was to develop a method to evaluate features of the built environment of communities using a standard set of photos. In this report we describe the method of photo collection, photo analysis instrument development and inter-rater reliability of the instrument. Methods/Principal Findings A minimum of 5 photos were taken per community in 86 communities in 5 countries according to a standard set of instructions from a designated central point of each community by researchers at each site. A standard pro forma derived from reviewing existing instruments to assess the built environment was developed and used to score the characteristics of each community. Photo sets from each community were assessed independently by three observers in the central research office according to the pro forma and the inter-rater reliability was compared by intra-class correlation (ICC). Overall 87% (53 of 60) items had an ICC of ≥0.70, 7% (4 of 60) had an ICC between 0.60 and 0.70 and 5% (3 of 60) items had an ICC ≤0.50. Conclusions/Significance Analysis of photos using a standardized protocol as described in this study offers a means to obtain reliable and reproducible information on the built environment in communities in very diverse locations around the world. The collection of the photographic data required minimal training and the analysis demonstrated high reliability for the majority of items of interest. PMID:25369366
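For reference, the ICC(2,1) form (two-way random effects, absolute agreement, single rater) commonly used for such inter-rater designs can be computed directly; whether the study used exactly this variant is not stated in the abstract, and the scores below are invented:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings : (n_subjects, k_raters) array of scores.
    """
    y = np.asarray(ratings, float)
    n, k = y.shape
    grand = y.mean()
    row_m = y.mean(axis=1)      # per-community means
    col_m = y.mean(axis=0)      # per-observer means
    ss_rows = k * ((row_m - grand) ** 2).sum()
    ss_cols = n * ((col_m - grand) ** 2).sum()
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three observers scoring one item across five communities (toy data)
scores = np.array([[3, 3, 4], [2, 2, 2], [4, 5, 4], [1, 1, 2], [3, 3, 3]])
print(round(icc2_1(scores), 2))   # ~0.87, i.e. above the 0.70 threshold
```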
Software Architecture Evaluation in Global Software Development Projects
NASA Astrophysics Data System (ADS)
Salger, Frank
Due to ever-increasing system complexity, comprehensive methods for software architecture evaluation are becoming more and more important. This is further stressed in global software development (GSD), where the software architecture acts as a central knowledge and coordination mechanism. However, existing methods for architecture evaluation do not take the characteristics of GSD into account. In this paper we discuss which aspects are specific to architecture evaluations in GSD. Our experiences from GSD projects at Capgemini sd&m indicate that such evaluations differ in how rigorously one has to assess modularization, architecturally relevant processes, knowledge transfer and process alignment. From our project experiences, we derive nine good practices, compliance with which should be checked in architecture evaluations in GSD. As an example, we discuss how far the standard architecture evaluation method used at Capgemini sd&m already considers the GSD-specific good practices, and outline what extensions are necessary to achieve a comprehensive architecture evaluation framework for GSD.
Kim, Ha-Young; Shin, Sang-Wan
2014-01-01
PURPOSE The aim of this review was to analyze the evaluation criteria on mandibular implant overdentures through a systematic review and suggest standardized evaluation criteria. MATERIALS AND METHODS A systematic literature search was conducted by a PubMed search strategy and hand-searching of relevant journals from included studies considering inclusion and exclusion criteria. Randomized clinical trials (RCT) and clinical trial studies comparing attachment systems on mandibular implant overdentures until December, 2011 were selected. Twenty-nine studies were finally selected and the data about evaluation methods were collected. RESULTS Evaluation criteria could be classified into 4 groups (implant survival, peri-implant tissue evaluation, prosthetic evaluation, and patient satisfaction). Among the 29 studies, 21 presented an implant survival rate, but none of the studies reporting implant failure presented a cumulative implant survival rate. Seventeen studies evaluating peri-implant tissue status presented the following items as evaluation criteria: marginal bone level (14), plaque index (13), probing depth (8), bleeding index (8), attached gingiva level (8), gingival index (6), and amount of keratinized gingiva (1). Eighteen studies evaluating prosthetic maintenance and complications also presented the following items as evaluation criteria: loose matrix (17), female detachment (15), denture fracture (15), denture relining (14), abutment fracture (14), abutment screw loosening (11), and occlusal adjustment (9). Atypical questionnaires (9), visual analog scales (VAS) (4), and the Oral Health Impact Profile (OHIP) (1) were used as formats of criteria to evaluate patient satisfaction in 14 studies. CONCLUSION For evaluation of implant overdentures, it is necessary to include the cumulative survival rate for implant evaluation. It is suggested that peri-implant tissue evaluation criteria include marginal bone level, plaque index, bleeding index, probing depth, and attached gingiva level. It is also suggested that prosthetic evaluation criteria include loose matrix, female detachment, denture fracture, denture relining, abutment fracture, abutment screw loosening, and occlusal adjustment. Finally, standardized criteria like the OHIP-EDENT or VAS are required for evaluating patient satisfaction. PMID:25352954
Evaluation of reference evapotranspiration methods in arid, semiarid, and humid regions
Fei Gao; Gary Feng; Ying Ouyang; Huixiao Wang; Daniel Fisher; Ardeshir Adeli; Johnie Jenkins
2017-01-01
It is often necessary to find a simpler method in different climatic regions to calculate reference crop evapotranspiration (ETo) since the application of the FAO-56 Penman-Monteith method is often restricted due to the unavailability of a comprehensive weather dataset. Seven ETo methods, namely the standard FAO-56 Penman-Monteith, the FAO-24 Radiation, FAO-24 Blaney...
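Since the FAO-56 Penman-Monteith equation is the standard the other methods are judged against, a self-contained sketch of its daily form may help. The example inputs (25 °C, 15 MJ m-2 day-1 net radiation, 2 m s-1 wind, 50% relative humidity, sea level) are hypothetical, and a full implementation would compute saturation vapor pressure from Tmax/Tmin rather than the daily mean used here.

```python
import math

def fao56_penman_monteith(t_mean, rn, g, u2, rh_mean, altitude=0.0):
    """Daily reference evapotranspiration ETo (mm/day) from the FAO-56
    Penman-Monteith equation, using simplified daily-mean inputs."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # kPa
    ea = es * rh_mean / 100.0                                  # kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # kPa/degC
    p = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26  # kPa
    gamma = 0.000665 * p                                       # kPa/degC
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Example: 25 degC, Rn = 15 MJ m-2 day-1, G = 0, 2 m/s wind, 50% RH.
print(round(fao56_penman_monteith(25.0, 15.0, 0.0, 2.0, 50.0), 2))
```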
Van Herpe, Tom; De Brabanter, Jos; Beullens, Martine; De Moor, Bart; Van den Berghe, Greet
2008-01-01
Introduction Blood glucose (BG) control performed by intensive care unit (ICU) nurses is becoming standard practice for critically ill patients. New (semi-automated) 'BG control' algorithms (or 'insulin titration' algorithms) are under development, but these require stringent validation before they can replace the currently used algorithms. Existing methods for objectively comparing different insulin titration algorithms show weaknesses. In the current study, a new approach for appropriately assessing the adequacy of different algorithms is proposed. Methods Two ICU patient populations (with different baseline characteristics) were studied, both treated with a similar 'nurse-driven' insulin titration algorithm targeting BG levels of 80 to 110 mg/dl. A new method for objectively evaluating BG deviations from normoglycemia was founded on a smooth penalty function. Next, the performance of this new evaluation tool was compared with the current standard assessment methods, on an individual as well as a population basis. Finally, the impact of four selected parameters (the average BG sampling frequency, the duration of algorithm application, the severity of disease, and the type of illness) on the performance of an insulin titration algorithm was determined by multiple regression analysis. Results The glycemic penalty index (GPI) was proposed as a tool for assessing the overall glycemic control behavior in ICU patients. The GPI of a patient is the average of all penalties that are individually assigned to each measured BG value based on the optimized smooth penalty function. The computation of this index returns a number between 0 (no penalty) and 100 (the highest penalty). For some patients, the assessment of the BG control behavior using the traditional standard evaluation methods was different from the evaluation with GPI. Two parameters were found to have a significant impact on GPI: the BG sampling frequency and the duration of algorithm application. A higher BG sampling frequency and a longer algorithm application duration resulted in an apparently better performance, as indicated by a lower GPI. Conclusion The GPI is an alternative method for evaluating the performance of BG control algorithms. The blood glucose sampling frequency and the duration of algorithm application should be similar when comparing algorithms. PMID:18302732
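The GPI idea (penalize each BG reading with a smooth, asymmetric function, then average) can be sketched as follows. The published penalty function was optimized by the authors and is not reproduced here, so the quadratic shape, the scale constants, and the sample BG values below are illustrative assumptions only.

```python
import numpy as np

def penalty(bg, lo=80.0, hi=110.0):
    """Illustrative smooth penalty: 0 inside the 80-110 mg/dl target band,
    rising quadratically outside, capped at 100. The published GPI uses an
    optimized smooth function; this stand-in only mimics its shape."""
    bg = np.asarray(bg, dtype=float)
    below = np.clip(lo - bg, 0.0, None)
    above = np.clip(bg - hi, 0.0, None)
    # Steeper rise for hypoglycemia than hyperglycemia (clinically worse).
    return np.minimum(100.0, (below / 3.0) ** 2 + (above / 9.0) ** 2)

def gpi(bg_values):
    """Glycemic penalty index: mean penalty over all BG measurements,
    between 0 (no penalty) and 100 (highest penalty)."""
    return float(np.mean(penalty(bg_values)))

print(round(gpi([65, 90, 105, 140, 180, 220]), 1))
```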
HEASD PM RESEARCH METHODS: PARTICLE METHODS EVALUATION AND DEVELOPMENT
The FRM developed by NERL forms the backbone of the EPA's national monitoring strategy. It is the measurement that defines attainment of the new standard. However, the agency has numerous other needs in assessing the physical and chemical characteristics of ambient fine particl...
EVALUATION OF BIOSOLID SAMPLE PROCESSING TECHNIQUES TO MAXIMIZE RECOVERY OF BACTERIA
Current federal regulations (40 CFR 503) require enumeration of fecal coliform or Salmonella prior to land application of Class A biosolids. This regulation specifies use of enumeration methods included in "Standard Methods for the Examination of Water and Wastewater 18th Edition,...
Kurashiki, T
1996-11-01
For resolving the discrepancy of concentrations found among anesthetic gas monitors, the author proposed a new method using a vaporizer as a standard anesthetic gas generator for calibration. In this method, the carrier gas volume is measured by a mass flow meter (SEF-510 + FI-101) installed before the inlet of the vaporizer. The vaporized weight of volatile anesthetic agent is simultaneously measured by an electronic force balance (E12000S), on which the vaporizer is placed directly. The molar percent of the anesthetic is calculated using these data and is transformed into the volume percent. These gases discharging from the vaporizer are utilized for calibrating anesthetic gas monitors. These monitors are normalized by the linear equation describing the relationship between concentrations of calibration gases and readings of the anesthetic gas monitors. By using normalized monitors, flow rate-concentration performance curves of several anesthetic vaporizers were obtained. The author concludes that this method can serve as a standard in evaluating anesthetic vaporizers.
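The core arithmetic (vaporized mass to moles, carrier flow to moles, mole fraction to volume percent) can be sketched as follows. The molecular weight, molar volume, and flow values are assumptions for illustration, since the abstract does not list the constants used.

```python
def anesthetic_vol_percent(carrier_l_per_min, agent_g_per_min,
                           agent_mw=184.5, molar_volume=24.1):
    """Volume percent of anesthetic leaving the vaporizer, from the carrier
    gas flow (mass-flow meter) and the agent's weight-loss rate (balance).
    Defaults assume isoflurane (MW 184.5 g/mol) and an ideal-gas molar
    volume of 24.1 L/mol near room temperature; adjust to conditions."""
    n_agent = agent_g_per_min / agent_mw          # mol/min of agent vapor
    n_carrier = carrier_l_per_min / molar_volume  # mol/min of carrier gas
    return 100.0 * n_agent / (n_agent + n_carrier)

# 2 L/min carrier flow evaporating 0.30 g/min of agent -> about 1.9 vol%.
print(round(anesthetic_vol_percent(2.0, 0.30), 2))
```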
Hemmer, Paul A.; Grau, Thomas; Pangaro, Louis N.
2001-10-01
This study examined the predictive validity of in-clerkship evaluation methods to identify medical students who have insufficient knowledge. Study subjects were 124 third-year medical students at the Uniformed Services University. Insufficient knowledge was defined by: (1) a clerkship 'pre-test' score one standard deviation below the mean or lower; or (2) any teacher verbally rating a student's general knowledge as 'marginal' or less; or (3) a student did not pass Step One of the United States Medical Licensing Examination (USMLE). We determined sensitivity and specificity using a standard score of ≤300 on the end-of-clerkship National Board of Medical Examiners (NBME) subject examination in medicine as the outcome variable. Sixteen students scored ≤300 on the NBME examination. The sensitivity of the 'pre-test' or verbal comments alone was 44% (seven of 16 students). By combining methods, 11 students were identified, for a sensitivity of 69%. The specificity of all methods was > 90%. Using USMLE Step One pass-fail performance did not improve sensitivity. Combining a 'pre-test' and instructors' formal evaluation session comments improves the early identification of students with insufficient knowledge, allowing for formative feedback and remediation during the clerkship.
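As a worked check of the reported figures, the snippet below recomputes sensitivity and specificity from screening counts. The true-negative and false-positive counts are placeholders consistent with 124 students and a specificity above 90%, not values taken from the paper.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from screening counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Combined in-clerkship methods flagged 11 of the 16 low scorers.
# tn/fp below are hypothetical placeholders over the 108 non-low scorers.
sens, spec = sens_spec(tp=11, fn=5, tn=100, fp=8)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")
```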
On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images
NASA Astrophysics Data System (ADS)
Eid, Ahmed; Farag, Aly
2005-12-01
The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
Evaluation of extraction methods for ochratoxin A detection in cocoa beans employing HPLC.
Mishra, Rupesh K; Catanante, Gaëlle; Hayat, Akhtar; Marty, Jean-Louis
2016-01-01
Cocoa is an important ingredient for the chocolate industry and for many food products. However, it is prone to contamination by ochratoxin A (OTA), which is highly toxic and potentially carcinogenic to humans. In this work, four different extraction methods were tested and compared based on their recoveries. The best protocol was established, involving an organic solvent-free extraction method for the detection of OTA in cocoa beans using 1% sodium hydrogen carbonate (NaHCO3) in water within 30 min. The extraction method is rapid (compared with existing methods), simple, reliable and practical to perform without complex experimental set-ups. The cocoa samples were freshly extracted and cleaned up using an immunoaffinity column (IAC) for HPLC analysis with a fluorescence detector. Under the optimised conditions, the limit of detection (LOD) and limit of quantification (LOQ) for OTA were 0.62 and 1.25 ng/ml, respectively, in standard solutions. The method could successfully quantify OTA in naturally contaminated samples. Moreover, good recoveries of OTA were obtained, up to 86.5% in artificially spiked cocoa samples, with a maximum relative standard deviation (RSD) of 2.7%. The proposed extraction method could determine OTA at the level of 1.5 µg/kg, exceeding the sensitivity required by the European Union standard for cocoa (2 µg/kg). In addition, an efficiency comparison of the IAC and a molecularly imprinted polymer (MIP) column was also performed and evaluated.
Strengthening Transparency in Regulatory Science
Where available and appropriate, EPA will use peer-reviewed information, standardized test methods, consistent data evaluation procedures, and good laboratory practices to ensure transparent, understandable, and reproducible scientific assessments.
An Evaluation of American Board Teacher Certification: Progress and Plans
ERIC Educational Resources Information Center
Glazerman, Steven; Tuttle, Christina
2006-01-01
Education policymakers have long sought to establish teaching standards against which new or continuing teachers can be measured. The problem is that existing methods for certifying teachers have been criticized for being either so onerous as to deter good candidates or so lax as to keep weak teachers in the profession. To provide another…
ERIC Educational Resources Information Center
Taylor, Arthur; Dalal, Heather A.
2014-01-01
Introduction: This paper aims to determine how appropriate information literacy instruction is for preparing students for these unmediated searches using commercial search engines and the Web. Method. A survey was designed using the 2000 Association of College and Research Libraries literacy competency standards for higher education. Survey…
The US Environmental Protection Agency (EPA) published a National Ambient Air Quality Standard (NAAQS) and the accompanying Federal Reference Method (FRM) for PM10 in 1987. The EPA revised the particle standards and FRM in 1997 to include PM2.5. In 2005, EPA...
Quantitative Technique for Comparing Simulant Materials through Figures of Merit
NASA Technical Reports Server (NTRS)
Rickman, Doug; Hoelzer, Hans; Fourroux, Kathy; Owens, Charles; McLemore, Carole; Fikes, John
2007-01-01
The 1989 workshop report Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage both identified and reinforced the need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon and set out for Mars, a set of early requirements has been developed for simulant materials, and the initial methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce simulants needed for NASA's Exploration mission. As an extension of the requirements document, a method to evaluate new and current simulants has been rigorously defined through the mathematics of Figures of Merit (FoM). Requirements and techniques have been developed that allow the simulant provider to compare their product to a standard reference material through Figures of Merit. Standard reference material may be physical material such as the Apollo core samples or material properties predicted for any landing site. The simulant provider is not restricted to providing a single "high fidelity" simulant, which may be costly to produce. The provider can now develop "lower fidelity" simulants for engineering applications such as drilling and mobility.
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation are addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms needs to be designed, to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members. PMID:28120467
larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.
Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit
2018-01-01
The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.
Phillips, Melissa M.; Bedner, Mary; Gradl, Manuela; Burdette, Carolyn Q.; Nelson, Michael A.; Yen, James H.; Sander, Lane C.; Rimmer, Catherine A.
2017-01-01
Two independent analytical approaches, based on liquid chromatography with absorbance detection and liquid chromatography with mass spectrometric detection, have been developed for determination of isoflavones in soy materials. These two methods yield comparable results for a variety of soy-based foods and dietary supplements. Four Standard Reference Materials (SRMs) have been produced by the National Institute of Standards and Technology to assist the food and dietary supplement community in method validation and have been assigned values for isoflavone content using both methods. These SRMs include SRM 3234 Soy Flour, SRM 3236 Soy Protein Isolate, SRM 3237 Soy Protein Concentrate, and SRM 3238 Soy-Containing Solid Oral Dosage Form. A fifth material, SRM 3235 Soy Milk, was evaluated using the methods and found to be inhomogeneous for isoflavones and unsuitable for value assignment. PMID:27832301
Bartlett, Sofia R; Grebely, Jason; Eltahla, Auda A; Reeves, Jacqueline D; Howe, Anita Y M; Miller, Veronica; Ceccherini-Silberstein, Francesca; Bull, Rowena A; Douglas, Mark W; Dore, Gregory J; Harrington, Patrick; Lloyd, Andrew R; Jacka, Brendan; Matthews, Gail V; Wang, Gary P; Pawlotsky, Jean-Michel; Feld, Jordan J; Schinkel, Janke; Garcia, Federico; Lennerstrand, Johan; Applegate, Tanya L
2017-07-01
The significance of the clinical impact of direct-acting antiviral (DAA) resistance-associated substitutions (RASs) in hepatitis C virus (HCV) on treatment failure is unclear. No standardized methods or guidelines for detection of DAA RASs in HCV exist. To facilitate further evaluations of the impact of DAA RASs in HCV, we conducted a systematic review of RAS sequencing protocols, compiled a comprehensive public library of sequencing primers, and provided expert guidance on the most appropriate methods to screen and identify RASs. The development of standardized RAS sequencing protocols is complicated due to high genetic variability and the need for genotype- and subtype-specific protocols for multiple regions. We have identified several limitations of the available methods and have highlighted areas requiring further research and development. The development, validation, and sharing of standardized methods for all genotypes and subtypes should be a priority. (Hepatology Communications 2017;1:379-390)
STANDARD REFERENCE MATERIALS FOR THE POLYMERS INDUSTRY.
McDonough, Walter G; Orski, Sara V; Guttman, Charles M; Migler, Kalman D; Beers, Kathryn L
2016-01-01
The National Institute of Standards and Technology (NIST) provides science, industry, and government with a central source of well-characterized materials certified for chemical composition or for some chemical or physical property. These materials are designated Standard Reference Materials® (SRMs) and are used to calibrate measuring instruments, to evaluate methods and systems, or to produce scientific data that can be referred readily to a common base. In this paper, we discuss the history of polymer-based SRMs, their current status, and challenges and opportunities to develop new standards to address industrial measurement challenges.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale (texture) is proposed based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for a texture attribute (hardness) with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory values and the instrumentally measured values is significant (R^2 = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
Bidding cost evaluation with fuzzy methods on building project in Jakarta
NASA Astrophysics Data System (ADS)
Susetyo, Budi; Utami, Tin Budi
2017-11-01
National construction companies today must become more competitive in the face of increasing competition, and every construction company, especially the contractor, must work better than ever. The ability to prepare the cost of the work, which reflects the efficiency and effectiveness of its execution, is necessary to produce competitive costs. A project is considered successful if it meets its quality, cost, and time targets. From the cost aspect, a project is designed in accordance with certain technical criteria and is accounted for against standard costs. To ensure cost efficiency, the bidding process must follow fair and competitive rules. The research objective is to formulate a proper way to compare several bids against the standard cost of the work. A fuzzy technique is used as the evaluation method for decision making: the evaluation is not based merely on the lowest price but looks for the most valuable and reasonable price. The comparison is conducted to determine the most cost-competitive and reasonable bid as the winner of the bidding.
Dry and wet arc track propagation resistance testing
NASA Technical Reports Server (NTRS)
Beach, Rex
1995-01-01
The wet arc-propagation resistance test for wire insulation provides an assessment of the ability of an insulation to prevent damage in an electrical arc environment. Results of an arc-propagation test may vary slightly due to the method of arc initiation; therefore a standard test method must be selected to evaluate the general arc-propagation resistance characteristics of an insulation. This test method initiates an arc by dripping salt water over pre-damaged wires, which creates a conductive path between the wires. The power supply, test current, circuit resistances, and other variables are optimized for testing 20 gauge wires. The use of other wire sizes may require modifications to the test variables. The dry arc-propagation resistance test for wire insulation also provides an assessment of the ability of an insulation to prevent damage in an electrical arc environment. In service, electrical arcs may originate from a variety of factors including insulation deterioration, faulty installation, and chafing. Here too, a standard test method must be selected to evaluate the general arc-propagation resistance characteristics of an insulation. This test method initiates an arc with a vibrating blade. The test also evaluates the ability of the insulation to prevent further arc propagation when the electrical arc is re-energized.
Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genome DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.
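A minimal sketch of how the conversion factor enters the final quantity: in this style of assay the GMO amount (%) is the event-to-endogenous copy-number ratio divided by Cf. The copy numbers in the example are hypothetical; only the Cf values (0.60 and 0.59) come from the abstract.

```python
def gmo_percent(event_copies, endogenous_copies, cf=0.60):
    """GMO amount (%) from real-time PCR copy numbers. Cf is the
    event/endogenous copy ratio of 100% GM material; 0.60 (ABI 7900)
    and 0.59 (ABI 7500) were the experimentally determined values."""
    return 100.0 * (event_copies / endogenous_copies) / cf

# A sample whose event target is 0.6% of the endogenous gene copies:
print(round(gmo_percent(60.0, 10000.0), 2))  # -> 1.0 %GMO
```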
Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses
ERIC Educational Resources Information Center
Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…
Using Photo-Interviewing as Tool for Research and Evaluation.
ERIC Educational Resources Information Center
Dempsey, John V.; Tucker, Susan A.
Arguing that photo-interviewing yields richer data than that usually obtained from verbal interviewing procedures alone, it is proposed that this method of data collection be added to "standard" methodologies in instructional development research and evaluation. The process, as described in this paper, consists of using photographs of…
EVALUATION OF ALTERNATIVE REFERENCE TOXICANTS FOR USE IN THE EARTHWORM TOXICITY TEST
The use of the 14-d earthworm toxicity test to aid in the evaluation of the ecological impact of contaminated soils is becoming increasingly widespread. However,the method is in need of further standardization. As part of this continuing process, the choice of reference toxicants...
Permanent Disability Evaluation
Chovil, A. C.
1975-01-01
This paper is a review of the theory and practice of disability evaluation with emphasis on the distinction between medical impairment and disability. The requirements for making an accurate assessment of medical impairments are discussed. The author suggests three basic standards which can be used for establishing a simplified method of assessing physical impairment. PMID:20469213
DOT National Transportation Integrated Search
1980-10-01
This report describes the methods used in the evaluation of a new continuous-flow, phase-dilution passenger oxygen mask for compliance to FAA technical Standard Order (TSO)-C64 requirements. Data presented include end expiratory partial pressures for...
Evaluating the Reliability, Validity, and Usefulness of Education Cost Studies
ERIC Educational Resources Information Center
Baker, Bruce D.
2006-01-01
Recent studies that purport to estimate the costs of constitutionally adequate education have been described as either a "gold standard" that should guide legislative school finance policy design and judicial evaluation, or as pure "alchemy." Methods for estimating the cost of constitutionally adequate education can be roughly…
Development and Evaluation of the School Cafeteria Nutrition Assessment Measures
ERIC Educational Resources Information Center
Krukowski, Rebecca A.; Philyaw Perez, Amanda G.; Bursac, Zoran; Goodell, Melanie; Raczynski, James M.; Smith West, Delia; Phillips, Martha M.
2011-01-01
Background: Foods provided in schools represent a substantial portion of US children's dietary intake; however, the school food environment has proven difficult to describe due to the lack of comprehensive, standardized, and validated measures. Methods: As part of the Arkansas Act 1220 evaluation project, we developed the School Cafeteria…
Evaluating the evaluation of cancer driver genes
Tokheim, Collin J.; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Karchin, Rachel
2016-01-01
Sequencing has identified millions of somatic mutations in human cancers, but distinguishing cancer driver genes remains a major challenge. Numerous methods have been developed to identify driver genes, but evaluation of the performance of these methods is hindered by the lack of a gold standard, that is, bona fide driver gene mutations. Here, we establish an evaluation framework that can be applied to driver gene prediction methods. We used this framework to compare the performance of eight such methods. One of these methods, described here, incorporated a machine-learning–based ratiometric approach. We show that the driver genes predicted by each of the eight methods vary widely. Moreover, the P values reported by several of the methods were inconsistent with the uniform values expected, thus calling into question the assumptions that were used to generate them. Finally, we evaluated the potential effects of unexplained variability in mutation rates on false-positive driver gene predictions. Our analysis points to the strengths and weaknesses of each of the currently available methods and offers guidance for improving them in the future. PMID:27911828
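One of the checks described, whether reported P values follow the uniform distribution expected under the null, can be illustrated with a Kolmogorov-Smirnov test against Uniform(0, 1). The simulated "calibrated" and "inflated" P value sets below are synthetic stand-ins, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# P values for presumed passenger genes should be ~Uniform(0, 1).
p_calibrated = rng.uniform(0.0, 1.0, 5000)
p_inflated = rng.beta(0.7, 1.0, 5000)  # miscalibrated: skewed toward 0

for name, p in [("calibrated", p_calibrated), ("inflated", p_inflated)]:
    ks = stats.kstest(p, "uniform")
    print(f"{name}: KS={ks.statistic:.3f} p={ks.pvalue:.2e}")
```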
TOKYO criteria 2014 for transpapillary biliary stenting.
Isayama, Hiroyuki; Hamada, Tsuyoshi; Yasuda, Ichiro; Itoi, Takao; Ryozawa, Shomei; Nakai, Yousuke; Kogure, Hirofumi; Koike, Kazuhiko
2015-01-01
It is difficult to carry out meta-analyses or to compare the results of different studies of biliary stents because there is no uniform evaluation method. Therefore, a standardized reporting system is required. We propose a new standardized system for reporting on biliary stents, the 'TOKYO criteria 2014', based on a consensus among Japanese pancreatobiliary endoscopists. Instead of stent occlusion, we use recurrent biliary obstruction, which includes occlusion and migration. The time to recurrent biliary obstruction was estimated using Kaplan-Meier analysis with the log-rank test. We can evaluate both plastic and self-expandable metallic stents (uncovered and covered). We also propose specification of the cause of recurrent biliary obstruction, identification of complications other than recurrent biliary obstruction, indication of severity, measures of technical and clinical success, and a standard for clinical care. Most importantly, the TOKYO criteria 2014 allow comparison of biliary stent quality across studies. Because blocked stents can be drained not only using transpapillary techniques but also by an endoscopic ultrasonography-guided transmural procedure, we should devise an evaluation method that includes transmural stenting in the near future. © 2014 The Authors. Digestive Endoscopy © 2014 Japan Gastroenterological Endoscopy Society.
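Time to recurrent biliary obstruction under these criteria is estimated with Kaplan-Meier analysis; a bare-bones estimator is sketched below, with hypothetical follow-up data. Ties are handled with the usual convention that events precede censoring at the same time.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events=1 marks recurrent biliary
    obstruction, 0 marks censoring (e.g., death or end of follow-up)."""
    times, events = np.asarray(times), np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in np.unique(times):
        mask = times == t
        d = events[mask].sum()            # obstructions at time t
        if d:
            surv *= 1.0 - d / n_at_risk
        curve.append((int(t), surv))
        n_at_risk -= mask.sum()           # drop events and censored
    return curve

# Days to recurrent biliary obstruction (0 = censored at that day):
days = [30, 45, 45, 60, 90, 120, 150, 150, 200, 240]
evts = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(days, evts):
    print(t, round(s, 3))
```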
Dawes, Sharron E.; Palmer, Barton W.; Jeste, Dilip V.
2008-01-01
Purpose of review Although the basic standards of adjudicative competence were specified by the U.S. Supreme Court in 1960, there remain a number of complex conceptual and practical issues in interpreting and applying these standards. In this report we provide a brief overview regarding the general concept of adjudicative competence and its assessment, as well as some highlights of recent empirical studies on this topic. Findings Most adjudicative competence assessments are conducted by psychiatrists or psychologists. There are no universal certification requirements, but some states are moving toward required certification of forensic expertise for those conducting such assessments. Recent data indicate inconsistencies in application of the existing standards even among forensic experts, but the recent publication of consensus guidelines may foster improvements in this arena. There are also ongoing efforts to develop and validate structured instruments to aid competency evaluations. Telemedicine-based competency interviews may facilitate evaluation by those with specific expertise for evaluation of complex cases. There is also interest in empirical development of educational methods to enhance adjudicative competence. Summary Adjudicative competence may be difficult to measure accurately, but the assessments and tools available are advancing. More research is needed on methods of enhancing decisional capacity among those with impaired competence. PMID:18650693
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner, and depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
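To give the flavor of a prognostics-specific metric, the sketch below applies an alpha-bound check of predicted versus true RUL at successive checkpoints. This is a simplified reading of the alpha-lambda style of metric, and the RUL series is invented for illustration.

```python
def alpha_bound_pass(rul_true, rul_pred, alpha=0.2):
    """Alpha-lambda style check: does the predicted RUL fall within
    +/- alpha * true RUL at each evaluation time? (A simplified reading
    of one of the prognostics metrics proposed in this line of work.)"""
    return [abs(p - t) <= alpha * t for t, p in zip(rul_true, rul_pred)]

# True RUL decreases linearly; an algorithm's predictions at checkpoints:
rul_true = [100, 75, 50, 25]
rul_pred = [130, 85, 54, 24]
print(alpha_bound_pass(rul_true, rul_pred))  # [False, True, True, True]
```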
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
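A minimal sketch of the cluster-bootstrap variant, assuming a pandas data frame with columns time, event, a single covariate x, and a cluster identifier, and using the lifelines implementation of Cox regression. The column names and replicate count are assumptions, and the two-step variant would additionally resample individuals within each selected cluster.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col="cluster", n_boot=500, seed=1):
    """Cluster-bootstrap SE for a Cox coefficient: resample whole
    clusters with replacement, refit, take the SD across replicates."""
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    coefs = []
    for _ in range(n_boot):
        picked = rng.choice(clusters, size=len(clusters), replace=True)
        boot = pd.concat([df[df[cluster_col] == c] for c in picked],
                         ignore_index=True)
        cph = CoxPHFitter().fit(boot.drop(columns=cluster_col),
                                duration_col="time", event_col="event")
        coefs.append(cph.params_["x"])   # coefficient of covariate "x"
    return float(np.std(coefs, ddof=1))
```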
NASA Technical Reports Server (NTRS)
Billica, Roger; Krupa, Debra T.; Stonestreet, Robert; Kizzee, Victor D.
1991-01-01
The purpose is to investigate and demonstrate equipment and techniques proposed for minor surgery on Space Station Freedom (SSF). The objectives are: (1) to test and evaluate methods of surgical instrument packaging and deployment; (2) to test and evaluate methods of surgical site preparation and draping; (3) to evaluate techniques of sterile procedure and maintaining a sterile field; (4) to evaluate methods of trash management during medical/surgical procedures; and (5) to gain experience in techniques for performing surgery in microgravity. A KC-135 parabolic flight test was performed on March 30, 1990 with the goal of investigating and demonstrating surgical equipment and techniques under consideration for use on SSF. The flight followed the standard 40-parabola profile with 20 to 25 seconds of near-zero gravity in each parabola.
Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.
2000-01-01
Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226
Preiksaitis, J.; Tong, Y.; Pang, X.; Sun, Y.; Tang, L.; Cook, L.; Pounds, S.; Fryer, J.; Caliendo, A. M.
2015-01-01
Quantitative detection of cytomegalovirus (CMV) DNA has become a standard part of care for many groups of immunocompromised patients; recent development of the first WHO international standard for human CMV DNA has raised hopes of reducing interlaboratory variability of results. Commutability of reference material has been shown to be necessary if such material is to reduce variability among laboratories. Here we evaluated the commutability of the WHO standard using 10 different real-time quantitative CMV PCR assays run by eight different laboratories. Test panels, including aliquots of 50 patient samples (40 positive samples and 10 negative samples) and lyophilized CMV standard, were run, with each testing center using its own quantitative calibrators, reagents, and nucleic acid extraction methods. Commutability was assessed both on a pairwise basis and over the entire group of assays, using linear regression and correspondence analyses. Commutability of the WHO material differed among the tests that were evaluated, and these differences appeared to vary depending on the method of statistical analysis used and the cohort of assays included in the analysis. Depending on the methodology used, the WHO material showed poor or absent commutability with up to 50% of assays. Determination of commutability may require a multifaceted approach; the lack of commutability seen when using the WHO standard with several of the assays here suggests that further work is needed to bring us toward true consensus. PMID:26269622
Procedural Guide for Designation Surveys of Ocean Dredged Material Disposal Sites. Revision
1990-04-01
One of the most frequently used clustering strategies is UPGMA (unweighted pair-group method using arithmetic averages; Sneath and Sokal 1973). Romesburg (1984) evaluated many possible methods and concluded that UPGMA is appropriate for most types of cluster analysis.
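UPGMA is what SciPy calls average linkage; a minimal sketch on hypothetical station-by-taxa data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
# Hypothetical standardized abundance data: 12 stations x 6 taxa.
stations = rng.normal(size=(12, 6))

# UPGMA = average-linkage agglomerative clustering on a distance matrix.
dist = pdist(stations, metric="euclidean")
tree = linkage(dist, method="average")
print(fcluster(tree, t=3, criterion="maxclust"))  # 3 station groups
```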
Some possible reference materials for fire toxicity tests
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Solis, A. N.
1977-01-01
Suitable reference materials need to be selected in order to standardize any test method. The evaluation of cotton, polyethylene, polyether sulfone, polycarbonate, polystyrene, and polyurethane flexible and rigid foams as possible reference materials for the University of San Francisco/NASA toxicity screening test method is discussed.
Sediment toxicity tests are used for contaminated sediments, chemical registration, and water quality criteria evaluations and can be a core component of ecological risk assessments at contaminated sediments sites. Standard methods for conducting sediment toxicity tests have been...
Ahmed, Sara; Besser, Thomas E; Call, Douglas R; Weissman, Scott J; Jones, Lisa P; Davis, Margaret A
2016-05-01
Multi-locus sequence typing (MLST) is a useful system for phylogenetic and epidemiological studies of multidrug-resistant Escherichia coli. Most studies utilize a seven-locus MLST, but an alternate two-locus typing method (fumC and fimH; CH typing) has been proposed that may offer a similar degree of discrimination at lower cost. Herein, we compare CH typing to the standard seven-locus method for typing commensal E. coli isolates from dairy cattle. In addition, we evaluated alternative combinations of eight loci to identify combinations that maximize discrimination and congruence with standard seven-locus MLST among commensal E. coli while minimizing the cost. We also compared both methods when used for typing uropathogenic E. coli (UPEC). CH typing was less discriminatory for commensal E. coli than the standard seven-locus method (Simpson's Index of Diversity = 0.933 [0.902-0.964] and 0.97 [0.96-0.979], respectively). Combining fimH with housekeeping gene loci improved discriminatory power for commensal E. coli from cattle but resulted in poor congruence with MLST. We found that a four-locus typing method including the housekeeping genes adk, purA, gyrB and recA could be used to minimize cost without sacrificing discriminatory power or congruence with the Achtman seven-locus MLST when typing commensal E. coli. Copyright © 2016 Elsevier B.V. All rights reserved.
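The discrimination statistic quoted here, Simpson's Index of Diversity, is simple to recompute from type counts; the isolate list below is hypothetical.

```python
from collections import Counter

def simpsons_diversity(type_assignments):
    """Simpson's Index of Diversity (1 - D) for a list of sequence types:
    the probability that two isolates drawn without replacement belong
    to different types."""
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical isolates typed into five sequence types:
print(round(simpsons_diversity(list("AAABBCCDDE")), 3))  # 0.867
```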
Requirements and Techniques for Developing and Measuring Simulant Materials
NASA Technical Reports Server (NTRS)
Rickman, Doug; Owens, Charles; Howard, Rick
2006-01-01
The 1989 workshop report Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage both identified and reinforced the need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and provides a clear comparison of that characteristic for both the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
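The report's FoM algorithms are not published in this abstract, so the sketch below is only one plausible instance of the concept: a score comparing a simulant's sieve-bin size distribution against a reference material's, with a stated minimum acceptable performance. The bin fractions and the 0.8 minimum are assumptions.

```python
import numpy as np

def figure_of_merit(simulant_hist, reference_hist, minimum=0.8):
    """One illustrative FoM: common area between the normalized
    size-frequency histograms of a simulant and a reference material
    (1.0 = identical). Returns the score and a pass/fail against the
    minimum acceptable performance."""
    s = np.asarray(simulant_hist, float); s = s / s.sum()
    r = np.asarray(reference_hist, float); r = r / r.sum()
    fom = float(np.minimum(s, r).sum())
    return fom, fom >= minimum

# Mass fractions in five sieve bins (coarse -> fine):
sim = [0.10, 0.25, 0.30, 0.20, 0.15]
ref = [0.12, 0.22, 0.33, 0.21, 0.12]
print(figure_of_merit(sim, ref))  # ~0.94, passes a 0.8 minimum
```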
Yarita, Takashi; Aoyagi, Yoshie; Otake, Takamitsu
2015-05-29
The impact of the matrix effect in GC-MS quantification of pesticides in food using the corresponding isotope-labeled internal standards was evaluated. A spike-and-recovery study of nine target pesticides was first conducted using paste samples of corn, green soybean, carrot, and pumpkin. The analytical values observed using isotope-labeled internal standards were more accurate for most target pesticides than those obtained using the external calibration method, but were still biased from the spiked concentrations when a matrix-free calibration solution was used for calibration. Calibration curves for each target pesticide were also prepared using matrix-free calibration solutions and matrix-matched calibration solutions with blank soybean extract. The intensity ratio of the peaks of most target pesticides to those of the corresponding isotope-labeled internal standards was influenced by the presence of the matrix in the calibration solution; therefore, the observed slope varied. The ratio was also influenced by the type of injection method (splitless or on-column). These results indicate that matrix-matching of the calibration solution is required for very accurate quantification, even if isotope-labeled internal standards are used for calibration. Copyright © 2015 Elsevier B.V. All rights reserved.
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R^2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
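A two-step preprocessing pass of the kind evaluated here can be sketched with SciPy. Box-Cox stands in for the Johnson transform that the study found best (SciPy has no built-in Johnson fit), and the synthetic gamma-distributed monthly series is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
months = np.tile(np.arange(12), 20)  # 20 years of monthly data
# Synthetic, strictly positive rainfall with a seasonal cycle:
rain = rng.gamma(shape=2.0,
                 scale=30 + 20 * np.sin(2 * np.pi * months / 12),
                 size=240) + 1e-6

# Step 1: normality transform (Box-Cox; lambda fit by maximum likelihood).
transformed, lam = stats.boxcox(rain)

# Step 2: seasonal standardization -- remove each calendar month's mean
# and scale by that month's standard deviation.
z = np.empty_like(transformed)
for m in range(12):
    idx = months == m
    z[idx] = (transformed[idx] - transformed[idx].mean()) / transformed[idx].std()

print(round(lam, 3), round(z.mean(), 3), round(z.std(), 3))
```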
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman; Jeffrey C. Joe
2005-09-01
An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate metering deviations caused by primary-side fluctuations from those caused by EVT anomalies. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method demonstrates accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
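The separation idea can be illustrated with a toy simulation: a common-mode primary fluctuation drives all three phase measurements, PCA trained on healthy data captures that common mode, and a ratio drift in one EVT then shows up in that phase's reconstruction residual. All signal magnitudes below are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 2000
common = 1.0 + 0.01 * rng.standard_normal(n)   # primary-side fluctuation
phases = np.column_stack([common] * 3) + 0.0005 * rng.standard_normal((n, 3))
phases[1000:, 2] += 0.004                      # ratio drift in one EVT

pca = PCA(n_components=1).fit(phases[:1000])   # learn the common mode
resid = phases - pca.inverse_transform(pca.transform(phases))

# Phase C's residual jumps once its transformer starts drifting:
print(np.abs(resid[:1000, 2]).mean(), np.abs(resid[1000:, 2]).mean())
```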
Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.
Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro
2010-01-01
The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. Quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated whole steps consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for the GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed SR = 0.1971C^0.8685 and Sr = 0.1478C^0.8424, where C is the GMO amount (%). We also proposed a method performance index for GMO quantitative methods that is analogous to the Horwitz Ratio.
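The two fitted curves are directly usable as acceptance criteria; the snippet below evaluates them at a few GMO levels (a laboratory's observed SR could then be divided by the predicted SR, in the spirit of the Horwitz Ratio).

```python
def predicted_precision(c):
    """Horwitz-like precision curves fitted to the 53 GMO collaborative
    studies: reproducibility (SR) and repeatability (Sr) standard
    deviations as functions of the GMO amount C (%)."""
    return 0.1971 * c ** 0.8685, 0.1478 * c ** 0.8424

for c in (0.5, 1.0, 3.0, 5.0):
    sr_big, sr_small = predicted_precision(c)
    print(f"C={c}%  SR={sr_big:.3f}  Sr={sr_small:.3f}")
```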
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods for determining threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the "Null" method (lower threshold set to 0°C) and the "Fixed Value" method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
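Method (7), which performed best, can be sketched as a grid search over candidate base temperatures: for each candidate, derive the mean GDD requirement from the observed dates, predict each year's date from that requirement, and keep the base with the smallest RMSE. The data layout (per-year daily mean temperatures paired with an observed day-of-stage index) is an assumption.

```python
import numpy as np

def gdd_to_stage(tmean, base):
    """Cumulative growing degree-days for a lower threshold `base`."""
    return np.cumsum(np.clip(tmean - base, 0.0, None))

def best_base_temperature(years, candidates=np.arange(-6.0, 7.0, 0.1)):
    """Method (7): pick the base temperature minimizing the RMSE between
    observed and predicted days. `years` is a list of (daily mean
    temperature array, observed day-of-stage index) pairs."""
    best = None
    for base in candidates:
        # Mean GDD requirement across years for this candidate threshold.
        req = np.mean([gdd_to_stage(t, base)[day] for t, day in years])
        # Predicted day = first day the cumulative GDD reaches req.
        pred = [int(np.searchsorted(gdd_to_stage(t, base), req))
                for t, _ in years]
        rmse = np.sqrt(np.mean([(p - d) ** 2
                                for (_, d), p in zip(years, pred)]))
        if best is None or rmse < best[1]:
            best = (round(float(base), 1), float(rmse))
    return best  # (base temperature, RMSE in days)
```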
Russ, Alissa L; Jahn, Michelle A; Patel, Himalaya; Porter, Brian W; Nguyen, Khoa A; Zillich, Alan J; Linsky, Amy; Simon, Steven R
2018-06-01
An electronic medication reconciliation tool was previously developed by another research team to aid provider-patient communication for medication reconciliation. To evaluate the usability of this tool, we integrated artificial safety probes into standard usability methods. The objective of this article is to describe this method of using safety probes, which enabled us to evaluate how well the tool supports users' detection of medication discrepancies. We completed a mixed-method usability evaluation in a simulated setting with 30 participants: 20 healthcare professionals (HCPs) and 10 patients. We used factual scenarios but embedded three artificial safety probes: (1) a missing medication (i.e., omission); (2) an extraneous medication (i.e., commission); and (3) an inaccurate dose (i.e., dose discrepancy). We measured users' detection of each probe to estimate the probability that an HCP or patient would detect these discrepancies. Additionally, we recorded participants' detection of naturally occurring discrepancies. Each safety probe was detected by ≤50% of HCPs. Patients' detection rates were generally higher. Estimates indicate that an HCP and a patient, together, would detect 44.8% of these medication discrepancies. Additionally, HCPs and patients detected 25 and 45 naturally occurring discrepancies, respectively. Overall, detection of medication discrepancies was low. Findings indicate that more advanced interface designs are warranted. Future research is needed on how technologies can be designed to better aid HCPs' and patients' detection of medication discrepancies. This is one of the first studies to evaluate the usability of a collaborative medication reconciliation tool and assess HCPs' and patients' detection of medication discrepancies. Results demonstrate that embedded safety probes can enhance standard usability methods by measuring additional, clinically focused usability outcomes. The novel safety probes we used may serve as an initial, standard set for future medication reconciliation research. More prevalent use of safety probes could strengthen usability research for a variety of health information technologies. Published by Elsevier Inc.
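The pooled estimate can be reproduced with a one-line independence calculation; the individual probabilities below are illustrative placeholders, not the study's measured rates.

```python
def combined_detection(p_hcp, p_patient):
    """Probability that at least one of an independent HCP-patient pair detects."""
    return 1.0 - (1.0 - p_hcp) * (1.0 - p_patient)

print(combined_detection(0.30, 0.21))   # ~0.447, comparable to the 44.8% reported
```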
Using a fuzzy comprehensive evaluation method to determine product usability: A test case
Zhou, Ronggang; Chan, Alan H. S.
2016-01-01
BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. The wider confidence intervals of the two conventional percentage-based methods (the equal-weight percentage average and the weighted percentage average), compared with the fuzzy approach, verified the strength of the fuzzy method. PMID:28035942
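The final synthesis step (criterion weights applied to a fuzzy membership matrix, followed by defuzzification) can be sketched in a few lines; the weights, grades, and membership values below are hypothetical, not numbers from the study.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])              # e.g., AHP weights for three criteria
# Rows: criteria; columns: membership in grades poor / fair / good / excellent
R = np.array([[0.0, 0.2, 0.5, 0.3],
              [0.1, 0.3, 0.4, 0.2],
              [0.0, 0.1, 0.6, 0.3]])

B = weights @ R                                   # weighted-average fuzzy composition
B /= B.sum()                                      # grade distribution of the product
score = B @ np.array([25.0, 50.0, 75.0, 100.0])   # defuzzify with grade midpoints
print(B, round(float(score), 1))
```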
Comparison of two methods of standard setting: the performance of the three-level Angoff method.
Jalili, Mohammad; Hejri, Sara M; Norcini, John J
2011-12-01
Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
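The aggregation behind both procedures is simple: each judge estimates, per station, the chance that a borderline examinee succeeds, and the cut-score is the mean over judges and stations. A minimal sketch with hypothetical ratings follows; in the three-level variant the ratings would be restricted to {0, 0.5, 1}.

```python
import numpy as np

rng = np.random.default_rng(2)
ratings = rng.uniform(0.3, 0.8, size=(14, 14))  # 14 judges x 14 stations (hypothetical)
station_standards = ratings.mean(axis=0)        # per-station borderline standard
cut_score = 100 * station_standards.mean()      # total-test cut-score (%)
print(round(cut_score, 2))
```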
Recommendations for evaluation of computational methods
NASA Astrophysics Data System (ADS)
Jain, Ajay N.; Nicholls, Anthony
2008-03-01
The field of computational chemistry, particularly as applied to drug design, has become increasingly important in terms of the practical application of predictive modeling to pharmaceutical research and development. Tools for exploiting protein structures or sets of ligands known to bind particular targets can be used for binding-mode prediction, virtual screening, and prediction of activity. A serious weakness within the field is a lack of standards with respect to quantitative evaluation of methods, data set preparation, and data set sharing. Our goal should be to report new methods or comparative evaluations of methods in a manner that supports decision making for practical applications. Here we propose a modest beginning, with recommendations for requirements on statistical reporting, requirements for data sharing, and best practices for benchmark preparation and usage.
Ito, Shinya; Tsukada, Katsuo
2002-01-11
An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for the quantitation of four diarrhetic shellfish poisoning toxins (okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin) in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to coeluting substances from the column. To compensate for these matrix signal suppressions, the standard addition method was applied: first the sample is analyzed, and then the sample spiked with calibration standards is analyzed. Although this method requires two LC-MS runs per analysis, it effectively corrected the quantitative errors.
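A brief sketch of the standard addition calculation: regress signal on the amount added and read the original content as intercept/slope (the magnitude of the x-intercept). The abstract's two-run procedure is the two-point special case; the numbers below are illustrative.

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0])         # analyte added to the extract (ng)
signal = np.array([120.0, 243.0, 371.0])   # matrix-suppressed LC-MS peak areas

slope, intercept = np.polyfit(added, signal, 1)
print(round(intercept / slope, 2), "ng in the unspiked sample")
```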
24 CFR 35.1300 - Purpose and applicability.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint Hazard Evaluation and Hazard Reduction Activities § 35.1300 Purpose and applicability. The...
A Primer on Health Economic Evaluations in Thoracic Oncology.
Whittington, Melanie D; Atherly, Adam J; Bocsi, Gregary T; Camidge, D Ross
2016-08-01
There is growing interest in economic evaluation in oncology to illustrate the value of multiple new diagnostic and therapeutic interventions. As these analyses have started to move from specialist publications into mainstream medical literature, the wider medical audience consuming this information may need additional education to evaluate it appropriately. Here we review standard practices in economic evaluation, illustrating the different methods with thoracic oncology examples where possible. When interpreting and conducting health economic studies, it is important to appraise the method, perspective, time horizon, modeling technique, discount rate, and sensitivity analysis. Guidance on how to do this is provided. To provide a method to evaluate this literature, a literature search was conducted in spring 2015 to identify economic evaluations published in the Journal of Thoracic Oncology. Articles were reviewed for their study design, and areas for improvement were noted. Suggested improvements include using more rigorous sensitivity analyses, adopting a standard approach to reporting results, and conducting complete economic evaluations. Researchers should design high-quality studies to ensure the validity of the results, and consumers of this research should interpret these studies critically on the basis of a full understanding of the methodologies used before considering any of the conclusions. As advancements occur on both the research and consumer sides, this literature can be further developed to promote the best use of resources for this field. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
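Two staples a reader will meet when appraising such studies are the incremental cost-effectiveness ratio (ICER) and discounting; a minimal sketch with hypothetical numbers:

```python
costs_new, qalys_new = 180_000.0, 2.1   # hypothetical new therapy
costs_std, qalys_std = 95_000.0, 1.6    # hypothetical standard of care

icer = (costs_new - costs_std) / (qalys_new - qalys_std)
print(f"ICER: ${icer:,.0f} per QALY gained")    # $170,000 per QALY here

def present_value(stream, rate=0.03):
    """Discount a yearly cost (or effect) stream to its present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

print(round(present_value([50_000.0] * 3), 2))
```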
Marinelli, L; Cottarelli, A; Solimini, A G; Del Cimmuto, A; De Giusti, M
2017-01-01
In this study we estimated the presence of Legionella species, viable but non-culturable (VBNC), in hospital water networks. We also evaluated the time to, and the load at, the appearance of Legionella in samples found negative by the standard culture method. A total of 42 samples was obtained from the tap water of five hospital buildings. The samples were tested for Legionella by the standard culture method and were monitored for up to 12 months for the appearance of VBNC Legionella. All 42 samples were negative at the time of collection. Seven of the 42 samples (16.7%) became positive for Legionella at different times during monitoring. The time to the appearance of VBNC Legionella was extremely variable, from 15 days to 9 months from sampling. The most frequently observed species were Legionella spp. and L. anisa, and only one sample contained L. pneumophila serogroup 1. Our study confirms the presence of VBNC Legionella in samples found negative by the standard culture method and highlights the variable time to its appearance, which can occur several months after sampling. The results are important for risk assessment and risk management of engineered water systems.
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also when calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
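The core estimator is a weighted linear least-squares solve; a minimal sketch is below, omitting the design-matrix uncertainty step and the ISO 11929 characteristic limits, with all numbers hypothetical.

```python
import numpy as np

# Rows: gamma lines; columns: radionuclides. Entries are sensitivities
# (efficiency x emission intensity x live time), i.e., counts per Bq.
X = np.array([[120.0, 15.0],
              [ 40.0,  0.0],
              [  5.0, 90.0]])
y = np.array([1350.0, 410.0, 980.0])   # net peak areas from peak fitting
W = np.diag(1.0 / y)                   # ~Poisson weights: 1/u(y)^2 with u = sqrt(y)

cov = np.linalg.inv(X.T @ W @ X)       # covariance of the activity estimates
a = cov @ X.T @ W @ y                  # activities (Bq), interference-resolved
print(a, np.sqrt(np.diag(cov)))        # estimates and standard uncertainties
```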
Uwamahoro, Marie Christine; Massicotte, Richard; Hurtubise, Yves; Gagné-Bourque, François; Mafu, Akier Assanta; Yahia, L’Hocine
2018-01-01
Spore-forming pathogenic bacteria, such as Clostridium difficile, are associated with nosocomial infection, leading to the increased use of sporicidal disinfectants, which impacts socioeconomic costs. However, C. difficile can be prevented using microorganisms such as Bacillus amyloliquefaciens, a prophylactic agent that recent tests have shown to be effective against it; alternatively, it can be controlled by sporicidal disinfectants. These disinfectants against spores should be evaluated according to a known and recommended standard. Unfortunately, some newly manufactured disinfectants like Bioxy products have not yet been tested. ASTM E2197-11 is a standard test that uses stainless steel disks (1 cm in diameter) as carriers, and the performance of the test formulation is calculated by comparing the number of viable test organisms to that on the control carriers. Surface tests are preferable for evaluating disinfectants with sporicidal effects on hard surfaces. This study applies improved methods, based on the ASTM E2197-11 standard, for evaluating and comparing the sporicidal efficacies of several disinfectants against spores of C. difficile and B. amyloliquefaciens, which are used as the test organisms. With the improved method, all spores were recovered through vortexing and membrane filtration. The results show that chlorine-based products are effective in 5 min and Bioxy products at 5% w/v are effective in 10 min. Although Bioxy products may take longer to prove their effectiveness, their non-harmful effects to hospital surfaces and people have been well established in the literature. PMID:29459891
New Techniques to Evaluate the Incendiary Behavior of Insulators
NASA Technical Reports Server (NTRS)
Buhler, Charles; Calle, Carlos; Clements, Sid; Trigwell, Steve; Ritz, Mindy
2008-01-01
New techniques for evaluating the incendiary behavior of insulators are presented. The onset of incendive brush discharges in air is evaluated using standard spark probe techniques for the case simulating the approach of an electrically grounded sphere to a charged insulator in the presence of a flammable atmosphere. However, this standard technique is unsuitable for the case of brush discharges that may occur during the charging-separation process for two insulator materials. We present experimental techniques to evaluate this hazard in the presence of a flammable atmosphere, ideally suited to measuring the incendiary nature of micro-discharges upon separation, a measurement never before performed. Other measurement techniques unique to this study include: surface potential measurements of insulators before, during and after contact and separation, as well as methods to verify fieldmeter calibrations using a charged insulator surface as opposed to standard high-voltage plates. Key words: Kapton polyimide film, incendiary discharges, brush discharges, contact and frictional electrification, ignition hazards, insulators, contact angle, surface potential measurements.
MO-F-CAMPUS-J-04: One-Year Analysis of Elekta CBCT Image Quality Using NPS and MTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakahara, S; Tachibana, M; Watanabe, Y
2015-06-15
Purpose: To compare quantitative image quality (IQ) evaluation methods using Noise Power Spectrum (NPS) and Modulation Transfer Function (MTF) with standard IQ analyses for minimizing the observer subjectivity of the standard methods and maximizing the information content. Methods: For our routine IQ tests of Elekta XVI Cone-Beam CT, image noise was quantified by the standard deviation of CT number (CT#) (Sigma) over a small area in an IQ test phantom (CatPhan), and the high spatial resolution (HSR) was evaluated by the number of line-pairs (LP#) visually recognizable on the image. We also measured the image uniformity, the low contrast resolution ratio, and the distances of two points for geometrical accuracy. For this study, we did additional evaluation of the XVI data for 12 monthly IQ tests by using NPS for noise, MTF for HSR, and the CT#-to-density relationship. NPS was obtained by applying Fourier analysis in a small area on the uniformity test section of CatPhan. The MTF analysis was performed by applying the Droege-Morin (D-M) method to the line pairs on the phantom. The CT#-to-density was obtained for inserts in the low-contrast test section of the phantom. Results: All the quantities showed a noticeable change over the one-year period. In particular, the noise level changed significantly after a repair of the imager. NPS was more sensitive to the IQ change than Sigma. MTF could provide a more quantitative and objective evaluation of the HSR. The CT# was very different from the expected CT#; however, the CT#-to-density curves were constant within 5% except for two months. Conclusion: Since the D-M method is easy to implement, we recommend using MTF instead of the LP# even for routine periodic QA. The month-to-month variation of IQ was not negligible; hence a routine IQ test must be performed, particularly after any modification of hardware including detector calibration.
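A minimal sketch of the NPS estimate itself (2-D FFT of mean-subtracted uniform ROIs, scaled by pixel area over ROI size): the ROIs here are synthetic white noise and the parameters are illustrative, not the clinical protocol.

```python
import numpy as np

def nps2d(rois, pixel_mm=0.5):
    """Average 2-D noise power spectrum of mean-subtracted square ROIs."""
    nx, ny = rois[0].shape
    acc = np.zeros((nx, ny))
    for roi in rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    return (pixel_mm ** 2) / (nx * ny) * acc / len(rois)

rng = np.random.default_rng(3)
rois = [rng.normal(0.0, 20.0, (64, 64)) for _ in range(16)]   # synthetic noise ROIs
nps = nps2d(rois)
# Integrating NPS over frequency recovers the pixel variance (~400 here)
print(nps.sum() * (1.0 / (64 * 0.5)) ** 2)
```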
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared with FDD and oSVD.
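A minimal sketch of the frequency-domain deconvolution idea with a simple Wiener-like spectral filter; this is our illustration, not the authors' analytical Fourier or Showalter filters.

```python
import numpy as np

def fdd(tissue, aif, dt=1.0, reg=0.15):
    """Deconvolve a tissue curve by the arterial input function in Fourier space."""
    n = len(tissue)
    A = np.fft.fft(aif, 2 * n)                   # zero-pad to limit wrap-around
    C = np.fft.fft(tissue, 2 * n)
    H = np.conj(A) / (np.abs(A) ** 2 + (reg * np.abs(A).max()) ** 2)
    return np.real(np.fft.ifft(C * H))[:n] / dt  # flow-scaled residue function

t = np.arange(60.0)
aif = np.exp(-((t - 10.0) ** 2) / 8.0)           # synthetic arterial input
residue = np.exp(-t / 12.0)                      # true residue function
tissue = np.convolve(aif, residue)[:60]          # simulated tissue curve
print(fdd(tissue, aif).max())                    # ~1, up to regularization bias
```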
Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E
2014-06-01
Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expensive, laborious, and time consuming. Over the years alternative methods, rapid and non-destructive, have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimating chlorophyll content in quinoa and amaranth leaves based on RGB component analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated to leaf chlorophyll determined by the standard laboratory procedure. Single and multiple regression models using RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. Sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and lower cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true value of foliar chlorophyll content, and had a lower amount of noise over the whole range of chlorophyll studied, compared with SPAD and other leaf image processing-based models when applied to quinoa and amaranth.
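A minimal sketch of the calibration step, a multiple linear regression from mean RGB components to laboratory chlorophyll, with R² and RMSEP as figures of merit; the data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
rgb = rng.uniform(40.0, 200.0, size=(30, 3))             # mean R, G, B per leaf
chl = (35.0 - 0.10 * rgb[:, 0] + 0.02 * rgb[:, 1]
       - 0.05 * rgb[:, 2] + rng.normal(0.0, 0.5, 30))    # "laboratory" chlorophyll

X = np.column_stack([np.ones(len(rgb)), rgb])            # intercept + RGB terms
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)

pred = X @ coef
rmsep = np.sqrt(np.mean((pred - chl) ** 2))
r2 = 1.0 - np.sum((chl - pred) ** 2) / np.sum((chl - chl.mean()) ** 2)
print(coef, round(r2, 3), round(rmsep, 3))
```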
NASA Astrophysics Data System (ADS)
Fauzi, Ilham; Muharram Hasby, Fariz; Irianto, Dradjad
2018-03-01
Although governments can issue mandatory standards that industry must obey, the respective industries themselves often have difficulty fulfilling the requirements described in those standards. This is especially true for many small and medium-sized enterprises that lack the capital to invest in standard-compliant equipment and machinery. This study aims to develop a set of measurement tools for evaluating the readiness of production technology with respect to the requirements of a product standard, based on the quality function deployment (QFD) method. By combining the QFD methodology, the UNESCAP Technometric model [9] and the Analytic Hierarchy Process (AHP), the model is used to measure a firm's capability to fulfill a government standard in the toy-making industry. Expert opinions from both the governmental officers responsible for setting and implementing standards and the industry practitioners responsible for managing manufacturing processes were collected and processed to identify the technological capabilities the firm should improve to fulfill the existing standard. The study showed that the proposed model can successfully measure the gap between the requirements of the standard and the readiness of the technoware technological component in a particular firm.
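One ingredient, the AHP weighting, can be sketched compactly: criterion weights are the principal eigenvector of a pairwise comparison matrix, with a consistency index as a sanity check. The judgment matrix below is hypothetical.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                   # pairwise importance judgments

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                      # normalized priority weights
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)    # consistency index
print(w, round(ci, 4))
```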
Evaluation and Field Assessment of Bifacial Photovoltaic Module Power Rating Methodologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deline, Chris; MacAlpine, Sara; Marion, Bill
2016-11-21
1-sun power ratings for bifacial modules are currently undefined. This is partly because there is no standard definition of rear irradiance given 1000 Wm-2 on the front. Using field measurements and simulations, we evaluate multiple deployment scenarios for bifacial modules and provide details on the amount of irradiance that could be expected. A simplified case that represents a single module deployed under conditions consistent with existing 1-sun irradiance standards leads to a bifacial reference condition of 1000 Wm-2 Gfront and 130-140 Wm-2 Grear. For fielded systems of bifacial modules, Grear magnitude and spatial uniformity will be affected by self-shade from adjacent modules, varied ground cover, and ground-clearance height. A standard measurement procedure for bifacial modules is also currently undefined. A proposed international standard is under development, which provides the motivation for this work. Here, we compare outdoor field measurements of bifacial modules with irradiance on both sides with proposed indoor test methods where irradiance is only applied to one side at a time. The indoor method has multiple advantages, including controlled and repeatable irradiance and thermal environment, along with allowing the use of conventional single-sided flash test equipment. The comparison results are promising, showing that the indoor and outdoor methods agree within 1%-2% for multiple rear-irradiance conditions and bifacial module types.
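A minimal sketch of the single-sided-flash idea compared here: flash the front side only, at an equivalent irradiance G_eq = G_front + phi * G_rear, where phi is the module's bifaciality coefficient. The coefficient, power value, and linear scaling below are illustrative assumptions, not the proposed standard's exact procedure.

```python
def equivalent_irradiance(g_front=1000.0, g_rear=135.0, phi=0.9):
    """Front-side irradiance that mimics simultaneous front + rear illumination."""
    return g_front + phi * g_rear

def bifacial_rating(p_stc=350.0):
    """Rated power assuming simple linear scaling of power with irradiance."""
    return p_stc * equivalent_irradiance() / 1000.0

print(round(bifacial_rating(), 1))   # ~392.5 W for the assumed module
```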
[Index assessment of airborne VOCs pollution in automobile for transporting passengers].
Chen, Xiao-Kai; Cheng, He-Ming; Luo, Hui-Long
2013-12-01
Passenger cars are among the most common means of transport, and in-car airborne volatile organic compounds (VOCs) are harmful to health. In order to analyze the pollution levels of benzene, toluene, ethylbenzene, xylenes, styrene and TVOC, an index evaluation method was used according to domestic and international standards of indoor and in-car air quality (IAQ). Under the Chinese GB/T 18883-2002 IAQ Standard, GB/T 17729-2009 Hygienic Standard for the Air Quality inside Long Distance Coach, GB/T 27630-2011 Guideline for Air Quality Assessment of Passenger Car, and the IAQ standards of South Korea, Norway, Japan and Germany, the heaviest VOC pollutant in the passenger cars was TVOC, TVOC, benzene, benzene, TVOC, toluene and TVOC, respectively, and the average pollution grade of automotive IAQ was moderate pollution, moderate pollution, clean, light pollution, moderate pollution, clean and heavy pollution, respectively. Index evaluation can effectively analyze vehicular interior air quality, and the result differs significantly between standards; the German standard is the most stringent, the Chinese GB/T 18883-2002 standard is relatively stringent, and GB/T 27630-2011 is the most relaxed.
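A minimal sketch of the index evaluation: a single-factor index I_i = C_i / S_i per pollutant plus a Nemerow-style comprehensive index; the limit values are placeholders, not figures from the cited standards.

```python
import numpy as np

measured = {"benzene": 0.08, "toluene": 0.15, "TVOC": 0.90}   # mg/m3, illustrative
limits   = {"benzene": 0.11, "toluene": 0.20, "TVOC": 0.60}   # placeholder limits

sub = {k: measured[k] / limits[k] for k in measured}           # single-factor indices
worst = max(sub, key=sub.get)                                  # heaviest pollutant
vals = np.array(list(sub.values()))
nemerow = np.sqrt((vals.max() ** 2 + vals.mean() ** 2) / 2.0)  # comprehensive index
print(sub, worst, round(float(nemerow), 2))
```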
Newlin-Canzone, Elizabeth T; Scerbo, Mark W; Gliva-McConvey, Gayle; Wallace, Amelia M
2013-08-01
This study was designed to examine the challenges standardized patients face while in role and to use the findings to enhance training methods. The study investigated the effect of improvisation and multiple-task performance on the ability of standardized patients to observe and evaluate another person's communication behaviors, and the associated mental workload. Twenty standardized patients participated in a 2 (interview type: with and without improvisation) x 2 (observation type: passive and active) within-groups design. The results indicated that both active observation and improvisation had a negative effect on the standardized patients' ability to observe the learner, with more than 75% of nonverbal behaviors missed during active improvisational encounters. Moreover, standardized patients experienced the highest mental demand during active improvisational encounters. The findings suggest that the need to simultaneously portray a character and assess a learner may negatively affect the ability of standardized patients to provide accurate evaluations of a learner, particularly when they are required to improvise responses, underscoring the need for specific and targeted training.
NASA Astrophysics Data System (ADS)
Cassette, P.; Bouchard, J.; Chauvenet, B.
1994-01-01
Iodine-129 is a long-lived fission product, with physical and chemical properties that make it a good candidate for evaluating the environmental impact of the nuclear energy fuel cycle. To avoid solid source preparation problems, liquid scintillation has been used to standardize this nuclide for a EUROMET intercomparison. Two methods were used to measure the iodine-129 activity: triple-to-double-coincidence ratio liquid scintillation counting and 4π β-γ coincidence counting; the results are in good agreement.
24 CFR 35.1305 - Definitions and other general requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint Hazard Evaluation and Hazard Reduction Activities § 35.1305...
In 2008, the United States Environmental Protection Agency (USEPA) set a new National Ambient Air Quality Standard (NAAQS) for lead (Pb) in total suspended particulate matter (Pb-TSP) which called for significant decreases in the allowable limits. The Federal Reference Method (FR...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... systems. E. Quantitative Methods for Comparing Capital Frameworks The NPR sought comment on how the... industry while assessing levels of capital. This commenter points out maintaining reliable comparative data over time could make quantitative methods for this purpose difficult. For example, evaluating asset...
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate and compare amino acid digestibility of several feedstuffs using 2 commonly accepted methods: the precision-fed cecectomized rooster assay (PFR) and the standardized ileal amino acid assay (SIAAD). Six corn, 6 corn distillers dried grains with or without s...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tenent, Robert C.
2017-12-06
NREL will conduct durability testing of Sage Electrochromics dynamic windows products using American Society for Testing and Materials (ASTM) standard methods and drive parameters as defined by Sage. Window units will be tested and standard analysis performed. Data will be summarized and reported back to Sage at the end of the testing period.
ERIC Educational Resources Information Center
Echon, Roger M.
2014-01-01
Purpose/Objectives: The purpose of this paper is to provide baseline data and characteristics of food served and consumed prior to the recently mandated nutrition standards as authorized by the Healthy, Hunger-Free Kids Act of 2010 (HHFKA). Methods: Over 600,000 school lunch menus with associated food production records from 61 elementary schools…
USDA-ARS?s Scientific Manuscript database
The soybean cyst nematode (SCN) remains the most economically important pathogen of soybean in North America. Most farmers do not sample for SCN believing instead that the use of SCN-resistant varieties is sufficient to avoid yield losses due to the nematode according to surveys conducted in Illino...
Mark Hitchcock; Alan Ager
1992-01-01
National Forests in the Pacific Northwest Region have incorporated elk habitat standards into Forest plans to ensure that elk habitat objectives are met on multiple use land allocations. Many Forests have employed versions of the habitat effectiveness index (HEI) as a standard method to evaluate habitat. Field application of the HEI model unfortunately is a formidable...
Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O
2006-03-01
To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.
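A minimal sketch of the percentile idea behind standardized transfer functions such as Pc-VRT: anchor the opacity ramp to fixed intensity percentiles of each dataset, so the rendering is reproducible across patients and readers. The percentile choices are illustrative, not the study's settings.

```python
import numpy as np

def percentile_opacity(volume, lo_pct=90.0, hi_pct=99.0):
    """Linear opacity ramp between two fixed intensity percentiles."""
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)

vol = np.random.default_rng(5).gamma(2.0, 50.0, (32, 32, 32))  # stand-in MRA volume
print(float(percentile_opacity(vol).mean()))
```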
Developing Carbon Nanotube Standards at NASA
NASA Technical Reports Server (NTRS)
Nikolaev, Pasha; Arepalli, Sivaram; Sosa, Edward; Gorelik, Olga; Yowell, Leonard
2007-01-01
Single wall carbon nanotubes (SWCNTs) are currently being produced and processed by several methods. Many researchers are continuously modifying existing methods and developing new methods to incorporate carbon nanotubes into other materials and utilize the phenomenal properties of SWCNTs. These applications require the availability of SWCNTs with known properties, and there is a need to characterize these materials in a consistent manner. In order to monitor such progress, it is critical to establish a means by which to define the quality of SWCNT material and to develop characterization standards to evaluate nanotube quality across the board. Such characterization standards should be applicable to as-produced materials as well as processed SWCNT materials. In order to address this issue, NASA Johnson Space Center has developed a protocol for purity and dispersion characterization of SWCNTs (Ref.1). The NASA JSC group is currently working with NIST, ANSI and ISO to establish purity and dispersion standards for SWCNT material. A practice guide for nanotube characterization is being developed in cooperation with NIST (Ref.2). Furthermore, work is in progress to incorporate additional characterization methods for electrical, mechanical, thermal, optical and other properties of SWCNTs.
Natural air leak test without submergence for spontaneous pneumothorax.
Uramoto, Hidetaka; Tanaka, Fumihiro
2011-12-24
Postoperative air leaks are frequent complications after surgery for a spontaneous pneumothorax (SP). We herein describe a new method to test for air leaks by using a transparent film and thoracic tube in a closed system. Between 2005 and 2010, 35 patients underwent a novel method for evaluating air leaks without submergence, and their clinical records were retrospectively reviewed. The data on patient characteristics, surgical details, and perioperative outcomes were analyzed. The differences in the clinical background and intraoperative factors did not reach a statistically significant level between the new and classical methods. The incidence of recurrence was also equivalent to the standard method. However, the length of the operation and drainage periods were significantly shorter in patients evaluated using the new method than the conventional method. Further, no postoperative complications were observed in patients evaluated using the new method. This simple technique is satisfactorily effective and does not result in any complications.
Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B
2013-08-01
Cerebral infarct volume (CIV) as observed on follow-up CT is an important radiologic outcome measure of the effectiveness of treatment of patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. Accuracy was defined as the correlation with the manual assessment as the reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the manual interobserver correlation (R = 0.98). The accuracy of the automated method was excellent, with a mean difference of 0.5 mL and limits of agreement of -38.0 to 39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9 to 44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed a strong correlation and accuracy with the manual reference measurement. This approach has the potential to become the standard for assessing infarct volume as a secondary outcome measure when evaluating the effectiveness of treatment.
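As a brief sketch, the agreement statistics reported here (mean difference and 95% limits of agreement) are computed as follows; the volumes are synthetic stand-ins, not study data.

```python
import numpy as np

rng = np.random.default_rng(6)
manual = rng.uniform(5.0, 250.0, 34)              # reference volumes (mL), 34 patients
auto = manual + rng.normal(0.5, 19.0, 34)         # automated volumes (mL)

diff = auto - manual
mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1),
       mean_diff + 1.96 * diff.std(ddof=1))       # Bland-Altman limits of agreement
print(round(mean_diff, 1), [round(x, 1) for x in loa])
```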
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program, GrafLab (GRAvity Field LABoratory), for spherical harmonic synthesis (SHS), created in MATLAB®. The program allows the user to comfortably compute 38 different functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we applied the lumped coefficients approach, due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
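A minimal sketch (in Python rather than MATLAB) of the standard forward column recursion for fnALFs in the geodetic 4*pi normalization; as the abstract notes, this plain-arithmetic form is stable only up to moderate degrees.

```python
import numpy as np

def fnalf(nmax, theta):
    """Fully normalized associated Legendre functions P[n, m] at colatitude theta."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    for m in range(1, nmax + 1):                  # sectorial seeds P[m, m]
        f = np.sqrt(3.0) if m == 1 else np.sqrt((2 * m + 1) / (2.0 * m))
        P[m, m] = f * u * P[m - 1, m - 1]
    for m in range(nmax):
        P[m + 1, m] = np.sqrt(2 * m + 3) * t * P[m, m]
        for n in range(m + 2, nmax + 1):          # forward column recursion in n
            a = np.sqrt((2 * n - 1) * (2 * n + 1) / ((n - m) * (n + m)))
            b = np.sqrt((2 * n + 1) * (n + m - 1) * (n - m - 1)
                        / ((n - m) * (n + m) * (2 * n - 3)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P

print(fnalf(4, np.radians(60.0))[2, 1])   # sqrt(15)*sin(60deg)*cos(60deg) ~ 1.6771
```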
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
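A minimal sketch of the log-probability regression (regression-on-order-statistics) estimator described here: fit log10 concentration against normal scores of the uncensored observations, impute the censored portion from the fitted line, then compute summary statistics; the data and detection limit are illustrative.

```python
import numpy as np
from scipy import stats

def log_probability_regression(values, censored):
    """values holds the detection limit where censored is True (left-censoring)."""
    n = len(values)
    order = np.argsort(values)
    z = stats.norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))  # Blom scores
    uncens = ~censored[order]
    slope, intercept = np.polyfit(z[uncens], np.log10(values[order][uncens]), 1)
    logs = np.log10(values[order].astype(float))
    logs[~uncens] = intercept + slope * z[~uncens]   # impute the censored tail
    est = 10.0 ** logs
    return est.mean(), est.std(ddof=1)

vals = np.array([0.5, 0.5, 0.5, 0.8, 1.2, 1.9, 2.5, 4.0, 6.3, 9.1])
print(log_probability_regression(vals, vals <= 0.5))  # detection limit = 0.5
```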
Standardisation of costs: the Dutch Manual for Costing in economic evaluations.
Oostenbrink, Jan B; Koopmanschap, Marc A; Rutten, Frans F H
2002-01-01
The lack of a uniform costing methodology is often considered a weakness of economic evaluations that hinders the interpretation and comparison of studies. Standardisation is therefore an important topic within the methodology of economic evaluations and in national guidelines that formulate the formal requirements for studies to be considered when deciding on the reimbursement of new medical therapies. Recently, the Dutch Manual for Costing: Methods and Standard Costs for Economic Evaluations in Health Care (further referred to as "the manual") has been published, in addition to the Dutch guidelines for pharmacoeconomic research. The objectives of this article are to describe the main content of the manual and to discuss some key issues of the manual in relation to the standardisation of costs. The manual introduces a six-step procedure for costing. These steps concern: the scope of the study; the choice of cost categories; the identification of units; the measurement of resource use; the monetary valuation of units; and the calculation of unit costs. Each step consists of a number of choices, and these together define the approach taken. In addition to a description of the costing process, five key issues regarding the standardisation of costs are distinguished. These are the use of basic principles, methods for measurement and valuation, standard costs (average prices of healthcare services), standard values (values that can be used within unit cost calculations), and the reporting of outcomes. The use of the basic principles, standard values and minimal requirements for reporting outcomes, as defined in the manual, is obligatory in studies that support submissions to acquire reimbursement for new pharmaceuticals. Whether to use standard costs, and the choice of a particular method to measure or value costs, is left mainly to the investigator, depending on the specific study setting. In conclusion, several instruments are available to increase standardisation in costing methodology among studies. These instruments have to be used in such a way that a balance is found between standardisation and the specific setting in which a study is performed. The way in which the Dutch manual tries to reach this balance can serve as an illustration for other countries.
Meija, Juris; Chartrand, Michelle M G
2018-01-01
Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming standard practice, existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models that explicitly account for the measurement uncertainty of the international standards along with the uncertainty attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises from a small number of replicate measurements, and discusses multi-laboratory data reduction while accounting for the inevitable correlations between laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
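A minimal sketch of an errors-in-variables fit of the York type (uncorrelated errors), which weights both the measured deltas and the standards' assigned values; the calibration points and uncertainties below are hypothetical.

```python
import numpy as np

def york_fit(x, y, sx, sy, n_iter=100):
    """Errors-in-variables straight line y = a + b*x with uncertain x and y."""
    b = np.polyfit(x, y, 1)[0]                  # start from the OLS slope
    for _ in range(n_iter):
        W = 1.0 / (sy ** 2 + b ** 2 * sx ** 2)  # per-point weights
        xbar, ybar = np.sum(W * x) / np.sum(W), np.sum(W * y) / np.sum(W)
        U, V = x - xbar, y - ybar
        beta = W * (U * sy ** 2 + b * V * sx ** 2)
        b = np.sum(W * beta * V) / np.sum(W * beta * U)
    return ybar - b * xbar, b                   # intercept a, slope b

x = np.array([-30.10, -10.05, 0.45])            # measured deltas of three standards
y = np.array([-30.00, -10.00, 0.50])            # assigned values on the delta scale
a, b = york_fit(x, y, sx=np.full(3, 0.05), sy=np.full(3, 0.02))
print(a, b)                                     # delta_true = a + b * delta_measured
```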
Harrison, Jesse P; Boardman, Carl; O'Callaghan, Kenneth; Delort, Anne-Marie; Song, Jim
2018-05-01
Plastic litter is encountered in aquatic ecosystems across the globe, including polar environments and the deep sea. To mitigate the adverse societal and ecological impacts of this waste, there has been debate on whether 'biodegradable' materials should be granted exemptions from plastic bag bans and levies. However, great care must be exercised when attempting to define this term, due to the broad and complex range of physical and chemical conditions encountered within natural ecosystems. Here, we review existing international industry standards and regional test methods for evaluating the biodegradability of plastics within aquatic environments (wastewater, unmanaged freshwater and marine habitats). We argue that current standards and test methods are insufficient in their ability to realistically predict the biodegradability of carrier bags in these environments, due to several shortcomings in experimental procedures and a paucity of information in the scientific literature. Moreover, existing biodegradability standards and test methods for aquatic environments do not involve toxicity testing or account for the potentially adverse ecological impacts of carrier bags, plastic additives, polymer degradation products or small (microscopic) plastic particles that can arise via fragmentation. Successfully addressing these knowledge gaps is a key requirement for developing new biodegradability standard(s) for lightweight carrier bags.
Perception of Science Standards' Effectiveness and Their Implementation by Science Teachers
NASA Astrophysics Data System (ADS)
Klieger, Aviva; Yakobovitch, Anat
2011-06-01
The introduction of standards into the education system poses numerous challenges and difficulties. As with any change, plans should be made for teachers to understand and implement the standards. This study examined science teachers' perceptions of the effectiveness of the standards for teaching and learning, and the extent and ease or difficulty of implementing science standards in different grades. The research used a mixed-methods approach, combining qualitative and quantitative research methods. The research tools were questionnaires administered to elementary school science teachers. The majority of the teachers perceived the standards in science as effective for teaching and learning, and only a small minority viewed them as restricting their pedagogical autonomy. Differences were found in the extent of implementation of the different standards and between different grades. The teachers perceived different degrees of difficulty in implementing the different standards. The standards perceived as easiest to implement were in the fields of biology and materials, whereas the standards in earth sciences and the universe, and in technology, were the most difficult to implement; these are also the standards the teachers evaluated as being implemented to the least extent. Exposing teachers' perceptions of the effectiveness of standards and of their implementation may aid policymakers in future planning of teachers' professional development for the implementation of standards.
Comparing generalized ensemble methods for sampling of systems with many degrees of freedom.
Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa
2016-11-07
We compare two standard replica exchange methods, using temperature and dielectric constant as the scaling variables for independent replicas, against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as on alanine dipeptide in water, whose relatively small phase spaces allow quantitative convergence metrics to be defined. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5-residue met-enkephalin peptide, for which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW than for TREx. Finally, we apply the temperature methods to the 42-residue amyloid-β peptide, for which we find non-negligible differences in the disordered ensemble sampled by TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium (http://www.omnia.md/).
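A minimal sketch of the kind of convergence check described above: the Kullback-Leibler divergence between histograms of one observable collected from two independent trajectories. The binning, the toy observable, and the function name are illustrative assumptions, not the authors' exact metric.

```python
import numpy as np
from scipy.stats import entropy

def kl_between_trajectories(a, b, bins=60, eps=1e-12):
    """KL divergence between histograms of one observable sampled by two
    independent trajectories; it should decay toward zero as both converge."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    return entropy(p + eps, q + eps)   # eps keeps empty bins finite

# Toy example: two well-converged samples of a dihedral-like observable.
rng = np.random.default_rng(0)
a = rng.normal(-1.2, 0.3, 5000)
b = rng.normal(-1.2, 0.3, 5000)
print(kl_between_trajectories(a, b))
```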
Assessing the reliability of ecotoxicological studies: An overview of current needs and approaches.
Moermond, Caroline; Beasley, Amy; Breton, Roger; Junghans, Marion; Laskowski, Ryszard; Solomon, Keith; Zahner, Holly
2017-07-01
In general, reliable studies are well designed and well performed, and enough details on study design and performance are reported to assess the study. For hazard and risk assessment in various legal frameworks, many different types of ecotoxicity studies need to be evaluated for reliability. These studies vary in study design, methodology, quality, and level of detail reported (e.g., reviews, peer-reviewed research papers, or industry-sponsored studies documented under Good Laboratory Practice [GLP] guidelines). Regulators have the responsibility to make sound and verifiable decisions and should evaluate each study for reliability in accordance with scientific principles regardless of whether they were conducted in accordance with GLP and/or standardized methods. Thus, a systematic and transparent approach is needed to evaluate studies for reliability. In this paper, 8 different methods for reliability assessment were compared using a number of attributes: categorical versus numerical scoring methods, use of exclusion and critical criteria, weighting of criteria, whether methods are tested with case studies, domain of applicability, bias toward GLP studies, incorporation of standard guidelines in the evaluation method, number of criteria used, type of criteria considered, and availability of guidance material. Finally, some considerations are given on how to choose a suitable method for assessing reliability of ecotoxicity studies. Integr Environ Assess Manag 2017;13:640-651. © 2016 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Michael; Battista, Jerry; Chen, Jeff
Purpose: To develop a radiotherapy dose tracking and plan evaluation technique using cone-beam computed tomography (CBCT) images. Methods: We developed a patient-specific method of calibrating CBCT image sets for dose calculation. The planning CT was first registered with the CBCT using deformable image registration (DIR). A scatter plot was generated between the CT numbers of the planning CT and CBCT for each slice. The CBCT calibration curve was obtained by least-squares fitting of the data, and applied to each CBCT slice. The calibrated CBCT was then merged with the original planning CT to extend the small field of view of the CBCT. Finally, the treatment plan was copied to the merged CT for dose tracking and plan evaluation. The proposed patient-specific calibration method was also compared to two methods proposed in the literature. To evaluate the accuracy of each technique, 15 head-and-neck patients requiring plan adaptation were arbitrarily selected from our institution. The original plan was calculated on each method's data set, including a second planning CT acquired within 48 hours of the CBCT (serving as gold standard). Clinically relevant dose metrics and 3D gamma analysis of dose distributions were compared between the different techniques. Results: Compared to the gold standard of using planning CTs, the patient-specific CBCT calibration method was shown to provide promising results with gamma pass rates above 95% and average dose metric agreement within 2.5%. Conclusions: The patient-specific CBCT calibration method could potentially be used for on-line dose tracking and plan evaluation, without requiring a re-planning CT session.
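A minimal sketch of the per-slice calibration step described above, assuming the planning CT and CBCT slices are already deformably registered onto a common grid; the array names and toy values are illustrative, not clinical data.

```python
import numpy as np

def calibrate_cbct_slice(ct_slice, cbct_slice):
    """Map CBCT numbers onto planning-CT numbers for one slice via a
    least-squares line through the voxelwise scatter plot (assumes the
    two slices are already deformably registered onto the same grid)."""
    slope, intercept = np.polyfit(cbct_slice.ravel(), ct_slice.ravel(), 1)
    return slope * cbct_slice + intercept

# Toy 2x2 "slices" standing in for registered images.
ct   = np.array([[0.0, 40.0], [1000.0, -800.0]])
cbct = np.array([[10.0, 55.0], [900.0, -700.0]])
print(calibrate_cbct_slice(ct, cbct))
```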
Performance evaluation of BPM system in SSRF using PCA method
NASA Astrophysics Data System (ADS)
Chen, Zhi-Chu; Leng, Yong-Bin; Yan, Ying-Bing; Yuan, Ren-Xian; Lai, Long-Wei
2014-07-01
The beam position monitor (BPM) system is of central importance in a light source, and its capability depends on the resolution of the system. The traditional approach of taking the standard deviation of the raw data merely gives an upper limit on the resolution. Principal component analysis (PCA), previously introduced into accelerator physics, can be used to remove the genuine beam signals: beam-related information is extracted from the data before the BPM performance is evaluated. A series of studies at the Shanghai Synchrotron Radiation Facility (SSRF) showed PCA to be an effective and robust method for evaluating the performance of our BPM system.
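A hedged sketch of the PCA idea: treat the turn-by-turn readings of all BPMs as a matrix, remove the leading correlated modes (genuine beam motion), and take the spread of the residual as a resolution estimate. The number of modes removed and the toy data are assumptions for illustration, not the SSRF analysis.

```python
import numpy as np

def bpm_resolution_pca(readings, n_modes=2):
    """Estimate per-BPM resolution from a (turns x BPMs) reading matrix:
    remove the leading SVD modes (correlated beam motion) and take the
    standard deviation of the residual as the noise floor."""
    x = readings - readings.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s[:n_modes] = 0.0                       # drop beam-related modes
    residual = (u * s) @ vt
    return residual.std(axis=0)

# Toy data: one common betatron-like signal seen by 8 BPMs, plus noise.
rng = np.random.default_rng(1)
turns = np.arange(2048)
common = np.sin(2 * np.pi * 0.31 * turns)[:, None] * rng.uniform(0.5, 2.0, 8)
noise = rng.normal(0.0, 1e-3, (2048, 8))
print(bpm_resolution_pca(common + noise))   # should be close to 1e-3
```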
Study on the criteria for assessing skull-face correspondence in craniofacial superimposition.
Ibáñez, Oscar; Valsecchi, Andrea; Cavalli, Fabio; Huete, María Isabel; Campomanes-Alvarez, Blanca Rosario; Campomanes-Alvarez, Carmen; Vicente, Ricardo; Navega, David; Ross, Ann; Wilkinson, Caroline; Jankauskas, Rimantas; Imaizumi, Kazuhiko; Hardiman, Rita; Jayaprakash, Paul Thomas; Ruiz, Elena; Molinero, Francisco; Lestón, Patricio; Veselovskaya, Elizaveta; Abramov, Alexey; Steyn, Maryna; Cardoso, Joao; Humpire, Daniel; Lusnig, Luca; Gibelli, Daniele; Mazzarelli, Debora; Gaudio, Daniel; Collini, Federica; Damas, Sergio
2016-11-01
Craniofacial superimposition has the potential to be used as an identification method when other traditional biological techniques are not applicable due to insufficient quality or absence of ante-mortem and post-mortem data. Despite having been used in many countries as a method of inclusion and exclusion for over a century, it lacks standards. Thus, the purpose of this research is to provide forensic practitioners with standard criteria for analysing skull-face relationships. Thirty-seven experts from 16 different institutions participated in this study, which consisted of evaluating 65 criteria for assessing skull-face anatomical consistency on a sample of 24 different skull-face superimpositions. An unbiased statistical analysis established the most objective and discriminative criteria. Results did not show strong associations; however, important insights to address the lack of standards were provided. In addition, a novel methodology for understanding and standardizing identification methods based on the observation of morphological patterns has been proposed. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.
Study on Quality Standard of Processed Curcuma Longa Radix
Zhao, Yongfeng; Quan, Liang; Zhou, Haiting; Cao, Dong; Li, Wenbing; Yang, Zhuo
2017-01-01
To control the quality of Curcuma Longa Radix by establishing quality standards, this paper adds determinations of extract content and volatile oil. Curcumin was selected as the internal marker, and the relative correction factors (RCFs) of demethoxycurcumin and bisdemethoxycurcumin were established by high-performance liquid chromatography (HPLC). The contents of the multiple components were calculated based on their RCFs. The rationality and feasibility of the methods were evaluated by comparing the quantitative results of the external standard method (ESM) and quantitative analysis of multi-components by single marker (QAMS). Ethanol extracts ranged from 9.749 to 15.644%, with a mean of 13.473%. The volatile oil ranged from 0.45 to 0.90 mL/100 g, with a mean of 0.66 mL/100 g. The method is accurate and feasible and can provide a reference for further comprehensive and effective control of the quality standard of Curcuma Longa Radix and its processed products. PMID:29375640
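A small numeric sketch of the QAMS logic described above: only the internal marker (curcumin) needs a calibration standard in routine runs, and the other components are quantified through their relative correction factors. All response factors and peak areas below are invented for illustration and are not the paper's data.

```python
# Response factors k = peak area per unit concentration, obtained once
# from reference standards (invented numbers).
k_curcumin = 2500.0
k_demethoxycurcumin = 2100.0
rcf = k_curcumin / k_demethoxycurcumin     # relative correction factor

# Routine run: only curcumin is calibrated; the analogue's concentration
# is computed from its peak area via the RCF.
area_curcumin, area_demethoxycurcumin = 5120.0, 830.0
c_curcumin = area_curcumin / k_curcumin
c_demethoxycurcumin = rcf * area_demethoxycurcumin / k_curcumin
print(f"curcumin {c_curcumin:.3f}, demethoxycurcumin {c_demethoxycurcumin:.3f}")
```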
Developing criteria to establish Trusted Digital Repositories
Faundeen, John L.
2017-01-01
This paper details the drivers, methods, and outcomes of the U.S. Geological Survey’s quest to establish criteria by which to judge its own digital preservation resources as Trusted Digital Repositories. Drivers included recent U.S. legislation focused on data and asset management conducted by federal agencies spending $100M USD or more annually on research activities. The methods entailed seeking existing evaluation criteria from national and international organizations such as International Standards Organization (ISO), U.S. Library of Congress, and Data Seal of Approval upon which to model USGS repository evaluations. Certification, complexity, cost, and usability of existing evaluation models were key considerations. The selected evaluation method was derived to allow the repository evaluation process to be transparent, understandable, and defensible; factors that are critical for judging competing, internal units. Implementing the chosen evaluation criteria involved establishing a cross-agency, multi-disciplinary team that interfaced across the organization.
Acoustic Analysis of Voice in Singers: A Systematic Review
ERIC Educational Resources Information Center
Gunjawate, Dhanshree R.; Ravi, Rohit; Bellur, Rajashekhar
2018-01-01
Purpose: Singers are vocal athletes having specific demands from their voice and require special consideration during voice evaluation. Presently, there is a lack of standards for acoustic evaluation in them. The aim of the present study was to systematically review the available literature on the acoustic analysis of voice in singers. Method: A…
Cost Savings Threshold Analysis of a Capacity-Building Program for HIV Prevention Organizations
ERIC Educational Resources Information Center
Dauner, Kim Nichols; Oglesby, Willie H.; Richter, Donna L.; LaRose, Christopher M.; Holtgrave, David R.
2008-01-01
Although the incidence of HIV each year remains steady, prevention funding is increasingly competitive. Programs need to justify costs in terms of evaluation outcomes, including economic ones. Threshold analyses set performance standards to determine program effectiveness relative to that threshold. This method was used to evaluate the potential…
Methods for evaluating stream, riparian, and biotic conditions
William S. Platts; Walter F. Megahan; G. Wayne Minshall
1983-01-01
This report develops a standard way of measuring stream, riparian, and biotic conditions and evaluates the validity of the measurements recommended. Accuracy and precision of most measurements are defined. This report will be of value to those persons documenting, monitoring, or predicting stream conditions and their biotic resources, especially those related to...
ERIC Educational Resources Information Center
Kane, Michael T.; Mroch, Andrew A.
2010-01-01
In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…
USDA-ARS?s Scientific Manuscript database
Research is needed over a wide geographic range of soil and weather scenarios to evaluate methods and tools for corn N fertilizer applications. The objectives of this research were to conduct standardized corn N rate response field studies to evaluate the performance of multiple public-domain N deci...
Rapid viability tests of the Category B agent Escherichia coli O157:H7 were evaluated after disinfection with chlorine. The metabolic activity dyes ChemChrome V6, a modified fluorescein diacetate (FDA), and 5-cyano-2,3-ditolyl tetrazolium chloride (CTC) were compared to standard ...
Evaluating an Objective Structured Clinical Examination (OSCE) Adapted for Social Work
ERIC Educational Resources Information Center
Bogo, Marion; Regehr, Cheryl; Katz, Ellen; Logie, Carmen; Tufford, Lea; Litvack, Andrea
2012-01-01
Objectives: To evaluate an objective structured clinical examination (OSCE) adapted for social work in a lab course and examine the degree to which it predicts competence in the practicum. Method: 125 Masters students participated in a one-scenario OSCE and wrote responses to standardized reflection questions. OSCE performance and reflections were…
Noble, Simon; Pease, Nikki; Sui, Jessica; Davies, James; Lewis, Sarah; Malik, Usman; Alikhan, Raza; Prout, Hayley; Nelson, Annmarie
2016-11-28
Cancer-associated thrombosis (CAT) is a complex condition, which may present to any healthcare professional and at any point during the cancer journey. As such, patients may be managed by a number of specialties, resulting in inconsistent practice and suboptimal care. We describe the development of a dedicated CAT service and its evaluation. Setting: specialist cancer centre, district general hospital and primary care. Participants: patients with CAT and their referring clinicians. A cross-specialty team developed a dedicated CAT service, including clear referral pathways, consistent access to medicines, patient information and a specialist clinic. The service was evaluated using a mixed-methods evaluation, including audits of clinical practice, clinical outcomes, staff surveys and qualitative interviewing of patients and healthcare professionals. Data from 457 consecutive referrals over an 18-month period were evaluated. The CAT service has led to an 88% increase in safe and consistent community prescribing of low-molecular-weight heparin, with improved access to specialist advice and information. Patients reported improved understanding of their condition, enabling better self-management as well as better access to support and information. Referring clinicians reported better care standards for their patients with improved access to expertise and appropriate management. A dedicated CAT service improves overall standards of care and is viewed positively by patients and clinicians alike. Further health economic evaluation would enhance the case for establishing this as the standard model of care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation under the International Organization for Standardization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Moreover, the required performance-characteristic tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. We therefore propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by ETAAS. In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
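As a hedged illustration of two of the listed validation parameters (precision and accuracy), the following computes bias, repeatability, and a simple expanded uncertainty from replicate measurements of a certified control; the numbers are invented, and the authors' full protocol covers many more characteristics.

```python
import numpy as np

# Replicate measurements of a certified control material (invented, ug/L).
target = 100.0
reps = np.array([98.4, 101.2, 99.7, 100.9, 97.8, 100.3])

bias_pct = 100 * (reps.mean() - target) / target          # trueness
cv_pct = 100 * reps.std(ddof=1) / reps.mean()             # repeatability
u_expanded = 2 * reps.std(ddof=1) / np.sqrt(len(reps))    # k = 2 coverage

print(f"bias {bias_pct:+.2f}%  CV {cv_pct:.2f}%  U(k=2) {u_expanded:.2f} ug/L")
```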
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.
Glover, Jack L; Hudson, Lawrence T
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so-called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
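A toy sketch of the Radon-transform idea: a straight wire integrates to a localized peak in the sinogram, so 2D line detection reduces to 1D peak detection. The scoring heuristic below is an assumption for illustration, not the authors' algorithm or the thresholds under ballot.

```python
import numpy as np
from skimage.transform import radon

def wire_peak_score(image):
    """Score wire presence: the Radon transform integrates along lines,
    so a straight wire becomes a localized peak in the sinogram."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image - image.mean(), theta=theta, circle=False)
    flat = np.abs(sinogram)
    # Peak contrast relative to the sinogram's overall spread (heuristic).
    return (flat.max() - np.median(flat)) / flat.std()

# Toy test image: faint horizontal wire on noise.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.05, (128, 128))
img[64, :] += 0.5
print(wire_peak_score(img))
```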
Review of research designs and statistical methods employed in dental postgraduate dissertations.
Shirahatti, Ravi V; Hegde-Shetiya, Sahana
2015-01-01
There is a need to evaluate the quality of postgraduate dissertations in dentistry submitted to the university in light of international standards of reporting. We conducted the review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate use of data presentation in postgraduate dental research, and of suggesting and recommending modifications. The public-access database of dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed against international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used to describe the data. "In vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines conservative dentistry (92%) and prosthodontics (75%) reported high numbers of in vitro research. The disciplines oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), not mentioning blinding, confounding variables, and calibration of measurements, misrepresenting the data by inappropriate presentation, errors in reporting probability values, and not reporting confidence intervals. A few studies showed grossly inappropriate choices of statistical tests, and many studies needed additional tests. Overall, the observations indicated the need to comply with standard guidelines for reporting research.
Methods for calculating dietary energy density in a nationally representative sample
Vernarelli, Jacqueline A.; Mitchell, Diane C.; Rolls, Barbara J.; Hartman, Terryl J.
2013-01-01
There has been growing interest in examining dietary energy density (ED, kcal/g) as it relates to various health outcomes. Consuming a diet low in ED has been recommended in the 2010 Dietary Guidelines, as well as by other agencies, as a dietary approach for disease prevention. Translating this recommendation into practice, however, is difficult. Currently there is no standardized method for calculating dietary ED, as dietary ED can be calculated with foods alone or with a combination of foods and beverages. Certain items may be defined as either a food or a beverage (e.g., meal replacement shakes) and require special attention. National survey data are an excellent resource for evaluating factors that are important to dietary ED calculation. The National Health and Nutrition Examination Survey (NHANES) nutrient and food database does not include an ED variable, so researchers must calculate ED independently. The objective of this study was to provide information that will inform the selection of a standardized ED calculation method by comparing and contrasting methods for ED calculation. The present study evaluates all consumed items and defines foods and beverages based on both United States Department of Agriculture (USDA) food codes and how the item was consumed. Results are presented as mean EDs for the different calculation methods, stratified by population demographics (e.g., age, sex). Using USDA food codes in the 2005–2008 NHANES, a standardized method for calculating dietary ED can be derived. This method can then be adapted by other researchers for consistency across studies. PMID:24432201
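To show why the food/beverage definition matters, here is a toy comparison of two ED calculation methods; the items, energies, and weights are invented for illustration and are not NHANES data.

```python
# One day's intake (invented): (item, kcal, grams, is_beverage).
day = [
    ("oatmeal", 150, 234, False),
    ("banana", 105, 118, False),
    ("coffee", 5, 240, True),
    ("meal-replacement shake", 220, 325, True),  # food or beverage? method-dependent
]

def energy_density(items, include_beverages):
    """ED = total kcal / total grams over the items kept by the method."""
    kept = [it for it in items if include_beverages or not it[3]]
    return sum(it[1] for it in kept) / sum(it[2] for it in kept)

print("foods only:     ", round(energy_density(day, False), 2), "kcal/g")
print("foods+beverages:", round(energy_density(day, True), 2), "kcal/g")
```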
Checking an integrated model of web accessibility and usability evaluation for disabled people.
Federici, Stefano; Micangeli, Andrea; Ruspantini, Irene; Borgianni, Stefano; Corradi, Fabrizio; Pasqualotto, Emanuele; Olivetti Belardinelli, Marta
2005-07-08
A combined objective-oriented and subjective-oriented method for evaluating the accessibility and usability of web pages for students with disability was tested. The objective-oriented approach verifies the conformity of interfaces to standard rules stated by national and international organizations responsible for web technology standardization, such as the W3C. Conversely, the subjective-oriented approach assesses how final users interact with the system, capturing levels of user satisfaction based on personal factors and environmental barriers. Five kinds of measurements were applied as objective-oriented and subjective-oriented tests. Objective-oriented evaluations were performed on the Help Desk web page for students with disability, included in the website of a large Italian state university. Subjective-oriented tests were administered to 19 students labelled as disabled on the basis of their own declaration at university enrolment: 13 students were tested by means of the SUMI test and six students by means of 'cooperative evaluation'. Objective-oriented and subjective-oriented methods highlighted different and sometimes conflicting results. Both methods pointed out much more consistency in levels of accessibility than of usability. Since usability is largely affected by individual differences in the user's own (dis)abilities, subjective-oriented measures underscored the fact that blind students encountered many more web-surfing difficulties.
Standard Test Procedures for Evaluating Various Leak Detection Methods
Learn about protocols that testers could use to demonstrate that an individual release detection equipment type could meet the performance requirements noted in the federal UST requirements for detecting leaks.