40 CFR 761.326 - Conducting the comparison study.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Conducting the comparison study. 761...-liquid PCB Remediation Waste Samples § 761.326 Conducting the comparison study. Extract or analyze the comparison study samples using the alternative method. For an alternative extraction method or alternative...
40 CFR 761.326 - Conducting the comparison study.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Conducting the comparison study. 761...-liquid PCB Remediation Waste Samples § 761.326 Conducting the comparison study. Extract or analyze the comparison study samples using the alternative method. For an alternative extraction method or alternative...
Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA
NASA Astrophysics Data System (ADS)
Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.
2018-03-01
Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using an experiment-planning technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method, to be validated, were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. None of the F-tests rejected the null hypothesis (H0), so there is no significant difference between the compared methods. Therefore, according to this study, the EDXRF method satisfies this method-comparison requirement.
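As a rough illustration of the analysis described above, the following minimal sketch runs a two-way ANOVA (method and element as factors) on hypothetical concentration data with statsmodels; the data values, element subset, and model specification are assumptions for illustration, not the study's figures.

```python
# Minimal sketch of a two-way ANOVA method comparison (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical paired measurements (wt%) of several elements by two methods.
data = pd.DataFrame({
    "method":  ["EDXRF"] * 5 + ["ASTM_E572"] * 5,
    "element": ["Mo", "Nb", "Cu", "Ni", "Cr"] * 2,
    "value":   [0.21, 0.05, 0.12, 0.30, 1.10,
                0.22, 0.05, 0.11, 0.31, 1.08],
})

# Two-way ANOVA with method and element as factors; the F-test on
# "method" checks whether the two methods differ significantly.
model = smf.ols("value ~ C(method) + C(element)", data=data).fit()
print(anova_lm(model))
```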
Differential Item Functioning Detection Across Two Methods of Defining Group Comparisons
Sari, Halil Ibrahim
2014-01-01
This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF studies. In this study, a simulation was conducted based on data from a 60-item ACT Mathematics test (ACT; Hanson & Béguin). The unsigned area measure method (Raju) was used as the DIF detection method. An application to operational data was also completed in the study, as well as a comparison of observed Type I error rates and false discovery rates across the two methods of defining groups. Results indicate that the amount of flagged DIF and the interpretations about DIF were not the same across the two methods in all conditions, and there may be some benefits to using composite group approaches. The results are discussed in connection with differing definitions of fairness. Recommendations for practice are made. PMID:29795837
ERIC Educational Resources Information Center
Sari, Halil Ibrahim; Huggins, Anne Corinne
2015-01-01
This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…
Thom, Howard H Z; Capkun, Gorana; Cerulli, Annamaria; Nixon, Richard M; Howard, Luke S
2015-04-12
Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of, two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons as evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows the inclusion of single-arm, before-and-after, observational studies to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH). Our method uses a random effects model for placebo improvements to include single-arm observational studies into a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH and aggregate data from RCTs and single-arm observational studies, both identified by a systematic review. Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not been previously possible due to the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide, so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice. Our method for the inclusion of single-arm observational studies allows the performance of indirect comparisons that had previously not been possible due to incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restrictions to RCT evidence and where a mixture of aggregate data and IPD is available.
Evaluating Blended and Flipped Instruction in Numerical Methods at Multiple Engineering Schools
ERIC Educational Resources Information Center
Clark, Renee; Kaw, Autar; Lou, Yingyan; Scott, Andrew; Besterfield-Sacre, Mary
2018-01-01
With the literature calling for comparisons among technology-enhanced or active-learning pedagogies, a blended versus flipped instructional comparison was made for numerical methods coursework using three engineering schools with diverse student demographics. This study contributes to needed comparisons of enhanced instructional approaches in STEM…
Galea, Karen S.; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-01-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs’ trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods’ comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. PMID:24598941
Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-06-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Lefrant, J-Y; Muller, L; de La Coussaye, J Emmanuel; Benbabaali, M; Lebris, C; Zeitoun, N; Mari, C; Saïssi, G; Ripart, J; Eledjam, J-J
2003-03-01
Comparisons of urinary bladder, oesophageal, rectal, axillary, and inguinal temperatures versus pulmonary artery temperature. Prospective cohort study. Intensive Care Unit of a University-Hospital. Forty-two intensive care patients requiring a pulmonary artery catheter (PAC). Patients requiring PAC and without oesophageal, urinary bladder, and/or rectal disease or recent surgery were included in the study. Temperature was simultaneously monitored with PAC, urinary, oesophageal, and rectal electronic thermometers and with axillary and inguinal gallium-in-glass thermometers. Comparisons used a Bland and Altman method. The pulmonary arterial temperature ranged from 33.7 degrees C to 40.2 degrees C. Urinary bladder temperature was assessed in the last 22 patients. A total of 529 temperature measurement comparisons were carried out (252 comparisons of oesophageal, rectal, inguinal, axillary, and pulmonary artery temperature measurements in the first 20 patients, and 277 comparisons using all methods in the last 22 patients). Nine to 18 temperature measurement comparisons were carried out per patient (median = 13). The mean differences between pulmonary artery temperatures and those of the different methods studied were: oesophageal (0.11+/-0.30 degrees C), rectal (-0.07+/-0.40 degrees C), axillary (0.27+/-0.45 degrees C), inguinal (0.17+/-0.48 degrees C), urinary bladder (-0.21+/-0.20 degrees C). In critically ill patients, urinary bladder and oesophageal electronic thermometers are more reliable than the electronic rectal thermometer, which in turn is better than the inguinal and axillary gallium-in-glass thermometers for measuring core temperature.
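The Bland and Altman analysis used above reduces to computing, for each method against the reference, the mean difference and its limits of agreement. A minimal sketch with hypothetical paired temperature readings follows; the numbers are illustrative only, not the study's data.

```python
# Bland-Altman agreement statistics for two paired measurement methods
# (hypothetical temperature data, degrees C).
import numpy as np

pulmonary_artery = np.array([36.8, 37.4, 38.1, 39.0, 36.5, 37.9])
urinary_bladder  = np.array([37.0, 37.6, 38.3, 39.2, 36.7, 38.1])

diff = urinary_bladder - pulmonary_artery
bias = diff.mean()                           # mean difference (bias)
sd = diff.std(ddof=1)                        # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"bias = {bias:.2f} C, limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) C")
```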
Krummenauer, Frank; Storkebaum, Kristin; Dick, H Burkhard
2003-01-01
The evaluation of new diagnostic measurement devices involves intraindividual comparison with an established standard method. However, reports in journal articles often fail to incorporate the intraindividual design adequately into the graphic representation. This article illustrates the drawbacks and the possible erroneous conclusions caused by this misleading practice using recent method comparison data from axial length measurement in 220 consecutive patients by both applanation ultrasound and partial coherence interferometry. Graphic representation of such method comparison data should be based on boxplots of intraindividual differences or on Bland-Altman plots. Otherwise, severe deviations between the measurement devices could be erroneously ignored and false-positive conclusions about the concordance of the instruments could result. Graphic representation of method comparison data should therefore reflect the underlying intraindividual study design.
A Comparison of Methods for Detecting Differential Distractor Functioning
ERIC Educational Resources Information Center
Koon, Sharon
2010-01-01
This study examined the effectiveness of the odds-ratio method (Penfield, 2008) and the multinomial logistic regression method (Kato, Moen, & Thurlow, 2009) for measuring differential distractor functioning (DDF) effects in comparison to the standardized distractor analysis approach (Schmitt & Bleistein, 1987). Students classified as participating…
Characteristics of students in comparative problem solving
NASA Astrophysics Data System (ADS)
Irfan, M.; Sudirman; Rahardi, R.
2018-01-01
Teachers often provide examples and exercises to students on comparison problems involving a single quantity. In this study, the researchers posed a comparison problem in which two quantities are mixed, which requires a good understanding to solve. The study aimed to determine whether students understand comparison in depth and are able to solve non-routine comparison problems. It used a qualitative exploratory method, with the researchers conducting in-depth interviews with the subjects to explore their thinking processes when solving comparison problems. The subjects were three students selected by purposive sampling from 120 students. The researchers found that the three subjects had different characteristics: subject 1 answered the first and second questions with elimination and substitution methods (non-comparison); subject 2 answered the first question with the concept of comparison, although the answer was wrong, and the second question with elimination and substitution (non-comparison); and subject 3 answered both questions with the concept of comparison, getting the first question wrong because he was unable to understand the problem and the second correct. From the characteristics of the answers, the researchers divided the subjects into three groups based on their thinking processes: blind-proportion, partial-proportion, and proportion thinking.
Jensen, Scott A; Blumberg, Sean; Browning, Megan
2017-09-01
Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.
Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F
2016-09-01
The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
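The sensitivity and specificity figures reported for the two MVC variants follow directly from counts of true and false matches. A small helper like the following makes the calculation explicit; the counts used here are hypothetical, not the study's data.

```python
# Sensitivity and specificity from match/non-match classification counts
# (hypothetical numbers for illustration).
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true pairs correctly identified."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of non-pairs correctly rejected."""
    return tn / (tn + fp)

# Example: 19 of 20 true pairs found, 3 of 60 non-pairs wrongly matched.
print(f"sensitivity = {sensitivity(19, 1):.2f}")   # 0.95
print(f"specificity = {specificity(57, 3):.2f}")   # 0.95
```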
An Evaluation of Attitude-Independent Magnetometer-Bias Determination Methods
NASA Technical Reports Server (NTRS)
Hashmall, J. A.; Deutschmann, Julie
1996-01-01
Although several algorithms now exist for determining three-axis magnetometer (TAM) biases without the use of attitude data, there are few studies on the effectiveness of these methods, especially in comparison with attitude dependent methods. This paper presents the results of a comparison of three attitude independent methods and an attitude dependent method for computing TAM biases. The comparisons are based on in-flight data from the Extreme Ultraviolet Explorer (EUVE), the Upper Atmosphere Research Satellite (UARS), and the Compton Gamma Ray Observatory (GRO). The effectiveness of an algorithm is measured by the accuracy of attitudes computed using biases determined with that algorithm. The attitude accuracies are determined by comparison with known, extremely accurate, star-tracker-based attitudes. In addition, the effect of knowledge of calibration parameters other than the biases on the effectiveness of all bias determination methods is examined.
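One common attitude-independent formulation estimates the bias from the mismatch between measured and modelled field magnitudes, since |B_meas - b|^2 should equal |B_model|^2 regardless of attitude. The sketch below solves the linearised form of that relation by iterated least squares on simulated data; it is a generic illustration under assumed noise levels, not one of the specific algorithms evaluated in the paper.

```python
# Attitude-independent magnetometer bias estimation (generic sketch).
# Uses |B_meas - b|^2 = |B_model|^2, i.e.
#   |B_meas|^2 - |B_model|^2 = 2 B_meas . b - |b|^2,
# and iterates a linear least-squares solve for b.
import numpy as np

rng = np.random.default_rng(0)
true_bias = np.array([120.0, -80.0, 45.0])            # nT

# Simulated true field vectors (attitude unknown) and noisy measurements.
B_true = rng.normal(scale=30000.0, size=(200, 3))
B_meas = B_true + true_bias + rng.normal(scale=20.0, size=(200, 3))
mag_model = np.linalg.norm(B_true, axis=1)            # |B| from a field model

b = np.zeros(3)
for _ in range(5):                                     # fixed-point iteration
    y = np.sum(B_meas**2, axis=1) - mag_model**2 + np.dot(b, b)
    A = 2.0 * B_meas
    b, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated bias (nT):", np.round(b, 1))          # close to true_bias
```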
Understanding Foster Youth Outcomes: Is Propensity Scoring Better than Traditional Methods?
ERIC Educational Resources Information Center
Berzin, Stephanie Cosner
2010-01-01
Objectives: This study seeks to examine the relationship between foster care and outcomes using multiple comparison methods to account for factors that put foster youth at risk independent of care. Methods: Using the National Longitudinal Survey of Youth 1997, matching, propensity scoring, and comparisons to the general population are used to…
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head to head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
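The "classic frequentist" adjusted indirect comparison referred to above is commonly the Bucher approach, in which two treatments are compared through a common comparator and the variances add. A minimal sketch with hypothetical log odds ratios follows; the numbers are invented for illustration.

```python
# Bucher-style adjusted indirect comparison of A vs B via common control C
# (hypothetical effect estimates on the log odds-ratio scale).
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Return the indirect A-vs-B estimate, its SE and a 95% CI."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

d_ab, se_ab, ci = indirect_comparison(d_ac=-0.60, se_ac=0.20,
                                      d_bc=-0.25, se_bc=0.25)
print(f"log OR (A vs B) = {d_ab:.2f}, SE = {se_ab:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```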
On assessing bioequivalence and interchangeability between generics based on indirect comparisons.
Zheng, Jiayin; Chow, Shein-Chung; Yuan, Mengdie
2017-08-30
As more and more generics become available in the marketplace, safety/efficacy concerns may arise as a result of the interchangeable use of approved generics. However, bioequivalence assessment for regulatory approval among generics of the innovative drug product is not required. In practice, approved generics are often used interchangeably without any mechanism of safety monitoring. In this article, based on indirect comparisons, we propose several methods for assessing bioequivalence and interchangeability between generics. The applicability of the methods and the similarity assumptions are discussed, as well as the inappropriateness of directly adopting adjusted indirect comparison in the field of generics comparison. In addition, some extensions are given to take into consideration important topics in clinical trials for bioequivalence assessment, for example, multiple comparisons and simultaneously testing bioequivalence among three generics. Extensive simulation studies were conducted to investigate the performance of the proposed methods. Studies of malaria generics and HIV/AIDS generics prequalified by the WHO were used as real examples to demonstrate the use of the methods. Copyright © 2017 John Wiley & Sons, Ltd.
Effectiveness Comparison of TxDOT Quality Control/Quality Assurance and Method Specifications
DOT National Transportation Integrated Search
1998-12-01
Original Report date: October 1997. This is the first and final report for research project 0-1721, "Effectiveness Comparison of TxDOT Quality Control/Quality Assurance and Method Specifications." This study was established and sponsored by TxDOT to ...
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. Effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm to complete the comparison. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
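Tikhonov regularization and the L-curve parameter choice mentioned above can be illustrated on a generic ill-posed linear inversion. The sketch below uses a simulated ill-conditioned operator, not an actual NAH propagator, and simply tabulates the L-curve coordinates over a parameter grid; it is not a reproduction of the NAH algorithms compared in the study.

```python
# Generic Tikhonov-regularized inversion with L-curve coordinates
# (simulated ill-conditioned problem, not an actual NAH propagator).
import numpy as np

rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -6, n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                  # ill-conditioned forward operator

x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(scale=1e-4, size=n)   # noisy "hologram" data

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

# L-curve: residual norm vs solution norm over a grid of parameters.
for lam in np.logspace(-8, 0, 9):
    x = tikhonov(A, b, lam)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.2e}  "
          f"solution norm={np.linalg.norm(x):.2e}")
# The corner of the (log residual, log solution norm) curve marks the
# L-curve choice of the regularization parameter.
```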
A scoping review of indirect comparison methods and applications using individual patient data.
Veroniki, Areti Angeliki; Straus, Sharon E; Soobiah, Charlene; Elliott, Meghan J; Tricco, Andrea C
2016-04-27
Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD) have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in health care literature, there is no guidance on selecting the appropriate methodology and on reporting the methods and results. In this paper we examine the methods and reporting of indirect comparison methods using IPD. We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions. We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks included 24 (73 %) IPD-NMAs and 9 (27 %) matching adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs, 13 (68 %) of which evaluated the prerequisite consistency assumption, and only 5 (38 %) of the 13 IPD-NMAs used statistical approaches. The median number of trials included per network was 10 (IQR 4-19) (IPD-NMA: 15 [IQR 8-20]; MAIC: 2 [IQR 3-5]), and the median number of IPD trials included in a network was 3 (IQR 1-9) (IPD-NMA: 6 [IQR 2-11]; MAIC: 2 [IQR 1-2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 used IPD as an informative prior, 1 unclear-stage), including either IPD alone or with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD). One in three indirect comparison methods modeling IPD adjusted results from different trials to estimate effects as if they had come from the same, randomized, population. Key methodological and reporting elements (e.g., evaluation of consistency, existence of study protocol) were often missing from an indirect comparison paper.
Detection of medication-related problems in hospital practice: a review
Manias, Elizabeth
2013-01-01
This review examines the effectiveness of detection methods in terms of their ability to identify and accurately determine medication-related problems in hospitals. A search was conducted of databases from inception to June 2012. The following keywords were used in combination: medication error or adverse drug event or adverse drug reaction, comparison, detection, hospital and method. Seven detection methods were considered: chart review, claims data review, computer monitoring, direct care observation, interviews, prospective data collection and incident reporting. Forty relevant studies were located. Detection methods that were better able to identify medication-related problems compared with other methods tested in the same study included chart review, computer monitoring, direct care observation and prospective data collection. However, only small numbers of studies were involved in comparisons with direct care observation (n = 5) and prospective data collection (n = 6). There was little focus on detecting medication-related problems during various stages of the medication process, and comparisons associated with the seriousness of medication-related problems were examined in 19 studies. Only 17 studies involved appropriate comparisons with a gold standard, which provided details about sensitivities and specificities. In view of the relatively low identification of medication-related problems with incident reporting, use of this method in tracking trends over time should be met with some scepticism. Greater attention should be placed on combining methods, such as chart review and computer monitoring in examining trends. More research is needed on the use of claims data, direct care observation, interviews and prospective data collection as detection methods. PMID:23194349
Comparison of Field Methods and Models to Estimate Mean Crown Diameter
William A. Bechtold; Manfred E. Mielke; Stanley J. Zarnoch
2002-01-01
The direct measurement of crown diameters with logger's tapes adds significantly to the cost of extensive forest inventories. We undertook a study of 100 trees to compare this measurement method to four alternatives: two field instruments, ocular estimates, and regression models. Using the taping method as the standard of comparison, accuracy of the tested...
A Comparison of the Bounded Derivative and the Normal Mode Initialization Methods Using Real Data
NASA Technical Reports Server (NTRS)
Semazzi, F. H. M.; Navon, I. M.
1985-01-01
Browning et al. (1980) proposed an initialization method called the bounded derivative method (BDI). They used analytical data to test the new method. Kasahara (1982) theoretically demonstrated the equivalence between BDI and the well-known nonlinear normal mode initialization method (NMI). The purposes of this study are to extend the application of BDI to real data and to compare it with NMI. The unbalanced initial state (UBD) consists of data for January 1979 at 00Z, which were interpolated from the adjacent sigma levels of the GLAS GCM to the 300 mb surface. The global barotropic model described by Takacs and Balgovind (1983) is used. Orographic forcing is explicitly included in the model. Many comparisons are performed between various quantities. However, we only present a comparison of the time evolution at two grid points A(50 S, 90 E) and B(10 S, 20 E), which represent low and middle latitude locations. To facilitate a more complete comparison, an initialization experiment based on the classical balance equation (CBE) was also included.
Amini, F; Kachuei, R; Noorbakhsh, F; Imani Fooladi, A A
2015-06-01
The aim of this study was the detection of Aspergillus species and Mycobacterium tuberculosis together in bronchoalveolar lavage (BAL) using multiplex PCR. From September 2012 until June 2013, 100 BAL specimens were collected from patients suspected of tuberculosis (TB). After direct examination and culture, multiplex PCR was used to diagnose Aspergillus species and M. tuberculosis. A manual phenol-chloroform method was used to extract DNA from these microorganisms. Aspergillus-specific primers, primers designed for M. tuberculosis, and beta-actin primers were used for the multiplex PCR. By the multiplex PCR method, Aspergillus species were identified in 12 samples (12%); the rates of positive samples by direct examination and culture were 11% and 10%, respectively. The sensitivity and specificity of this method in comparison to the direct test were 100% and 98.8%, respectively, and in comparison to culture they were 100% and 97.7%. In this assay, M. tuberculosis was identified in 8 samples (8%); Mycobacterium-positive samples by the molecular method, direct examination and culture were 6%, 5% and 7%, respectively. The sensitivity and specificity of the PCR method in comparison to the direct test were 80% and 97.8%, and in comparison to culture they were 71.4% and 98.9%. In the present study, the multiplex PCR method had higher sensitivity than direct examination and culture for detecting Aspergillus, but lower sensitivity for identifying M. tuberculosis, suggesting that the DNA extraction method was not suitable. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Histological Methods for ex vivo Axon Tracing: A Systematic Review
Heilingoetter, Cassandra L.; Jensen, Matthew B.
2016-01-01
Objectives Axon tracers provide crucial insight into the development, connectivity, and function of neural pathways. A tracer can be characterized as a substance that allows for the visualization of a neuronal pathway. Axon tracers have previously been used exclusively with in vivo studies; however, newer methods of axon tracing can be applied to ex vivo studies. Ex vivo studies involve the examination of cells or tissues retrieved from an organism. These post mortem methods of axon tracing offer several advantages, such as reaching inaccessible tissues and avoiding survival surgeries. Methods In order to evaluate the quality of the ex vivo tracing methods, we performed a systematic review of various experimental and comparison studies to discern the optimal method of axon tracing. Results The most prominent methods for ex vivo tracing involve enzymatic techniques or various dyes. It appears that there are a variety of techniques and conditions that tend to give better fluorescent character, clarity, and distance traveled in the neuronal pathway. We found direct comparison studies that looked at variables such as the type of tracer, time required, effect of temperature, and presence of calcium, however, there are other variables that have not been compared directly. Discussion We conclude there are a variety of promising tracing methods available depending on the experimental goals of the researcher, however, more direct comparison studies are needed to affirm the optimal method. PMID:27098542
Towards Extending Forward Kinematic Models on Hyper-Redundant Manipulator to Cooperative Bionic Arms
NASA Astrophysics Data System (ADS)
Singh, Inderjeet; Lakhal, Othman; Merzouki, Rochdi
2017-01-01
Forward kinematics is a stepping stone towards finding an inverse solution and subsequently a dynamic model of a robot. Hence a study and comparison of various Forward Kinematic Models (FKMs) is necessary for robot design. This paper deals with the comparison of three FKMs on the same hyper-redundant Compact Bionic Handling Assistant (CBHA) manipulator under the same conditions. The aim of this study is to inform the modeling of cooperative bionic manipulators. Two of these methods, the Arc Geometry HTM (Homogeneous Transformation Matrix) Method and the Dual Quaternion Method, are quantitative, while the third is a Hybrid Method that uses both quantitative and qualitative approaches. The methods are compared theoretically, and experimental results are discussed to add further insight to the comparison. HTM, the most widely used and accepted technique, is taken as the reference, and the trajectory deviations of the other techniques are compared with respect to it. The comparison indicates which method yields an accurate kinematic behavior of the CBHA when controlled in real time.
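The HTM approach amounts to composing one homogeneous transformation per bending section. The sketch below is a planar, constant-curvature simplification intended only to show the composition step; the CBHA model itself is three-dimensional and more involved, and the curvatures and lengths here are arbitrary.

```python
# Planar constant-curvature sketch of forward kinematics by composing
# homogeneous transformation matrices (HTM), one per bending section.
# (The CBHA model is three-dimensional; this is only an illustration.)
import numpy as np

def arc_htm(curvature: float, length: float) -> np.ndarray:
    """2D homogeneous transform across one constant-curvature arc."""
    theta = curvature * length
    if abs(curvature) < 1e-9:                      # straight section
        dx, dy = length, 0.0
    else:
        dx = np.sin(theta) / curvature
        dy = (1.0 - np.cos(theta)) / curvature
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# Tip pose of a two-section arm: compose the section transforms in order.
sections = [(1.5, 0.2), (-2.0, 0.15)]              # (curvature 1/m, length m)
T = np.eye(3)
for kappa, L in sections:
    T = T @ arc_htm(kappa, L)
print("tip position:", T[:2, 2], " tip orientation (rad):",
      np.arctan2(T[1, 0], T[0, 0]))
```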
NASA Technical Reports Server (NTRS)
Beck, Benjamin; Schiller, Noah
2013-01-01
This paper outlines a direct, experimental comparison between two established active vibration control techniques. Active vibration control methods, many of which rely upon piezoelectric patches as actuators and/or sensors, have been widely studied, showing many advantages over passive techniques. However, few direct comparisons between different active vibration control methods have been made to determine the performance benefit of one method over another. For the comparison here, the first control method, velocity feedback, is implemented using four accelerometers that act as sensors along with an analog control circuit which drives a piezoelectric actuator. The second method, negative capacitance shunt damping, consists of a basic analog circuit which utilizes a single piezoelectric patch as both a sensor and actuator. Both of these control methods are implemented individually using the same piezoelectric actuator attached to a clamped Plexiglas window. To assess the performance of each control method, the spatially averaged velocity of the window is compared to an uncontrolled response.
ERIC Educational Resources Information Center
Sendur, Gulten
2014-01-01
The aim of this study is to determine prospective chemistry teachers' creative comparisons about the basic concepts of inter- and intramolecular forces, and to uncover the relationship between these creative comparisons and prospective teachers' conceptual understanding. Based on a phenomenological research method, this study was conducted with…
Histological methods for ex vivo axon tracing: A systematic review.
Heilingoetter, Cassandra L; Jensen, Matthew B
2016-07-01
Axon tracers provide crucial insight into the development, connectivity, and function of neural pathways. A tracer can be characterized as a substance that allows for the visualization of a neuronal pathway. Axon tracers have previously been used exclusively with in vivo studies; however, newer methods of axon tracing can be applied to ex vivo studies. Ex vivo studies involve the examination of cells or tissues retrieved from an organism. These post mortem methods of axon tracing offer several advantages, such as reaching inaccessible tissues and avoiding survival surgeries. In order to evaluate the quality of the ex vivo tracing methods, we performed a systematic review of various experimental and comparison studies to discern the optimal method of axon tracing. The most prominent methods for ex vivo tracing involve enzymatic techniques or various dyes. It appears that there are a variety of techniques and conditions that tend to give better fluorescent character, clarity, and distance traveled in the neuronal pathway. We found direct comparison studies that looked at variables such as the type of tracer, time required, effect of temperature, and presence of calcium, however, there are other variables that have not been compared directly. We conclude there are a variety of promising tracing methods available depending on the experimental goals of the researcher, however, more direct comparison studies are needed to affirm the optimal method.
NASA Astrophysics Data System (ADS)
Field, S. N.; Glassom, D.; Bythell, J.
2007-06-01
The choice of substrata and the methods of deployment in analyses of settlement in benthic communities are often driven by the cost of materials and their local availability, and comparisons are often made between studies using different methodologies. The effects of varying artificial substratum, size of replicates and method of deployment were determined on a shallow reef in Eilat, Israel, while the effect of size of replicates was also investigated on a shallow reef in Sharm El Sheikh, Egypt. When statistical power was high enough, that is, when sufficient numbers of settlers were recorded, significant differences were found between materials used, tile size and methods of deployment. Significant differences were detected in total coral settlement rates and for the two dominant taxonomic groups, acroporids and pocilloporids. Standardisation of tile materials, dimensions, and method of deployment is needed for comparison between coral and other epibiont settlement studies. However, a greater understanding of the effects of these experimental variables on settlement processes may enable retrospective comparisons between studies utilising a range of materials and methods.
Indirect scaling methods for testing quantitative emotion theories.
Junge, Martin; Reisenzein, Rainer
2013-01-01
Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.
Treatment of transverse patellar fractures: a comparison between metallic and non-metallic implants.
Heusinkveld, Maarten H G; den Hamer, Anniek; Traa, Willeke A; Oomen, Pim J A; Maffulli, Nicola
2013-01-01
Several methods of transverse patellar fixation have been described. This study compares the clinical outcome and the occurrence of complications of various fixation methods. The databases PubMed, Web of Science, Science Direct, Google Scholar and Google were searched. A direct comparison between fixation techniques using mixed or non-metallic implants and metallic K-wire and tension band fixation shows no significant difference in clinical outcome between the two groups. Additionally, studies reporting novel operation techniques show good clinical results. Studies describing the treatment of patients using non-metallic or mixed implants are fewer than those using metallic fixation. A large variety of clinical scoring systems were used for assessing the results of treatment, which makes direct comparison difficult. More data on fracture treatment using non-metallic or mixed implants are needed to achieve a more balanced comparison.
Agreement of Tracing and Direct Viewing Techniques for Cervical Vertebral Maturation Assessment.
Wiwatworakul, Opas; Manosudprasit, Montian; Pisek, Poonsak; Chatrchaiwiwatana, Supaporn; Wangsrimongkol, Tasanee
2015-08-01
This study aimed to evaluate agreement among three methods for cervical vertebral maturation (CVM) assessment, comprising direct viewing, tracing only, and tracing with digitized points. Two examiners received training and tests of reliability with each CVM method before evaluation of agreement among methods. The subjects were 96 female-cleft lateral cephalometric radiographs (films of eight subjects at each age from seven to 18 years). The examiners interpreted CVM stages of the subjects with a four-week interval between uses of each method. The ranges of weighted kappa values for paired comparisons among the three methods were: 0.96-0.98 for the direct viewing and tracing only comparison; 0.93-0.94 for the direct viewing and tracing with digitized points comparison; and 0.96-0.97 for the tracing only and tracing with digitized points comparison. The intraclass correlation coefficient (ICC) value among the three methods was 0.95. These results indicated very good agreement among methods. Use of direct viewing is suitable for CVM assessment without spending additional time on tracing. However, the three methods might be used interchangeably.
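Weighted kappa for paired ordinal ratings such as CVM stages is available in standard libraries. A minimal sketch with hypothetical stage assignments follows; quadratic weighting from scikit-learn is assumed here, which may differ from the weighting scheme actually used in the study.

```python
# Weighted kappa between two methods assigning ordinal CVM stages
# (hypothetical stage assignments, quadratic weights).
from sklearn.metrics import cohen_kappa_score

stages_direct  = [1, 2, 2, 3, 4, 5, 5, 6, 3, 4]   # direct viewing
stages_tracing = [1, 2, 3, 3, 4, 5, 6, 6, 3, 4]   # tracing only

kappa = cohen_kappa_score(stages_direct, stages_tracing, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```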
Issues in benchmarking human reliability analysis methods : a literature review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
Headspace profiling of cocaine samples for intelligence purposes.
Dujourdy, Laurence; Besacier, Fabrice
2008-08-06
A method for the determination of residual solvents in illicit cocaine hydrochloride samples using static headspace-gas chromatography (HS-GC) combined with a computerized storage procedure is described for the profiling and comparison of seizures. The system involves a gas chromatographic separation of 18 occluded solvents followed by fully automatic data analysis and transfer to a PHP/MySQL database. First, a fractional factorial design was used to evaluate the main effects of some critical method parameters (salt choice, vial agitation intensity, oven temperature, pressurization and loop equilibration) on the results with a minimum of experiments. The method was then validated for tactical intelligence purposes (batch comparison) via several studies: selection of solvents and of a mathematical comparison tool, reproducibility, and "cutting" influence studies. The decision threshold to determine the similarity of two samples was set and false positives and negatives evaluated. Finally, application of the method to distinguish geographical origins is discussed.
NASA Astrophysics Data System (ADS)
Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia
2017-11-01
GIS (gas insulated switchgear) is an important piece of equipment in power systems. Partial discharge detection plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used for partial discharge (PD) detection in GIS. However, few studies have compared these two methods. From the viewpoint of safety, it is necessary to investigate the UHF and ultrasonic methods for partial discharge in GIS. This paper presents a study aimed at clarifying the performance of the UHF and ultrasonic methods for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF and ultrasonic methods. A new PD detection method for GIS, combining the UHF and ultrasonic methods, is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of fault localization.
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better predictions for long-term forecasting with limited data sources, but cannot produce better predictions for a time series with a narrow range from one point to another, as in the exchange rate series. On the contrary, the Exponential Smoothing Method produces better forecasts for the exchange rate series, which has a narrow range from one point to another, while it cannot produce better predictions for a longer forecasting period.
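A comparison of this kind can be set up with standard time-series tooling: fit both models on a training window, forecast a holdout, and score with MSE, MAPE, and MAD. The sketch below uses a synthetic series and statsmodels; the series, model order, and trend choice are assumptions for illustration, not the study's data or specifications.

```python
# ARIMA vs exponential smoothing on a synthetic series, scored by
# MSE, MAPE and MAD on a holdout window (illustrative only).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
y = 100 + np.cumsum(rng.normal(0.5, 2.0, size=120))   # trending series
train, test = y[:108], y[108:]
h = len(test)

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(h)
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(h)

def scores(actual, forecast):
    err = actual - forecast
    mse = np.mean(err**2)
    mape = np.mean(np.abs(err / actual)) * 100
    mad = np.mean(np.abs(err))
    return mse, mape, mad

for name, fc in [("ARIMA(1,1,1)", arima_fc), ("Exp. smoothing", es_fc)]:
    mse, mape, mad = scores(test, fc)
    print(f"{name}: MSE={mse:.2f}  MAPE={mape:.2f}%  MAD={mad:.2f}")
```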
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
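As a rough illustration of why this is difficult, the log-determinant of the plain sample covariance can be compared with that of a shrinkage estimator such as Ledoit-Wolf when the dimension approaches the sample size. This sketch is a generic demonstration, not one of the eight estimators compared in the paper.

```python
# Log-determinant of the sample covariance vs a Ledoit-Wolf shrinkage
# estimate in a high-dimensional setting (illustrative comparison only).
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 60, 50                              # sample size close to dimension
X = rng.normal(size=(n, p))                # true covariance = identity

sample_cov = np.cov(X, rowvar=False)
lw_cov = LedoitWolf().fit(X).covariance_

for name, cov in [("sample", sample_cov), ("Ledoit-Wolf", lw_cov)]:
    sign, logdet = np.linalg.slogdet(cov)
    print(f"{name:12s} log-determinant = {logdet:8.2f}")
# The true log-determinant is 0; the sample covariance is strongly biased
# downward, while shrinkage moves the estimate closer to the truth.
```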
Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.
Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip
2017-06-01
Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.
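The core of the simulation logic, comparing a naive group difference with a regression-adjusted estimate when a baseline covariate drives selection, can be sketched as follows. The data-generating process is hypothetical and far simpler than the paper's simulations.

```python
# Toy simulation: naive vs regression-adjusted estimates of a program
# effect when a baseline covariate drives selection (hypothetical DGP).
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)                               # baseline covariate
treat = (x + rng.normal(size=n) > 0).astype(float)   # selection depends on x
true_effect = 0.30
y = 0.5 * x + true_effect * treat + rng.normal(size=n)

# Naive comparison of group means (biased upward here).
naive = y[treat == 1].mean() - y[treat == 0].mean()

# Regression adjustment: OLS of y on treatment and the covariate.
X = np.column_stack([np.ones(n), treat, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"true effect = {true_effect:.2f}")
print(f"naive difference = {naive:.2f}")
print(f"regression-adjusted estimate = {beta[1]:.2f}")
```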
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics
Comparing Performances of Multiple Comparison Methods in Commonly Used 2 × C Contingency Tables.
Cangur, Sengul; Ankarali, Handan; Pasin, Ozge
2016-12-01
This study briefly describes multiple comparison methods for contingency tables, such as Bonferroni, Holm-Bonferroni, Hochberg, Hommel, Marascuilo, Tukey, Benjamini-Hochberg and Gavrilov-Benjamini-Sarkar, illustrates them with data obtained from a medical study, and examines their performance in a simulation study comprising a total of 36 scenarios for a 2 × 4 contingency table. The simulations showed that when the sample size is more than 100, the methods that preserve the nominal alpha level are Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. The Marascuilo method was found to be more conservative than Bonferroni, and the Type I error rate of the Hommel method was around 2 % in all scenarios. Moreover, when the proportions of three populations are equal and the proportion of the fourth population lies ±3 standard deviations away from the others, the power of the unadjusted all-pairwise comparison approach is only slightly higher than that obtained by Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. Consequently, the Gavrilov-Benjamini-Sarkar and Holm-Bonferroni methods have the best performance according to the simulation. The Hommel and Marascuilo methods are not recommended because of their medium or lower performance. In addition, we have written a Minitab macro for these multiple comparisons for use in scientific research.
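Several of the corrections compared here are available off the shelf. The sketch below applies Bonferroni, Holm, Hochberg, Hommel, and Benjamini-Hochberg adjustments to a hypothetical set of p-values using statsmodels; it does not reproduce the study's Minitab macro.

```python
# Common multiple-comparison adjustments applied to hypothetical p-values.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.034, 0.046, 0.21, 0.58]
for method in ["bonferroni", "holm", "simes-hochberg", "hommel", "fdr_bh"]:
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:15s} adjusted p = {[round(p, 3) for p in p_adj]}")
```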
A comparison of automated crater detection methods
NASA Astrophysics Data System (ADS)
Bandeira, L.; Barreira, C.; Pina, P.; Saraiva, J.
2008-09-01
This work presents early results of a comparison between some common methodologies for automated crater detection. The three procedures considered were applied to images of the surface of Mars, thus illustrating some pros and cons of their use. We aim to establish the clear advantages of using this type of method in the study of planetary surfaces.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Star, Jon R.
2007-01-01
Encouraging students to share and compare solution methods is a key component of reform efforts in mathematics, and comparison is emerging as a fundamental learning mechanism. To experimentally evaluate the effects of comparison for mathematics learning, the authors randomly assigned 70 seventh-grade students to learn about algebra equation…
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Recent archaeomagnetic studies in Slovakia: Comparison of methodological approaches
NASA Astrophysics Data System (ADS)
Kubišová, Lenka
2016-03-01
We review the recent archaeomagnetic studies carried out on the territory of Slovakia, focusing on the comparison of methodological approaches and discussing the pros and cons of the individual applied methods from the perspective of our experience. The most widely used methods for the determination of the intensity and direction of the archaeomagnetic field by demagnetisation of the sample material are alternating field (AF) demagnetisation and the Thellier double heating method. These methods are used not only for archaeomagnetic studies but also help to solve some geological problems. The two methods were applied to samples collected recently at several sites in Slovakia, where archaeological prospection prompted by earthwork or reconstruction work on development projects demanded archaeomagnetic dating. We then discuss the advantages and weaknesses of the investigated methods from different perspectives, based on several examples and our recent experience.
NASA Astrophysics Data System (ADS)
Nakhostin, M.
2015-10-01
In this paper, we have compared the performances of the digital zero-crossing and charge-comparison methods for n/γ discrimination with liquid scintillation detectors at low light outputs. The measurements were performed with a 2″×2″ cylindrical liquid scintillation detector of type BC501A whose outputs were sampled by means of a fast waveform digitizer with 10-bit resolution, 4 GS/s sampling rate and one volt input range. Different light output ranges were measured by operating the photomultiplier tube at different voltages and a new recursive algorithm was developed to implement the digital zero-crossing method. The results of our study demonstrate the superior performance of the digital zero-crossing method at low light outputs when a large dynamic range is measured. However, when the input range of the digitizer is used to measure a narrow range of light outputs, the charge-comparison method slightly outperforms the zero-crossing method. The results are discussed in regard to the effects of the quantization noise and the noise filtration performance of the zero-crossing filter.
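A minimal sketch of the charge-comparison idea referred to above is given below: the discrimination parameter is the tail charge divided by the total charge of a digitized pulse. The pulse model, gate lengths, and sampling rate are illustrative assumptions, not the detector settings of the study.

```python
# Minimal sketch of the charge-comparison figure on a digitized pulse;
# window lengths and the toy pulse are illustrative assumptions.
import numpy as np

def charge_comparison_ratio(pulse, fs_ghz=4.0, tail_start_ns=30.0, gate_ns=300.0):
    """Return tail-to-total charge ratio for one baseline-subtracted pulse."""
    peak = int(np.argmax(pulse))
    total_gate = peak + int(gate_ns * fs_ghz)
    tail_gate = peak + int(tail_start_ns * fs_ghz)
    total = pulse[peak:total_gate].sum()
    tail = pulse[tail_gate:total_gate].sum()
    return tail / total if total > 0 else np.nan

# Toy two-exponential scintillation pulse (fast + slow component), 4 GS/s sampling.
t = np.arange(0, 400, 0.25)                       # ns
pulse = np.exp(-t / 5.0) + 0.1 * np.exp(-t / 80.0)
print(round(charge_comparison_ratio(pulse), 3))
```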
Fantin, Valentina; Scalbi, Simona; Ottaviano, Giuseppe; Masoni, Paolo
2014-04-01
The purpose of this study is to propose a method for harmonising Life Cycle Assessment (LCA) literature studies on the same product, or on different products fulfilling the same function, for a reliable and meaningful comparison of their life-cycle environmental impacts. The method is divided into six main steps which aim to rationalise and streamline the effort needed to carry out the comparison. The steps are: 1) a clear definition of the goal and scope of the review; 2) critical review of the references; 3) identification of significant parameters that have to be harmonised; 4) harmonisation of the parameters; 5) statistical analysis to support the comparison; 6) results and discussion. This approach was then applied to a comparative analysis of the published LCA studies on tap and bottled water production, focussing on Global Warming Potential (GWP) results, with the aim of identifying the environmentally preferable alternative. A statistical analysis with Wilcoxon's test confirmed that the difference between harmonised GWP values of tap and bottled water was significant. The comparison of the harmonised mean GWP results showed that tap water always has the better environmental performance, even in the case of highly energy-consuming technologies for drinking water treatment. The strength of the method is that it enables both a deep analysis of the LCA literature and more consistent comparisons across the published LCAs. For these reasons, it can be a valuable tool providing useful information for both practitioners and decision makers. Finally, its application to the case study allowed both a description of system variability and an evaluation of the importance of several key parameters for tap and bottled water production. The comparative review of LCA studies, with the inclusion of a statistical decision test, can validate and strengthen the final statements of the comparison. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Tipton, Elizabeth
2014-01-01
Replication studies allow for making comparisons and generalizations regarding the effectiveness of an intervention across different populations, versions of a treatment, settings and contexts, and outcomes. One method for making these comparisons across many replication studies is through the use of meta-analysis. A recent innovation in…
Unbiased Causal Inference from an Observational Study: Results of a Within-Study Comparison
ERIC Educational Resources Information Center
Pohl, Steffi; Steiner, Peter M.; Eisermann, Jens; Soellner, Renate; Cook, Thomas D.
2009-01-01
Adjustment methods such as propensity scores and analysis of covariance are often used for estimating treatment effects in nonexperimental data. Shadish, Clark, and Steiner used a within-study comparison to test how well these adjustments work in practice. They randomly assigned participating students to a randomized or nonrandomized experiment.…
Comparison of Quantitative Antifungal Testing Methods for Textile Fabrics.
Imoto, Yasuo; Seino, Satoshi; Nakagawa, Takashi; Yamamoto, Takao A
2017-01-01
Quantitative antifungal testing methods for textile fabrics under growth-supportive conditions were studied. Fungal growth activities on unfinished textile fabrics and textile fabrics modified with Ag nanoparticles were investigated using the colony counting method and the luminescence method. Morphological changes of the fungi during incubation were investigated by microscopic observation. Comparison of the results indicated that the fungal growth activity values obtained with the colony counting method depended on the morphological state of the fungi on textile fabrics, whereas those obtained with the luminescence method did not. Our findings indicated that unique characteristics of each testing method must be taken into account for the proper evaluation of antifungal activity.
A comparison of above-ground dry-biomass estimators for trees in the Northeastern United States
James A. Westfall
2012-01-01
In the northeastern United States, both component and total aboveground tree dry-biomass estimates are available from several sources. In this study, comparisons were made among four methods to promote understanding of the similarities and differences in live-tree biomass estimators. The methods use various equations developed from biomass data collected in the United...
Patricia L. Faulkner; Michele M. Schoeneberger; Kim H. Ludovici
1993-01-01
Foliar tissue was collected from a field study designed to test impacts of atmospheric pollutants on loblolIy pine (Pinus taeda L.) seedlings. Standard enzymatic (ENZ) and high performance liquid chromatography (HPLC) methods were used to analyze the tissue for soluble sugars. A comparison of the methods revealed no significant diffennces in accuracy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovacik, Meric A.; Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu; Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854
2013-09-15
Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.
Geometric facial comparisons in speed-check photographs.
Buck, Ursula; Naether, Silvio; Kreutz, Kerstin; Thali, Michael
2011-11-01
In many cases it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible. Therefore, only an imprecise comparison of the individual facial features is possible, and the geometry and position of each facial feature, for example the distances between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images, or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a dedicated study described in this article.
Advantages of high-dose rate (HDR) brachytherapy in treatment of prostate cancer
NASA Astrophysics Data System (ADS)
Molokov, A. A.; Vanina, E. A.; Tseluyko, S. S.
2017-09-01
Brachytherapy is one of the modern organ-preserving methods of radiation treatment. This article analyzes the results of prostate brachytherapy. These studies of the advantages of high-dose-rate brachytherapy lead to the conclusion that this method of radiation treatment for prostate cancer compares favourably with external-beam (remote) irradiation methods, and that it is competitive with, while more organ-preserving than, surgical methods of treatment. The use of polyfocal transperineal biopsy during the brachytherapy session provides information on the volumetric spread of prostate cancer and allows the dosimetry plan to be adjusted in light of the data obtained.
Statistics attack on `quantum private comparison with a malicious third party' and its improvement
NASA Astrophysics Data System (ADS)
Gu, Jun; Ho, Chih-Yung; Hwang, Tzonelih
2018-02-01
Recently, Sun et al. (Quantum Inf Process 14:2125-2133, 2015) proposed a quantum private comparison protocol allowing two participants to compare the equality of their secrets via a malicious third party (TP). They designed an interesting trap comparison method to prevent the TP from learning the final comparison result. However, this study shows that the malicious TP can use a statistics attack to reveal the comparison result. A simple modification is hence proposed to solve this problem.
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, analysed with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
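The core of the normal distributional approach mentioned above can be sketched in a few lines: a proportion below a clinical cut-point is derived from the estimated mean and standard deviation rather than from counting cases. The means, SD, and cut-point below are invented for illustration.

```python
# Sketch of the normal distributional approach: derive a proportion below a
# cut-point from the estimated mean and SD instead of counting cases.
from scipy.stats import norm

def distributional_proportion(mean, sd, cutpoint):
    """Estimated proportion of outcomes below the cut-point under normality."""
    return norm.cdf((cutpoint - mean) / sd)

# e.g. proportion of low birthweight (< 2500 g) in two hypothetical groups
p_control = distributional_proportion(mean=3400, sd=550, cutpoint=2500)
p_exposed = distributional_proportion(mean=3250, sd=550, cutpoint=2500)
print(round(p_control, 3), round(p_exposed, 3), round(p_exposed - p_control, 3))
```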
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied, reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
Effects of rotor model degradation on the accuracy of rotorcraft real time simulation
NASA Technical Reports Server (NTRS)
Houck, J. A.; Bowles, R. L.
1976-01-01
The effects of degrading a rotating blade element rotor mathematical model to meet various real-time simulation requirements of rotorcraft are studied. Three methods of degradation were studied: reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments, (2) reduction of number of blades, and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which the rotating blade element rotor mathematical model should not be used.
Rapid comparison of properties on protein surface
Sael, Lee; La, David; Li, Bin; Rustamov, Raif; Kihara, Daisuke
2008-01-01
The mapping of physicochemical characteristics onto the surface of a protein provides crucial insights into its function and evolution. This information can be further used in the characterization and identification of similarities within protein surface regions. We propose a novel method which quantitatively compares global and local properties on the protein surface. We have tested the method on the comparison of electrostatic potentials and hydrophobicity. The method is based on 3D Zernike descriptors, which provide a compact representation of a given property defined on a protein surface. Compactness and rotational invariance of this descriptor enable fast comparison suitable for database searches. The usefulness of this method is exemplified by studying several protein families including globins, thermophilic and mesophilic proteins, and active sites of TIM β/α barrel proteins. In all the cases studied, the descriptor is able to cluster proteins into functionally relevant groups. The proposed approach can also be easily extended to other surface properties. This protein surface-based approach will add a new way of viewing and comparing proteins to conventional methods, which compare proteins in terms of their primary sequence or tertiary structure. PMID:18618695
Rapid comparison of properties on protein surface.
Sael, Lee; La, David; Li, Bin; Rustamov, Raif; Kihara, Daisuke
2008-10-01
The mapping of physicochemical characteristics onto the surface of a protein provides crucial insights into its function and evolution. This information can be further used in the characterization and identification of similarities within protein surface regions. We propose a novel method which quantitatively compares global and local properties on the protein surface. We have tested the method on the comparison of electrostatic potentials and hydrophobicity. The method is based on 3D Zernike descriptors, which provide a compact representation of a given property defined on a protein surface. Compactness and rotational invariance of this descriptor enable fast comparison suitable for database searches. The usefulness of this method is exemplified by studying several protein families including globins, thermophilic and mesophilic proteins, and active sites of TIM beta/alpha barrel proteins. In all the cases studied, the descriptor is able to cluster proteins into functionally relevant groups. The proposed approach can also be easily extended to other surface properties. This protein surface-based approach will add a new way of viewing and comparing proteins to conventional methods, which compare proteins in terms of their primary sequence or tertiary structure.
Comparative analysis of methods for detecting interacting loci
2011-01-01
Background: Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results: We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion: This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295
Comparative analysis of methods for detecting interacting loci.
Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue
2011-07-05
Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
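As an illustration of the simplest method in this comparison, the baseline logistic regression with an interaction term (LRIT), the sketch below fits such a model to simulated genotype data; the SNP coding, allele frequency, and effect sizes are assumptions made for the example, not the simulation models of the study.

```python
# Sketch of the LRIT baseline on simulated genotype data (minor-allele counts).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
snp1 = rng.binomial(2, 0.3, n)          # minor-allele counts, MAF = 0.3
snp2 = rng.binomial(2, 0.3, n)
logit = -1.0 + 0.1 * snp1 + 0.1 * snp2 + 0.5 * snp1 * snp2   # interaction effect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.pvalues[-1])                  # p-value of the interaction term
```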
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is given by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
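The simulation logic described above can be sketched as follows: for a chosen sample size and effect size, the Type I and Type II error rates of a two-sample t-test are estimated by repeated sampling. The effect size, number of trials, and alpha level are illustrative choices, not the exact settings of the study.

```python
# Sketch of the simulation idea: estimate Type I and Type II error of a two-sample
# t-test for a given sample size and effect size.
import numpy as np
from scipy.stats import ttest_ind

def error_rates(n, effect, trials=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    type1 = type2 = 0
    for _ in range(trials):
        a = rng.normal(0, 1, n)
        b_null = rng.normal(0, 1, n)          # no real effect
        b_alt = rng.normal(effect, 1, n)      # true effect present
        if ttest_ind(a, b_null).pvalue < alpha:
            type1 += 1
        if ttest_ind(a, b_alt).pvalue >= alpha:
            type2 += 1
    return type1 / trials, type2 / trials

for n in (3, 6, 9):
    print(n, error_rates(n, effect=1.0))
```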
Acceleration of Binding Site Comparisons by Graph Partitioning.
Krotzky, Timo; Klebe, Gerhard
2015-08-01
The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative that further accelerates the LC method by partitioning the binding-site graphs into disjoint components prior to their comparison. The pseudocenter sets are split according to their assigned physicochemical type, which leads to seven much smaller graphs than the original one. Applying this approach to the same test scenarios as before results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
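A toy sketch of the partitioning step is shown below: the binding-site graph is split into one induced subgraph per physicochemical pseudocenter type before any matching is attempted. The node attributes and edges are invented and do not reflect the Cavbase data model.

```python
# Sketch of partitioning a binding-site graph by pseudocenter type.
import networkx as nx

g = nx.Graph()
g.add_nodes_from([
    (1, {"ptype": "donor"}), (2, {"ptype": "acceptor"}),
    (3, {"ptype": "donor"}), (4, {"ptype": "aromatic"}),
])
g.add_edges_from([(1, 2), (1, 3), (2, 4)])

def partition_by_type(graph):
    """Return one induced subgraph per physicochemical pseudocenter type."""
    types = {data["ptype"] for _, data in graph.nodes(data=True)}
    return {t: graph.subgraph(n for n, d in graph.nodes(data=True)
                              if d["ptype"] == t).copy() for t in types}

parts = partition_by_type(g)
print({t: sub.number_of_nodes() for t, sub in parts.items()})
```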
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are the ML methods. 12 simulation experiments are performed, while each of them uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics, that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
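A scaled-down sketch of the experimental setup is given below: an AR(1) series of 310 points is simulated, the last 10 points are held out, and a stochastic model is compared with an ML regressor trained on lagged values. The model orders, number of lags, and error metric are illustrative simplifications of the study's much larger design.

```python
# Toy forecasting comparison: ARMA-type model vs Random Forest on lagged values.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n, phi = 310, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
fit_set, test_set = x[:300], x[300:]

arma_fc = ARIMA(fit_set, order=(1, 0, 0)).fit().forecast(steps=10)

lags = 5                                   # RF on lagged values, recursive multi-step
X = np.array([fit_set[i:i + lags] for i in range(300 - lags)])
y = fit_set[lags:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
hist, rf_fc = list(fit_set[-lags:]), []
for _ in range(10):
    pred = rf.predict(np.array(hist[-lags:]).reshape(1, -1))[0]
    rf_fc.append(pred)
    hist.append(pred)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test_set) ** 2))
print(f"ARMA RMSE: {rmse(arma_fc):.3f}  RF RMSE: {rmse(rf_fc):.3f}")
```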
Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods
Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey
2010-01-01
Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...
A Comparison of Two Methods for Recruiting Children with an Intellectual Disability
ERIC Educational Resources Information Center
Adams, Dawn; Handley, Louise; Heald, Mary; Simkiss, Doug; Jones, Alison; Walls, Emily; Oliver, Chris
2017-01-01
Background: Recruitment is a widely cited barrier of representative intellectual disability research, yet it is rarely studied. This study aims to document the rates of recruiting children with intellectual disabilities using two methods and discuss the impact of such methods on sample characteristics. Methods: Questionnaire completion rates are…
ERIC Educational Resources Information Center
Kamarova, Sviatlana; Chatzisarantis, Nikos L. D.; Hagger, Martin S.; Lintunen, Taru; Hassandra, Mary; Papaioannou, Athanasios
2017-01-01
Background: Previous prospective studies have documented that mastery-approach goals are adaptive because they facilitate less negative psychological responses to unfavourable social comparisons than performance-approach goals. Aims: This study aimed to confirm this so-called "mastery goal advantage" effect experimentally. Methods: A…
Smid, Marcel; Coebergh van den Braak, Robert R J; van de Werken, Harmen J G; van Riet, Job; van Galen, Anne; de Weerd, Vanja; van der Vlugt-Daane, Michelle; Bril, Sandra I; Lalmahomed, Zarina S; Kloosterman, Wigard P; Wilting, Saskia M; Foekens, John A; IJzermans, Jan N M; Martens, John W M; Sieuwerts, Anieta M
2018-06-22
Current normalization methods for RNA-sequencing data allow either for intersample comparison to identify differentially expressed (DE) genes or for intrasample comparison for the discovery and validation of gene signatures. Most studies on optimization of normalization methods typically use simulated data to validate methodologies. We describe a new method, GeTMM, which allows for both inter- and intrasample analyses with the same normalized data set. We used actual (i.e. not simulated) RNA-seq data from 263 colon cancers (no biological replicates) and used the same read count data to compare GeTMM with the most commonly used normalization methods (i.e. TMM (used by edgeR), RLE (used by DESeq2) and TPM) with respect to distributions, effect of RNA quality, subtype-classification, recurrence score, recall of DE genes and correlation to RT-qPCR data. We observed a clear benefit for GeTMM and TPM with regard to intrasample comparison while GeTMM performed similar to TMM and RLE normalized data in intersample comparisons. Regarding DE genes, recall was found comparable among the normalization methods, while GeTMM showed the lowest number of false-positive DE genes. Remarkably, we observed limited detrimental effects in samples with low RNA quality. We show that GeTMM outperforms established methods with regard to intrasample comparison while performing equivalent with regard to intersample normalization using the same normalized data. These combined properties enhance the general usefulness of RNA-seq but also the comparability to the many array-based gene expression data in the public domain.
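The intrasample length correction that both TPM and, as a first step, GeTMM rely on can be sketched as below; the counts and gene lengths are toy values, and the subsequent TMM scaling that GeTMM applies via edgeR is not reproduced here.

```python
# Sketch of the length correction shared by TPM and GeTMM: convert raw counts to
# reads per kilobase (RPK), then scale per-million within each sample (TPM).
# In GeTMM the RPK matrix would instead be passed to TMM (edgeR) scaling.
import numpy as np

counts = np.array([[500, 1000, 50],     # genes x samples (toy values)
                   [300,  600, 40],
                   [200,  400, 10]], dtype=float)
lengths_kb = np.array([2.0, 1.5, 0.5])  # gene lengths in kilobases

rpk = counts / lengths_kb[:, None]                 # length-corrected counts
tpm = rpk / rpk.sum(axis=0) * 1e6                  # per-million scaling per sample
print(np.round(tpm, 1))
```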
2004-03-01
Qualitative research designs include narratives, phenomenologies, ethnographies, grounded theory studies and case studies, together with sequential, concurrent and transformative mixed-methods designs (Creswell, 2003:18). The methods used in qualitative study provide the framework for... Grounded theory provides a structured...
A comparative study of different methods for calculating electronic transition rates
NASA Astrophysics Data System (ADS)
Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan
2018-03-01
We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
Comparative study on the efficiency of some optical methods for artwork diagnostics
NASA Astrophysics Data System (ADS)
Schirripa Spagnolo, Giuseppe; Ambrosini, Dario; Paoletti, Domenica
2001-10-01
Scientific investigation methods are finding their place alongside stylistic-historical methods in art research. In particular, optical techniques, transferred from other fields or developed ad hoc, can make a strong contribution to the safeguarding and exploitation of cultural heritage. This paper describes the use of different optical techniques, such as holographic interferometry, decorrelation, shearography and ESPI, in the diagnostics of works of art. A comparison between the different methods is obtained by performing tests on specially designed models, prepared using typical techniques and materials. Inside the model structure, a number of defects of known type, form and extent were inserted. The distinctive features of each technique are outlined and a comparison with IR thermography is also carried out.
Ishwar Dhami; Jinyang. Deng
2012-01-01
Many previous studies have examined ecotourism primarily from the perspective of tourists while largely ignoring ecotourism destinations. This study used geographical information system (GIS) and pairwise comparison to identify forest-based ecotourism areas in Pocahontas County, West Virginia. The study adopted the criteria and scores developed by Boyd and Butler (1994...
Recent advances in the sequencing of relevant water intrusion fungi by the EPA, combined with the development of probes and primers, have allowed for the unequivocal quantitative and qualitative identification of fungi in selected matrices.
In this pilot study, quantitative...
A comparison of methods for teaching receptive language to toddlers with autism.
Vedora, Joseph; Grandelski, Katrina
2015-01-01
The use of a simple-conditional discrimination training procedure, in which stimuli are initially taught in isolation with no other comparison stimuli, is common in early intensive behavioral intervention programs. Researchers have suggested that this procedure may encourage the development of faulty stimulus control during training. The current study replicated previous work that compared the simple-conditional and the conditional-only methods to teach receptive labeling of pictures to young children with autism spectrum disorder. Both methods were effective, but the conditional-only method required fewer sessions to mastery. © Society for the Experimental Analysis of Behavior.
ERIC Educational Resources Information Center
Lee, S. Y.; Jennrich, R. I.
1979-01-01
A variety of algorithms for analyzing covariance structures are considered. Additionally, two methods of estimation, maximum likelihood and weighted least squares, are considered. Comparisons are made between these algorithms and factor analysis. (Author/JKS)
Experimental comparison between performance of the PM and LPM methods in computed radiography
NASA Astrophysics Data System (ADS)
Kermani, Aboutaleb; Feghhi, Seyed Amir Hossein; Rokrok, Behrouz
2018-07-01
Scatter degrades image quality and reduces information efficiency in quantitative measurements when projections are created with ionizing radiation. Therefore, a variety of methods have been applied for scatter reduction and for correction of its undesirable effects. As newer approaches, the ordinary and localized primary modulation methods have already been used individually, through experiments and simulations, in medical and industrial computed tomography, respectively. The aim of this study is to evaluate the capabilities and limitations of these methods in comparison with each other. To this end, ordinary primary modulation has been implemented in computed radiography for the first time, and the potential of both methods has been assessed for thickness measurement as well as for determination of the scatter-to-primary signal ratio. The comparison results, based on experimental outputs obtained using aluminum specimens and continuous X-ray spectra, favour the localized primary modulation method because of its improved accuracy and higher performance, especially at the edges.
A method of assigning socio-economic status classification to British Armed Forces personnel.
Yoong, S Y; Miles, D; McKinney, P A; Smith, I J; Spencer, N J
1999-10-01
The objective of this paper was to develop and evaluate a socio-economic status classification method for British Armed Forces personnel. Two study groups comprising civilian and Armed Forces families were identified from livebirths delivered between 1 January and 30 June 1996 within the Northallerton Health district, which includes Catterick Garrison and RAF Leeming. The participants were the parents of babies delivered at a District General Hospital, comprising 436 civilian and 162 Armed Forces families. A new classification method was successfully used to assign the Registrar General's social classification to Armed Forces personnel. Comparison of the two study groups showed a significant difference in social class distribution (p = 0.0001). This study has devised a new method for classifying occupations within the Armed Forces into categories of social class, thus permitting comparison with the Registrar General's classification.
Mbao, V; Speybroeck, N; Berkvens, D; Dolan, T; Dorny, P; Madder, M; Mulumba, M; Duchateau, L; Brandt, J; Marcotty, T
2005-07-01
Theileria parva sporozoite stabilates are used in the infection and treatment method of immunization, a widely accepted control option for East Coast fever in cattle. T. parva sporozoites are extracted from infected adult Rhipicephalus appendiculatus ticks either manually, using a pestle and a mortar, or by use of an electric homogenizer. A comparison of the two methods as a function of stabilate infectivity has never been documented. This study was designed to provide a quantitative comparison of stabilates produced by the two methods. The approach was to prepare batches of stabilate by both methods and then subject them to in vitro titration. Equivalence testing was then performed on the average effective doses (ED). The ratio of infective sporozoites yielded by the two methods was found to be 1.14 in favour of the manually ground stabilate with an upper limit of the 95% confidence interval equal to 1.3. We conclude that the choice of method rests more on costs, available infrastructure and standardization than on which method produces a richer sporozoite stabilate.
Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons
2014-01-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard
2009-07-22
Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. We reviewed studies providing both a direct and an indirect utility estimate: papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patients, identified in PubMed and the Tufts database of utilities. A sign test was used for paired comparisons between direct and indirect utilities, and least squares regression to describe average relations between the different methods. Outcome measures were mean utility scores (or medians if means were unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
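The paired sign test used to compare direct and indirect utilities can be sketched as follows; the utility pairs are invented for illustration, and the two-sided sign test is computed as a binomial test on the signs of the paired differences.

```python
# Sketch of a paired sign test for direct vs indirect utilities (toy data).
from scipy.stats import binomtest

direct =   [0.81, 0.75, 0.90, 0.68, 0.85, 0.77, 0.80]
indirect = [0.62, 0.70, 0.72, 0.55, 0.80, 0.60, 0.66]

higher = sum(d > i for d, i in zip(direct, indirect))
ties = sum(d == i for d, i in zip(direct, indirect))
n = len(direct) - ties
print(binomtest(higher, n, p=0.5).pvalue)   # two-sided sign test
```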
ERIC Educational Resources Information Center
Alhajri, Salman
2016-01-01
Purpose: This paper investigates the effectiveness of teaching methods used in graphic design pedagogy in both analogue and digital education systems. Methodology and approach: The paper is based on a theoretical study using a qualitative, case study approach. Comparison between the digital teaching methods and traditional teaching methods was…
Cost of Living and Taxation Adjustments in Salary Comparisons. AIR 1993 Annual Forum Paper.
ERIC Educational Resources Information Center
Zeglen, Marie E.; Tesfagiorgis, Gebre
This study examined faculty salaries at 50 higher education institutions using methods to adjust salaries for geographic differences, cost of living, and tax burdens so that comparisons were based on real rather than nominal value of salaries. The study sample consisted of one public doctorate granting institution from each state and used salary…
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
ERIC Educational Resources Information Center
Titze, Ingo R.; Hunter, Eric J.
2015-01-01
Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…
Comparison of thruster configurations in attitude control systems. M.S. Thesis. Progress Report
NASA Technical Reports Server (NTRS)
Boland, J. S., III; Drinkard, D. M., Jr.; White, L. R.; Chakravarthi, K. R.
1973-01-01
Several aspects concerning reaction control jet systems as used to govern the attitude of a spacecraft were considered. A thruster configuration currently in use was compared to several new configurations developed in this study. The method of determining the error signals which control the firing of the thrusters was also investigated. The current error determination procedure is explained and a new method is presented. Both of these procedures are applied to each of the thruster configurations which are developed and comparisons of the two methods are made.
Yin, Ailing; Han, Zhifeng; Shen, Jie; Guo, Liwei; Cao, Guiping
2011-10-01
To study the separation of essential oil from the oil-in-water emulsion of Citri Reticulatae Pericarpium Viride by ultrafiltration and by acetoacetate extraction, and to compare the oil yields and chemical compositions. The essential oil-in-water emulsion of Citri Reticulatae Pericarpium Viride was separated by ultrafiltration and by acetoacetate extraction, and the chemical compositions were analyzed and compared by GC-MS. The ultrafiltration method enriched the essential oil more, and its chemical composition was more similar to that of essential oil prepared by steam distillation. Ultrafiltration is a good method for separating essential oil from the essential oil-in-water emulsion of Citri Reticulatae Pericarpium Viride.
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method
Niks, Irene; Gevers, Josette
2018-01-01
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.
Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene
2018-02-13
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.
Comparison of Two Methods for Anthocyanin Quantification
USDA-ARS?s Scientific Manuscript database
The pH differential method (AOAC method 2005.02), performed by spectrophotometer, and high performance liquid chromatography (HPLC) are methods commonly used by researchers and the food industry for quantifying anthocyanins in samples or products. This study was carried out to establish a relationship between ...
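For reference, the calculation behind the pH differential method can be sketched as below; the formula follows the AOAC 2005.02 convention (total monomeric anthocyanins expressed as cyanidin-3-glucoside), while the absorbance readings and dilution factor are invented for illustration.

```python
# Sketch of the AOAC 2005.02 pH differential calculation for total monomeric
# anthocyanins (as cyanidin-3-glucoside equivalents); readings are illustrative.
def anthocyanin_mg_per_l(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                         dilution_factor, mw=449.2, eps=26900, path_cm=1.0):
    """AOAC 2005.02: (A * MW * DF * 1000) / (eps * path)."""
    a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return a * mw * dilution_factor * 1000 / (eps * path_cm)

print(round(anthocyanin_mg_per_l(0.82, 0.03, 0.15, 0.02, dilution_factor=5), 1))
```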
Aguiar, Lorena Andrade de; Melo, Lauro; de Lacerda de Oliveira, Lívia
2018-04-03
A major drawback of the conventional descriptive profile (CDP) in sensory evaluation is the long time spent on panel training. The use of rapid descriptive methods (RDM) has therefore increased significantly, and some of them have been compared with CDP for validation. In the health sciences, systematic reviews (SR) are performed to evaluate the validation of diagnostic tests against a gold standard method. SR follow a well-defined protocol to summarize research evidence and to evaluate the quality of the studies against predetermined criteria. We adapted the SR protocol to evaluate the validation of RDM against CDP as satisfactory procedures to obtain food characterization. We used the "Population, Intervention, Comparison, Outcome, Study design" (PICOS) framework to design the research, in which "Population" was foods and beverages, "Intervention" was RDM, "Comparison" was CDP as the gold standard, "Outcome" was the ability of RDM to generate descriptive profiles similar to those of CDP, and "Studies" were sensory descriptive analyses. The proportion of studies concluding that the RDM agreed with CDP ranged from 0% to 100%. Low and moderate risk of bias were reached by 87% and 13% of the studies, respectively, supporting the conclusions of the SR. RDM with semi-trained assessors and evaluation of individual attributes presented higher percentages of concordance with CDP.
Methods to prefetch comparison images in image management and communication system
NASA Astrophysics Data System (ADS)
Levin, Kenneth; Fielding, Robert
1990-08-01
A high-level description of a system to pre-fetch comparison radiographs in an Image Management and Communication System (IMAC) is presented. This rule-based system estimates the relevance of previous examinations for comparison to the current examination and uses this determination to pre-fetch comparison studies. A machine learning module should allow the system to improve its skill in pre-fetching examinations for each individual radiologist. This system could be tailored to fit the preferences of individual radiologists.
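A toy sketch of the rule-based relevance estimate is given below; the rules, weights, and study attributes are invented to illustrate the idea and are not the system's actual rule base.

```python
# Toy rule-based relevance score for pre-fetching prior studies (illustrative only).
def relevance(current, prior):
    score = 0.0
    if prior["modality"] == current["modality"]:
        score += 0.4
    if prior["body_part"] == current["body_part"]:
        score += 0.4
    score += max(0.0, 0.2 - 0.02 * (current["year"] - prior["year"]))  # recency bonus
    return score

current = {"modality": "CR", "body_part": "chest", "year": 2024}
priors = [
    {"id": "A", "modality": "CR", "body_part": "chest", "year": 2022},
    {"id": "B", "modality": "CT", "body_part": "abdomen", "year": 2023},
]
to_prefetch = [p["id"] for p in priors if relevance(current, p) >= 0.5]
print(to_prefetch)
```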
Kaur, Inderjeet; Kumar, Raj; Sharma, Neelam
2010-10-13
Functionalization of rayon fibre has been carried out by grafting acrylic acid (AAc) both by a chemical method using a Ce(4+)-HNO(3) redox initiator and by a mutual irradiation (γ-rays) method. The reaction conditions affecting the grafting percentage have been optimized for both methods, and the results are compared. The maximum percentage of grafting (50%) by the chemical method was obtained utilizing 18.24 × 10(-3) moles/L of ceric ammonium nitrate (CAN), 39.68 × 10(-2) moles/L of HNO(3), and 104.08 × 10(-2) moles/L of AAc in 20 mL of water at 45°C for 120 min. For the radiation method, the maximum grafting percentage (60%) was higher, and the product was obtained under milder reaction conditions using a lower concentration of AAc (69.38 × 10(-2) moles/L) in 10 mL of water at an optimum total dose of 0.932 kGy. Swelling studies showed higher swelling for the grafted rayon fibre in water (854.54%) compared to the pristine fibre (407%), while dye uptake studies revealed poorer uptake of the dye (crystal violet) by the grafted fibre in comparison with the pristine fibre. The graft copolymers were characterized by IR, TGA, and scanning electron microscopy. Grafted fibre prepared by the radiation-induced method showed better thermal behaviour. Comparison of the two methods revealed that radiation-induced grafting of acrylic acid onto rayon fibre is the better of the two. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mccain, W. E.
1984-01-01
Predictions from an unsteady aerodynamic lifting-surface theory, the Doublet Lattice method, were compared with experimental steady and unsteady pressure measurements of a high-aspect-ratio supercritical wing model at a Mach number of 0.78. The steady pressure data comparisons were made for incremental changes in angle of attack and control surface deflection. The unsteady pressure data comparisons were made at set angle-of-attack positions with oscillating control surface deflections. Significant viscous and transonic effects, which cannot be predicted by the Doublet Lattice method, are shown in the experimental aerodynamics. This study should assist the development of empirical correction methods that may be applied to improve Doublet Lattice calculations of lifting surface aerodynamics.
Investigating Test Equating Methods in Small Samples through Various Factors
ERIC Educational Resources Information Center
Asiret, Semih; Sünbül, Seçil Ömür
2016-01-01
This study aimed to compare equating methods for the random groups design in small samples across factors such as sample size, the difference in difficulty between forms, and the guessing parameter. It also investigated which method gives better results under which conditions. In this study, 5,000 dichotomous simulated data…
Yabe, Tetsuji; Tsuda, Tomoyuki; Hirose, Shunsuke; Ozawa, Toshiyuki
2012-05-01
In this article, a comparison of microsurgical replantation (replantation) with the Brent method and its modification (the pocket principle) in the treatment of fingertip amputation is reported. As a classification of amputation level, we used Ishikawa's subzone classification of fingertip amputation, and only amputations in subzone 2 were included in this study. Between these two groups, there was no statistical difference in survival rate, postoperative atrophy, or postoperative range of motion. In terms of sensory recovery, some records were lost and an exact comparison was difficult, but there was no obvious difference between the cases. In our comparison of microsurgical replantation versus the pocket principle in the treatment of subzone 2 fingertip amputation, there was no difference in postoperative results. Each method has pros and cons, and the surgeon should choose which technique to use based on his or her understanding of the characteristics of both methods.
Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients
Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan
2016-01-01
Aim: The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be’sat Army Hospital from 2013-2015. In the present study, differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients affected with diabetes were investigated. Results: Our results showed that there is a significant difference in the visual acuity scores of patients between manual and autorefractometry. Despite this, the spherical equivalent scores of the two refractometry methods did not show a statistically significant difference in these patients. Conclusion: Although manual refraction is comparable with autorefraction for evaluating spherical equivalent in diabetic patients with retinopathy, the visual acuity results obtained with the two methods are not comparable. PMID:27703289
A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose
2010-01-01
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between minimizing false positives (Type I error) and not being so stringent that true effects are omitted (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ across areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections, which allow for more sensitivity, may be beneficial, although they also produce more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
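As a generic illustration of how stringency differs between familywise error control and false discovery rate control (not the SPM12 implementations evaluated in the study), the following sketch applies a Bonferroni threshold and the Benjamini-Hochberg step-up rule to a vector of simulated voxel p-values; numpy and the toy data are assumptions.

```python
import numpy as np

def bonferroni_threshold(p_values, alpha=0.05):
    """Familywise-error control: reject only p-values below alpha / m."""
    m = p_values.size
    return p_values < (alpha / m)

def benjamini_hochberg(p_values, alpha=0.05):
    """False discovery rate control via the Benjamini-Hochberg step-up rule."""
    m = p_values.size
    order = np.argsort(p_values)
    ranked = p_values[order]
    # Largest k with p_(k) <= (k/m) * alpha; reject the k smallest p-values.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        rejected[order[:k + 1]] = True
    return rejected

# Toy example: 10,000 "voxel" p-values, a few strong effects among noise.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 1e-4, 50), rng.uniform(0, 1, 9950)])
print(bonferroni_threshold(p).sum(), "voxels survive Bonferroni")
print(benjamini_hochberg(p).sum(), "voxels survive BH-FDR")
```

With the same alpha, the Bonferroni rule typically retains fewer voxels than BH-FDR, which mirrors the sensitivity versus false-positive trade-off discussed in the abstract.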
Varabyova, Yauheniya; Müller, Julia-Maria
2016-03-01
There has been ongoing interest in the analysis and comparison of the efficiency of health care systems using nonparametric and parametric applications. The objective of this study was to review the current state of the literature and to synthesize the findings on health system efficiency in OECD countries. We systematically searched five electronic databases through August 2014 and identified 22 studies that analyzed the efficiency of health care production at the country level. We summarized these studies with regard to their samples, methods, and variables. We developed and applied a checklist of 14 items to assess the quality of the reviewed studies along four dimensions: reporting, external validity, bias, and power. Moreover, to examine the internal validity of the findings, we meta-analyzed the efficiency estimates reported in 35 models from ten studies. The qualitative synthesis of the literature indicated large differences in study designs and methods. The meta-analysis revealed low correlations between country rankings, suggesting a lack of internal validity of the efficiency estimates. In conclusion, methodological problems of existing cross-country comparisons of the efficiency of health care systems call into question the ability of these comparisons to provide meaningful guidance to policy-makers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ismanto, A. W.; Kusuma, H. S.; Mahfud, M.
2017-12-01
The comparison of solvent-free microwave extraction (SFME) and microwave hydrodistillation (MHD) for the extraction of essential oil from Melaleuca leucadendra Linn. was examined. Dry cajuput leaves were used in this study. The study also aimed to determine the optimal operating condition (microwave power). The relative electric consumption of the SFME and MHD methods was 0.1627 kWh/g and 0.3279 kWh/g, respectively. The results showed that the solvent-free microwave extraction method is able to reduce energy consumption and can be regarded as a green technique for the extraction of cajuput oil.
(PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS
Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...
A METHODS COMPARISON FOR COLLECTING MACROINVERTEBRATES IN THE OHIO RIVER
Collection of representative benthic macroinvertebrate samples from large rivers has been challenging researchers for many years. The objective of our study was to develop an appropriate method(s) for sampling macroinvertebrates from the Ohio River. Four existing sampling metho...
ERIC Educational Resources Information Center
Cornish, Richard D.; Dilley, Josiah S.
1973-01-01
Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desensitization is more effective than the others, and that implosive…
KEY COMPARISON: Final report of the CCQM-K56: Ca, Fe, Zn and Cu in whole fat soybean powder
NASA Astrophysics Data System (ADS)
Liandi, Ma; Qian, Wang
2010-01-01
The CCQM-K56 key comparison was organized by the Inorganic Analysis Working Group (IAWG) of CCQM as a follow-up to the completed pilot study CCQM-P64 to test the abilities of national metrology institutes to measure the amount content of nutritious elements in whole fat soybean powder. A pilot study, CCQM-P64.1, was conducted in parallel with this key comparison. The National Institute of Metrology (NIM), P. R. China, acted as the coordinating laboratory. Eleven NMIs participated in CCQM-K56. Four elements - Ca, Fe, Zn and Cu - at different concentration levels were studied. Different measurement methods (IDMS, ICP-MS, ICP-OES, AAS and INAA) and the microwave digestion method were used. The agreement of the CCQM-K56 results is very good, and clearly better than that of the original P64, showing that the capability of all participants had improved between the original pilot study and this key comparison. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Review and discussion of homogenisation methods for climate data
NASA Astrophysics Data System (ADS)
Ribeiro, S.; Caineta, J.; Costa, A. C.
2016-08-01
The quality of climate data is of extreme relevance, since these data are used in many different contexts. However, few climate time series are free from non-natural irregularities. These inhomogeneities are related to the processes of collecting, digitising, processing, transferring, storing and transmitting climate data series. For instance, they can be caused by changes of measuring instrumentation, observing practices or relocation of weather stations. In order to avoid errors and bias in the results of analyses that use those data, it is particularly important to detect and remove those non-natural irregularities prior to their use. Moreover, due to the increase of storage capacity, the recent gathering of massive amounts of weather data also demands considerable effort to guarantee their quality. The process of detection and correction of irregularities is named homogenisation. A comprehensive summary and description of the available homogenisation methods is critical to climatologists and other experts seeking the most appropriate homogenisation method, since no single method is universally considered the best. The effectiveness of homogenisation methods depends on the type, temporal resolution and spatial variability of the climatic variable. Several comparison studies have been published so far. However, due to the absence of time series in which the irregularities are known, only a few of those comparisons indicate the level of success of the homogenisation methods. This article reviews the characteristics of the most important procedures used in the homogenisation of climatic variables based on a thorough literature review. It also summarises many applications of these methods in order to illustrate their applicability, which may help climatologists and other experts to identify adequate method(s) for their particular needs. This review also describes comparison studies that evaluated the efficiency of homogenisation methods, and provides a summary of conclusions and lessons learned regarding good practices for the use of homogenisation methods.
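To make the idea of homogenisation more concrete, here is a minimal sketch of one common ingredient of relative homogenisation (an illustration only, not any specific method from the review): form the difference series between a candidate and a nearby reference station and scan for the single most likely breakpoint with a simplified SNHT-style statistic. numpy and the toy series are assumptions.

```python
import numpy as np

def detect_single_break(candidate, reference):
    """Locate the most likely single breakpoint in a candidate series
    relative to a reference series (simplified SNHT-style scan)."""
    q = candidate - reference                 # difference series
    z = (q - q.mean()) / q.std(ddof=1)        # standardize
    n = len(z)
    t_stat = np.full(n, -np.inf)
    for k in range(1, n - 1):                 # candidate break before index k
        z1, z2 = z[:k].mean(), z[k:].mean()
        t_stat[k] = k * z1**2 + (n - k) * z2**2
    k_hat = int(np.argmax(t_stat))
    shift = q[k_hat:].mean() - q[:k_hat].mean()
    return k_hat, shift, t_stat[k_hat]

# Toy example: a 0.8-unit jump introduced halfway through a 60-year series.
rng = np.random.default_rng(1)
ref = rng.normal(10.0, 1.0, 60)
cand = ref + rng.normal(0.0, 0.2, 60)
cand[30:] += 0.8
print(detect_single_break(cand, ref))
```

In practice the detected shift would then be removed from the candidate series before the break and the scan repeated to look for further breaks.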
Comparison of ALE and SPH Methods for Simulating Mine Blast Effects on Structures
2010-12-01
Toussaint, Geneviève; Bouamoul, Amal
Defence R&D Canada – Valcartier, Technical Report DRDC Valcartier TR 2010-326, December 2010.
High-resolution digital holography with the aid of coherent diffraction imaging.
Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang
2015-08-10
The image reconstructed in ordinary digital holography cannot reach the resolution of photographic materials, which makes it less suitable for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably in comparison to ordinary digital holography. The theoretical analysis is supported by numerical simulation, and the feasibility of the method is also studied experimentally.
Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying
2011-01-01
Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to examine urbanization detection. An impervious surface image classified with the hybrid method was used to refine the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean-spectral images. PCA was not a good method for urban change detection in this study. Shadows and high spectral variation within impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706
Assessment study of lichenometric methods for dating surfaces
NASA Astrophysics Data System (ADS)
Jomelli, Vincent; Grancher, Delphine; Naveau, Philippe; Cooley, Daniel; Brunstein, Daniel
2007-04-01
In this paper, we discuss the advantages and drawbacks of the most classical approaches used in lichenometry. In particular, we perform a detailed comparison among methods based on the statistical analysis of either the largest lichen diameters recorded on geomorphic features or the frequency of all lichens. To assess the performance of each method, a careful comparison design with well-defined criteria is proposed and applied to two distinct data sets. First, we study 350 tombstones. These represent an ideal test bed because tombstone dates are known and, therefore, the quality of the estimated lichen growth curve can easily be tested for the different techniques. Second, 37 moraines from two tropical glaciers are investigated. This analysis corresponds to our real case study. For both data sets, we apply our list of criteria, which reflects precision, error measurement and theoretical foundations when proposing estimated ages and their associated confidence intervals. From this comparison, it clearly appears that two methods, the mean of the n largest lichen diameters and the recent Bayesian method based on extreme value theory, offer the most reliable estimates of moraine and tombstone dates. Concerning the spread of the error, the latter approach provides the smallest uncertainty, and it is the only one that takes advantage of the statistical nature of the observations by fitting an extreme value distribution to the largest diameters.
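As a minimal sketch of the extreme-value ingredient mentioned above (not the paper's full Bayesian dating model), the following contrasts the mean of the n largest diameters with a generalized extreme value (GEV) fit to per-block maximum lichen diameters; scipy is assumed and the diameters are hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical largest-lichen diameters (mm), one maximum per block/boulder.
max_diameters = np.array([41.0, 44.5, 38.2, 46.1, 43.3, 40.7, 45.0, 42.8,
                          39.6, 44.1, 47.2, 41.9])

# Classical summary: mean of the n largest diameters.
n_largest = 5
mean_n_largest = np.sort(max_diameters)[-n_largest:].mean()

# Extreme-value summary: fit a GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(max_diameters)
p90 = genextreme.ppf(0.90, shape, loc=loc, scale=scale)

print(f"mean of {n_largest} largest diameters: {mean_n_largest:.1f} mm")
print(f"GEV fit: shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}; "
      f"90th percentile={p90:.1f} mm")
```

In an actual lichenometric application, the fitted distribution would feed a growth curve calibrated on surfaces of known age, which is beyond this sketch.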
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
NASA Astrophysics Data System (ADS)
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than the rate of convergence of the SOR-type iterative method.
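For readers unfamiliar with the iterations being compared, here is a minimal, self-contained sketch of plain SOR (which reduces to Gauss-Seidel when the relaxation factor is 1) applied to a small Z-matrix system; it illustrates the basic iterations only, not the preconditioners analysed in the paper.

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for iteration in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries (j < i) and old entries (j > i).
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, iteration
    return x, max_iter

# A small strictly diagonally dominant system (also a Z-matrix: off-diagonals <= 0).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])

for omega in (1.0, 1.25):
    x, iters = sor(A, b, omega)
    print(f"omega={omega}: x={np.round(x, 6)}, iterations={iters}")
```

Convergence speed depends on the relaxation factor and on the matrix; preconditioning, as studied in the paper, modifies the system before iterating to improve that rate.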
Comparison of conventional therapies for dentin hypersensitivity versus medical hypnosis.
Eitner, Stephan; Bittner, Christian; Wichmann, Manfred; Nickenig, Hans-Joachim; Sokol, Biljana
2010-10-01
This study compared the efficacy of conventional treatments for dentin hypersensitivity (DHS) and hypnotherapy. During a 1-month period at an urban practice in a service area of approximately 22,000 inhabitants, all patients were examined. A total of 102 individuals were included in the evaluation. Values of 186 teeth were analyzed. The comparison of the different treatment methods (desensitizer, fluoridation, and hypnotherapy) did not show significant differences in success rates. However, a noticeable difference was observed in terms of onset and duration of effect. For both desensitizer and hypnotherapy treatments, onset of effect was very rapid. Compared to the other methods studied, hypnotherapy effects had the longest duration. In conclusion, hypnotherapy was as effective as other methods in the treatment of DHS.
Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W
1988-08-01
The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 +/- 2.7 years) and 9 healthy children (mean age = 8.9 +/- 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.
Benloucif, Susan; Burgess, Helen J.; Klerman, Elizabeth B.; Lewy, Alfred J.; Middleton, Benita; Murphy, Patricia J.; Parry, Barbara L.; Revell, Victoria L.
2008-01-01
Study Objectives: To provide guidelines for collecting and analyzing urinary, salivary, and plasma melatonin, thereby assisting clinicians and researchers in determining which method of measuring melatonin is most appropriate for their particular needs and facilitating the comparison of data between laboratories. Methods: A modified RAND process was utilized to derive recommendations for methods of measuring melatonin in humans. Results: Consensus-based guidelines are presented for collecting and analyzing melatonin for studies that are conducted in the natural living environment, the clinical setting, and in-patient research facilities under controlled conditions. Conclusions: The benefits and disadvantages of current methods of collecting and analyzing melatonin are summarized. Although a single method of analysis would be the most effective way to compare studies, limitations of current methods preclude this possibility. Given that the best analysis method for use under multiple conditions is not established, it is recommended to include, in any published report, one of the established low threshold measures of dim light melatonin onset to facilitate comparison between studies. Citation: Benloucif S; Burgess HJ; Klerman EB; Lewy AJ; Middleton B; Murphy PJ; Parry BL; Revell VL. Measuring melatonin in humans. J Clin Sleep Med 2008;4(1):66-69. PMID:18350967
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...
A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS
In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...
Comparison of micrometeorological techniques in measuring gas emissions from waste lagoons
USDA-ARS's Scientific Manuscript database
In this study, we evaluated and compared the accuracies of two micrometeorological methods using open-path tunable diode laser absorption spectrometers: the vertical radial plume mapping method (US EPA OTM-10) and the inverse dispersion model method. The accuracy of these two methods was evaluated using a 45m x 45m p...
School District Enrollment Projections: A Comparison of Three Methods.
ERIC Educational Resources Information Center
Pettibone, Timothy J.; Bushan, Latha
This study assesses three methods of forecasting school enrollments: the cohort-survival method (grade progression), the statistical forecasting procedure developed by the Statistical Analysis System (SAS) Institute, and a simple ratio computation. The three methods were used to forecast school enrollments for kindergarten through grade 12 in a…
A DUST-SETTLING CHAMBER FOR SAMPLING-INSTRUMENT COMPARISON STUDIES
Introduction: Few methods exist that can evenly and reproducibly deposit dusts onto surfaces for surface-sampling methodological studies. A dust-deposition chamber was designed for that purpose.
Methods: A 1-m³ Rochester-type chamber was modified to produce high airborne d...
ERIC Educational Resources Information Center
Rakap, Salih; Snyder, Patricia; Pasia, Cathleen
2014-01-01
Debate is occurring about which result interpretation aids focused on examining the experimental effect should be used in single-subject experimental research. In this study, we examined seven nonoverlap methods and compared the results of each method to the judgments of two visual analysts. The data sources for the present study were 36 studies…
Comparison of Three Tobacco Survey Methods with College Students: A Case Study
ERIC Educational Resources Information Center
James, Delores C. S.; Chen, W. William; Sheu, Jiunn-Jye
2005-01-01
The goals of this case study were to: (1) determine the efficiency and effectiveness of three survey methods--postal mail survey, web-based survey, and random in-class administration survey--in assessing tobacco-related attitudes and behaviors among college students and (2) compare the response rate and procedures of these three methods. There was…
Solid Propellant Test Motor Scaling
2001-09-01
[List-of-figures excerpt] Figure 40: Comparison of measured and calculated strand and small motor burning rates for fundamental studies of HTPB/AP smokeless propellants. Figure 41: Agreement between 2x4 motor and strand burning rate data for non-aluminized HTPB/AP propellants. Figure 51: Comparison between results obtained with ultrasonic method and standard…
Research report on: Specialized physiological studies in support of manned space flight
NASA Technical Reports Server (NTRS)
Luft, U. C.
1975-01-01
An investigation of the role of O2 fluctuations in the oxygen uptake observed with changing posture is reported. A comparison of the closing volume test with other pulmonary function measurements is presented, along with a comparison of hydrostatic weighing and a stereophotogrammetric method for determining body volume.
ERIC Educational Resources Information Center
Harik, Polina; Baldwin, Peter; Clauser, Brian
2013-01-01
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
Why Singaporean 8th Grade Students Gain Highest Mathematics Ranking in TIMSS (1999-2011)
ERIC Educational Resources Information Center
Lessani, Abdolreza; Yunus, Aida Suraya Md; Tarmiz, Rohani Ahmad; Mahmud, Rosnaini
2014-01-01
The international comparison of students' mathematics knowledge and competencies is an effective method of evaluating students' mathematics performance and developing policies to improve their achievements in mathematics. Trends in International Mathematics and Science Study (TIMSS) are among the most well-recognized international comparisons that…
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Chhapola, Viswas; Kanwal, Sandeep Kumar; Brar, Rekha
2015-05-01
The aim was to carry out a cross-sectional survey of laboratory research papers published after 2012 and available through common search engines (PubMed, Google Scholar), assessing the quality of statistical reporting of method comparison studies that used Bland-Altman (B-A) analysis. Fifty clinical studies were identified that had undertaken method comparison of laboratory analytes using B-A. The reporting of B-A was evaluated using a predesigned checklist with the following six items: (1) correct representation of the x-axis on the B-A plot, (2) representation and correct definition of the limits of agreement (LOA), (3) reporting of the confidence interval (CI) of the LOA, (4) comparison of the LOA with a priori defined clinical criteria, (5) evaluation of the pattern of the relationship between the difference (y-axis) and the average (x-axis) and (6) measures of repeatability. The x-axis and LOA were presented correctly in 94%, comparison with a priori clinical criteria in 74%, CI reporting in 6%, evaluation of pattern in 28% and repeatability assessment in 38% of studies. There is incomplete reporting of B-A in published clinical studies. Despite its simplicity, B-A appears not to be completely understood by researchers, reviewers and editors of journals. There appear to be differences in the reporting of B-A between laboratory medicine journals and other clinical journals. Uniform reporting of the B-A method will enhance the generalizability of results. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
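For readers less familiar with the checklist items above, here is a minimal sketch of the core Bland-Altman quantities (bias, 95% limits of agreement, and their approximate confidence intervals, using Bland and Altman's large-sample SE approximation for the limits); numpy/scipy and the paired readings are assumptions.

```python
import numpy as np
from scipy import stats

def bland_altman(x, y, ci=0.95):
    """Bias, 95% limits of agreement, and approximate CIs for paired methods."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    bias, sd = d.mean(), d.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
    t = stats.t.ppf(0.5 + ci / 2, n - 1)
    se_bias = sd / np.sqrt(n)
    se_loa = sd * np.sqrt(3.0 / n)     # Bland & Altman's approximation
    return {
        "bias": (bias, bias - t * se_bias, bias + t * se_bias),
        "loa_low": (loa_low, loa_low - t * se_loa, loa_low + t * se_loa),
        "loa_high": (loa_high, loa_high - t * se_loa, loa_high + t * se_loa),
    }

# Hypothetical paired analyte readings from a new and a reference method.
new_method = np.array([92, 105, 130, 88, 150, 170, 115, 99, 142, 160])
reference = np.array([95, 103, 128, 90, 155, 165, 118, 97, 145, 158])
for name, (est, lo, hi) in bland_altman(new_method, reference).items():
    print(f"{name}: {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Item (4) of the checklist would then compare the computed limits of agreement against clinically acceptable difference limits defined before the study.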
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). An accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following the test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared with the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared with the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the accuracy criteria of ISO 15197:2003 for the evaluated test strip lots. Applying the ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
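As an illustration of how such accuracy criteria are typically tallied, here is a small sketch applying the ISO 15197:2013 system accuracy limits as they are commonly summarized (at least 95% of meter results within ±15 mg/dL of the comparison value below 100 mg/dL, or within ±15% at or above 100 mg/dL); the exact limits should be checked against the standard itself, and the paired readings below are hypothetical.

```python
import numpy as np

def iso15197_2013_within_limits(meter, comparison):
    """Fraction of meter results within the ISO 15197:2013 accuracy limits
    (as commonly summarized: ±15 mg/dL below 100 mg/dL, ±15% at or above)."""
    meter, comparison = np.asarray(meter, float), np.asarray(comparison, float)
    limit = np.where(comparison < 100.0, 15.0, 0.15 * comparison)
    within = np.abs(meter - comparison) <= limit
    return within.mean()

# Hypothetical paired readings: SMBG meter vs. laboratory comparison method (mg/dL).
meter = np.array([82, 95, 101, 118, 143, 160, 187, 210, 250, 300])
lab = np.array([85, 90, 104, 120, 150, 158, 180, 220, 245, 310])
frac = iso15197_2013_within_limits(meter, lab)
print(f"{frac:.0%} of results within limits (criterion: at least 95%)")
```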
COMPARISON OF MACROINVERTEBRATE SAMPLING METHODS FOR NONWADEABLE STREAMS
The bioassessment of nonwadeable streams in the United States is increasing, but methods for these systems are not as well developed as for wadeable streams. In this study, we compared six benthic macroinvertebrate field sampling methods for nonwadeable streams based on those us...
Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C
2015-02-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Wells, Aaron R; Hamar, Brent; Bradley, Chastity; Gandy, William M; Harrison, Patricia L; Sidney, James A; Coberley, Carter R; Rula, Elizabeth Y; Pope, James E
2013-02-01
Evaluation of chronic care management (CCM) programs is necessary to determine the behavioral, clinical, and financial value of the programs. Financial outcomes of members who are exposed to interventions (treatment group) typically are compared to those not exposed (comparison group) in a quasi-experimental study design. However, because member assignment is not randomized, outcomes reported from these designs may be biased or inefficient if study groups are not comparable or balanced prior to analysis. Two matching techniques used to achieve balanced groups are Propensity Score Matching (PSM) and Coarsened Exact Matching (CEM). Unlike PSM, CEM has been shown to yield estimates of causal (program) effects that are lowest in variance and bias for any given sample size. The objective of this case study was to provide a comprehensive comparison of these 2 matching methods within an evaluation of a CCM program administered to a large health plan during a 2-year time period. Descriptive and statistical methods were used to assess the level of balance between comparison and treatment members pre matching. Compared with PSM, CEM retained more members, achieved better balance between matched members, and resulted in a statistically insignificant Wald test statistic for group aggregation. In terms of program performance, the results showed an overall higher medical cost savings among treatment members matched using CEM compared with those matched using PSM (-$25.57 versus -$19.78, respectively). Collectively, the results suggest CEM is a viable alternative, if not the most appropriate matching method, to apply when evaluating CCM program performance. (Population Health Management 2013;16:35-45) PMID:22788834
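To illustrate the core idea behind CEM (not the full algorithm or weighting used in the evaluation above), the following sketch coarsens two covariates into bins, matches exactly on the bin signature, and keeps only strata that contain both treated and comparison members; pandas is assumed, and the member data, bin edges, and column names are hypothetical.

```python
import pandas as pd

def coarsened_exact_match(df, treatment_col, coarsening):
    """Coarsen covariates into bins and keep only strata that contain
    both treated and comparison members (the core CEM pruning step)."""
    strata = pd.DataFrame(index=df.index)
    for col, bins in coarsening.items():
        strata[col] = pd.cut(df[col], bins=bins, labels=False)
    key = strata.astype(str).agg("|".join, axis=1)
    matched_keys = (
        df.assign(_key=key)
          .groupby("_key")[treatment_col]
          .nunique()
          .loc[lambda s: s == 2]   # stratum contains both groups
          .index
    )
    return df[key.isin(matched_keys)]

# Hypothetical members: age and baseline cost; treated = 1 means in the CCM program.
members = pd.DataFrame({
    "treated":  [1, 1, 1, 1, 0, 0, 0, 0],
    "age":      [45, 62, 70, 70, 44, 63, 71, 30],
    "baseline": [900, 3200, 5100, 900, 950, 3000, 5500, 400],
})
matched = coarsened_exact_match(
    members, "treated",
    coarsening={"age": [18, 50, 65, 80], "baseline": [0, 1000, 4000, 10000]},
)
print(matched)   # the treated member with no comparison in its stratum is pruned
```

In a full CEM analysis, stratum-level weights would then be applied when estimating program effects on the pruned sample.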
Numerical Implementation of the Cohesive Soil Bounding Surface Plasticity Model. Volume I.
1983-02-01
AD-R24 866, California Univ., Davis, Dept. of Civil Engineering. A study of various numerical means for implementing the bounding surface plasticity model for cohesive soils is presented. A comparison is made of… [Table-of-contents entries: 3.4 Selection of Methods for Comparison; 3.5 Theory; 3.5.1 Solution Methods; 3.5.2 Reduction of the Number of Equations]
Comparisons of several aerodynamic methods for application to dynamic loads analyses
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Miller, R. D.
1976-01-01
The results of a study are presented in which the applicability at subsonic speeds of several aerodynamic methods for predicting dynamic gust loads on aircraft, including active control systems, was examined and compared. These aerodynamic methods varied from steady state to an advanced unsteady aerodynamic formulation. Brief descriptions of the structural and aerodynamic representations and of the motion and load equations are presented. Comparisons of numerical results achieved using the various aerodynamic methods are shown in detail. From these results, aerodynamic representations for dynamic gust analyses are identified. It was concluded that several aerodynamic methods are satisfactory for dynamic gust analyses of configurations having either controls fixed or active control systems that primarily affect the low frequency rigid body aircraft response.
Sport fishing: a comparison of three indirect methods for estimating benefits.
Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight
1988-01-01
Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...
Wong, M S; Cheng, J C Y; Lo, K H
2005-04-01
The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. Initial control of apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle or apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide a similar result in the initial stage of treatment compared with the manual method.
NASA Astrophysics Data System (ADS)
Borodinov, A. A.; Myasnikov, V. V.
2018-04-01
The present work compares the accuracy of well-known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
Comparison of Observational Methods and Their Relation to Ratings of Engagement in Young Children
ERIC Educational Resources Information Center
Wood, Brenna K.; Hojnoski, Robin L.; Laracy, Seth D.; Olson, Christopher L.
2016-01-01
Although, collectively, results of earlier direct observation studies suggest momentary time sampling (MTS) may offer certain technical advantages over whole-interval (WIR) and partial-interval (PIR) recording, no study has compared these methods for measuring engagement in young children in naturalistic environments. This study compared direct…
A comparison of three fiber tract delineation methods and their impact on white matter analysis.
Sydnor, Valerie J; Rivas-Grajales, Ana María; Lyall, Amanda E; Zhang, Fan; Bouix, Sylvain; Karmacharya, Sarina; Shenton, Martha E; Westin, Carl-Fredrik; Makris, Nikos; Wassermann, Demian; O'Donnell, Lauren J; Kubicki, Marek
2018-05-19
Diffusion magnetic resonance imaging (dMRI) is an important method for studying white matter connectivity in the brain in vivo in both healthy and clinical populations. Improvements in dMRI tractography algorithms, which reconstruct macroscopic three-dimensional white matter fiber pathways, have allowed for methodological advances in the study of white matter; however, insufficient attention has been paid to comparing post-tractography methods that extract white matter fiber tracts of interest from whole-brain tractography. Here we conduct a comparison of three representative and conceptually distinct approaches to fiber tract delineation: 1) a manual multiple region of interest-based approach, 2) an atlas-based approach, and 3) a groupwise fiber clustering approach, by employing methods that exemplify these approaches to delineate the arcuate fasciculus, the middle longitudinal fasciculus, and the uncinate fasciculus in 10 healthy male subjects. We enable qualitative comparisons across methods, conduct quantitative evaluations of tract volume, tract length, mean fractional anisotropy, and true positive and true negative rates, and report measures of intra-method and inter-method agreement. We discuss methodological similarities and differences between the three approaches and the major advantages and drawbacks of each, and review research and clinical contexts for which each method may be most apposite. Emphasis is given to the means by which different white matter fiber tract delineation approaches may systematically produce variable results, despite utilizing the same input tractography and reliance on similar anatomical knowledge. Copyright © 2018. Published by Elsevier Inc.
Grant, Caroline A; Schuetz, Michael; Epari, Devakar
2015-11-26
Successful healing of long bone fractures is dependent on the mechanical environment created within the fracture, which in turn is dependent on the fixation strategy. Recent literature reports have suggested that locked plating devices are too stiff to reliably promote healing. However, in vitro testing of these devices has been inconsistent in both method of constraint and reported outcomes, making comparisons between studies and the assessment of construct stiffness problematic. Each of the methods previously used in the literature was assessed for its effect on the bending of the sample and the resulting stiffness. The choice of outcome measures used in in vitro fracture studies was also assessed. Mechanical testing was conducted on seven-hole locked plate constructs with each method for comparison. Based on the assessment of each method, the use of spherical bearings, ball joints or similar is suggested at both ends of the sample. The use of near and far cortex movement was found to be more comprehensive and more accurate than traditional centrally calculated interfragmentary movement values; stiffness was found to be highly susceptible to the accuracy of deformation measurements and the constraint method, and should only be used as a within-study comparison measure. The reported stiffness values of locked plate constructs from in vitro mechanical testing are highly susceptible to testing constraints and output measures, with many standard techniques overestimating the stiffness of the construct. This raises the need for further investigation into the actual mechanical behaviour within the fracture gap of these devices. Copyright © 2015 Elsevier Ltd. All rights reserved.
Finch, Peter
2017-06-01
Intra-abdominal fat is an important factor in determining the metabolic syndrome/insulin resistance, and thus the risk of diabetes and ischaemic heart disease. Computed Tomography (CT) fat segmentation represents a defined method of quantifying intra-abdominal fat, with attendant radiation risks. Bioimpedance spectroscopy may offer a method of assessment without any risks to the patients. A comparison is made of these two methods. This was a preliminary study of the utility of multifrequency bioimpedance spectroscopy of the mid abdomen as a measure of intra-abdominal fat, by comparison with fat segmentation of an abdominal CT scan in the -30 to -190 HU range. There was a significant (P < 0.01) correlation between intra-abdominal fat and mid-upper arm circumference, as well as with the bioimpedance parameter, the R/S ratio. Multivariate analysis showed that these were the only independent variables and allowed the derivation of a formula to estimate intra-abdominal fat: IAF = 0.02 × MAC - 0.757 × R/S + 0.036. Circumabdominal bioimpedance spectroscopy may prove a useful method of assessing intra-abdominal fat, and may be suitable for use in studies to enhance other measures of body composition, such as mid-upper arm circumference.
Whole brain fiber-based comparison (FBC)-A tool for diffusion tensor imaging-based cohort studies.
Zimmerman-Moreno, Gali; Ben Bashat, Dafna; Artzi, Moran; Nefussy, Beatrice; Drory, Vivian; Aizenstein, Orna; Greenspan, Hayit
2016-02-01
We present a novel method for fiber-based comparison of diffusion tensor imaging (DTI) scans of groups of subjects. The method entails initial preprocessing and fiber reconstruction by tractography of each brain in its native coordinate system. Several diffusion parameters are sampled along each fiber and used in subsequent comparisons. A spatial correspondence between subjects is established based on geometric similarity between fibers in a template set (several choices for template are explored), and fibers in all other subjects. Diffusion parameters between groups are compared statistically for each template fiber. Results are presented at single fiber resolution. As an initial exploratory step in neurological population studies this method points to the locations affected by the pathology of interest, without requiring a hypothesis. It does not make any grouping assumptions on the fibers and no manual intervention is needed. The framework was applied here to 18 healthy subjects and 23 amyotrophic lateral sclerosis (ALS) patients. The results are compatible with previous findings and with the tract based spatial statistics (TBSS) method. Hum Brain Mapp 37:477-490, 2016. © 2015 Wiley Periodicals, Inc.
Critical evaluations of vegetation cover measurement techniques: a response to Thacker et al. (2015)
USDA-ARS?s Scientific Manuscript database
Comparison studies are necessary to reconcile methods that have arisen among disparate rangeland monitoring programs. However, Thacker et al.'s study comparing Daubenmire frame (DF) and line-point intercept (LPI) methods for estimating vegetation cover ignores definitional differences between what t...
Valx: A system for extracting and structuring numeric lab test comparison statements from text
Hao, Tianyong; Liu, Hongfang; Weng, Chunhua
2017-01-01
Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 Diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community. PMID:26940748
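As a toy illustration of the kind of extraction described in step 2 above (numeric, unit, and comparison operator extraction), the following regular-expression sketch pulls structured comparison statements for a few lab variables out of free text; the pattern, variable list, and example criteria are hypothetical and are not the actual Valx rules or knowledge resources.

```python
import re

# Illustrative pattern only: variable names, operators, and units are assumptions.
PATTERN = re.compile(
    r"(?P<variable>HbA1c|hemoglobin A1c|glucose)\s*"
    r"(?P<operator><=|>=|<|>|=)\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mg/dL|mmol/L)?",
    flags=re.IGNORECASE,
)

def extract_comparisons(text):
    """Return structured (variable, operator, value, unit) tuples from text."""
    return [
        (m["variable"], m["operator"], float(m["value"]), m["unit"] or "")
        for m in PATTERN.finditer(text)
    ]

criteria = ("Inclusion: HbA1c >= 7.5% and fasting glucose < 240 mg/dL; "
            "exclusion: hemoglobin A1c > 11%.")
for item in extract_comparisons(criteria):
    print(item)
```

A real pipeline, as the abstract describes, would additionally resolve variable synonyms with UMLS, filter variable-numeric associations by context, and normalize measurement units.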
NASA Astrophysics Data System (ADS)
Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.
2012-02-01
Seven methods designed to delineate homogeneous river segments, belonging to four families, namely — tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models — are compared, firstly on their principles, then on a case study, and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained in order to achieve meaningful comparisons is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and Hidden Markov Models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic Hidden Markov Models are, moreover, liable to exhibit transition areas.
Farer, Leslie J; Hayes, John M
2005-01-01
A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few methods have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. Methods included four fuzzy categorical models: fuzzy kappa's model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the most similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
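For reference, the conventional crisp baseline that the fuzzy models are compared against can be summarized with raw accuracy and Cohen's kappa between two co-registered class maps; the sketch below is a generic illustration with simulated rasters (numpy assumed), not the study's fuzzy similarity measures.

```python
import numpy as np

def cohens_kappa(map_a, map_b):
    """Crisp categorical agreement between two co-registered class maps."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    classes = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Chance agreement from the marginal class proportions of each map.
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Two small 5x5 land-cover maps with classes 0/1/2; map_b is map_a shifted one cell
# as a crude stand-in for misregistration.
rng = np.random.default_rng(2)
map_a = rng.integers(0, 3, size=(5, 5))
map_b = np.roll(map_a, shift=1, axis=1)
print(f"raw accuracy: {np.mean(map_a == map_b):.2f}")
print(f"kappa:        {cohens_kappa(map_a, map_b):.2f}")
```

Fuzzy approaches such as the fuzzy kappa model relax the exact cell-by-cell agreement used here by crediting near matches in category or location, which is what makes them less sensitive to small misregistrations.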
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_μb^AC, which is reconstructed with Chang's attenuation correction factor using the attenuation coefficient μb. The scatter component image is estimated by convolving I_μb^AC with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
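The convolution-and-subtraction idea in the abstract can be sketched schematically as follows; the Gaussian kernel, uniform scatter fraction, and toy image are placeholders, not the calibrated scatter function or scatter fraction image used by the IBSC method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_like_correction(img_ac, kernel_sigma=3.0, scatter_fraction=0.3):
    """Schematic image-based scatter correction:
    scatter estimate = (attenuation-corrected image * scatter kernel) x fraction,
    corrected image  = attenuation-corrected image - scatter estimate."""
    scatter_estimate = scatter_fraction * gaussian_filter(img_ac, kernel_sigma)
    corrected = np.clip(img_ac - scatter_estimate, 0.0, None)
    return corrected, scatter_estimate

# Toy attenuation-corrected slice: a bright disc ("gray matter") on a low background.
y, x = np.mgrid[:64, :64]
img_ac = 10.0 + 90.0 * ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2)
corrected, scatter = ibsc_like_correction(img_ac)
print("mean counts before/after:", img_ac.mean().round(1), corrected.mean().round(1))
```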
Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison
Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo
2016-01-01
We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein ‘full health’ and ‘being dead’ were included as anchor points, without resorting to a cardinal method, such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with ‘full health’ and ‘being dead’ designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; hybrid model between paired comparison and PHE; VAS model; and SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based survey, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values<0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with ‘full health’ and ‘being dead’ included among the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weight can be observed by using a simplified methodology of estimating disability weights. PMID:27606626
Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.
2009-01-01
Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one-rinse step and a 10 : 1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.
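The two factors the study flags as significant (eluant composition and blending) can be tested with a two-way ANOVA. The sketch below uses invented log-count data and the statsmodels formula interface; it illustrates the analysis only and does not reproduce the study's results.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical log10 enterococci recoveries for combinations of eluant and blending
df = pd.DataFrame({
    "log_count": [3.1, 3.0, 2.6, 2.5, 3.2, 3.1, 2.7, 2.4,
                  3.0, 2.9, 2.5, 2.6, 3.3, 3.0, 2.8, 2.5],
    "eluant":   ["PBS", "PBS", "metaphosphate", "metaphosphate"] * 4,
    "blending": ["no", "yes"] * 8,
})
model = smf.ols("log_count ~ C(eluant) * C(blending)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```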
Wong, Anthony F; Pielmeier, Ulrike; Haug, Peter J; Andreassen, Steen
2016-01-01
Objective Develop an efficient non-clinical method for identifying promising computer-based protocols for clinical study. An in silico comparison can provide information that informs the decision to proceed to a clinical trial. The authors compared two existing computer-based insulin infusion protocols: eProtocol-insulin from Utah, USA, and Glucosafe from Denmark. Materials and Methods The authors used eProtocol-insulin to manage intensive care unit (ICU) hyperglycemia with intravenous (IV) insulin from 2004 to 2010. Recommendations accepted by the bedside clinicians directly link the subsequent blood glucose values to eProtocol-insulin recommendations and provide a unique clinical database. The authors retrospectively compared in silico 18 984 eProtocol-insulin continuous IV insulin infusion rate recommendations from 408 ICU patients with those of Glucosafe, the candidate computer-based protocol. The subsequent blood glucose measurement value (low, on target, high) was used to identify if the insulin recommendation was too high, on target, or too low. Results Glucosafe consistently provided more favorable continuous IV insulin infusion rate recommendations than eProtocol-insulin for on target (64% of comparisons), low (80% of comparisons), or high (70% of comparisons) blood glucose. Aggregated eProtocol-insulin and Glucosafe continuous IV insulin infusion rates were clinically similar though statistically significantly different (Wilcoxon signed rank test P = .01). In contrast, when stratified by low, on target, or high subsequent blood glucose measurement, insulin infusion rates from eProtocol-insulin and Glucosafe were statistically significantly different (Wilcoxon signed rank test, P < .001), and clinically different. Discussion This in silico comparison appears to be an efficient nonclinical method for identifying promising computer-based protocols. Conclusion Preclinical in silico comparison analytical framework allows rapid and inexpensive identification of computer-based protocol care strategies that justify expensive and burdensome clinical trials. PMID:26228765
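The paired, non-normal comparison of infusion-rate recommendations reported above corresponds to a Wilcoxon signed-rank test. The sketch below uses a handful of made-up paired rates rather than the 18 984 study comparisons.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired IV insulin infusion rates (units/h) at the same patient-time points
eprotocol = np.array([2.0, 3.5, 1.0, 4.0, 2.5, 5.0, 3.0, 1.5])
glucosafe = np.array([1.8, 3.0, 1.2, 3.6, 2.3, 4.4, 2.8, 1.4])

stat, p = wilcoxon(eprotocol, glucosafe)
print(stat, p)
```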
Scientific study of data analysis
NASA Technical Reports Server (NTRS)
Wu, S. T.
1990-01-01
We present a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized and the accuracy and numerical instability are discussed. On the basis of this investigation, we claim that the two methods do resemble each other qualitatively.
ERIC Educational Resources Information Center
Pijl, Sip Jan; Koster, Marloes; Hannink, Anne; Stratingh, Anna
2011-01-01
One of the methods used most often to assess students' friendships and friendship networks is the reciprocal nomination method. However, an often heard complaint is that this technique produces rather negative outcomes. This study compares the reciprocal nomination method with another method to assess students' friendships and friendship networks:…
Identifying duplicate content using statistically improbable phrases
Errami, Mounir; Sun, Zhaohui; George, Angela C.; Long, Tara C.; Skinner, Michael A.; Wren, Jonathan D.; Garner, Harold R.
2010-01-01
Motivation: Document similarity metrics such as PubMed's ‘Find related articles’ feature, which have been primarily used to identify studies with similar topics, can now also be used to detect duplicated or potentially plagiarized papers within literature reference databases. However, the CPU-intensive nature of document comparison has limited MEDLINE text similarity studies to the comparison of abstracts, which constitute only a small fraction of a publication's total text. Extending searches to include text archived by online search engines would drastically increase comparison ability. For large-scale studies, submitting short phrases encased in direct quotes to search engines for exact matches would be optimal for both individual queries and programmatic interfaces. We have derived a method of analyzing statistically improbable phrases (SIPs) for assistance in identifying duplicate content. Results: When applied to MEDLINE citations, this method substantially improves upon previous algorithms in the detection of duplication citations, yielding a precision and recall of 78.9% (versus 50.3% for eTBLAST) and 99.6% (versus 99.8% for eTBLAST), respectively. Availability: Similar citations identified by this work are freely accessible in the Déjà vu database, under the SIP discovery method category at http://dejavu.vbi.vt.edu/dejavu/ Contact: merrami@collin.edu PMID:20472545
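Precision and recall as reported above reduce to simple set operations over flagged versus known duplicate citation pairs; the pair identifiers below are hypothetical.

```python
def precision_recall(flagged, true_duplicates):
    """Precision and recall of flagged citation pairs against a gold standard."""
    flagged, true_duplicates = set(flagged), set(true_duplicates)
    tp = len(flagged & true_duplicates)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(true_duplicates) if true_duplicates else 0.0
    return precision, recall

flagged = {("pmidA", "pmidB"), ("pmidC", "pmidD"), ("pmidE", "pmidF")}
truth   = {("pmidA", "pmidB"), ("pmidC", "pmidD"), ("pmidG", "pmidH")}
print(precision_recall(flagged, truth))   # approximately (0.67, 0.67)
```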
ERIC Educational Resources Information Center
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.
2009-01-01
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
A Comparison of Cut Scores Using Multiple Standard Setting Methods.
ERIC Educational Resources Information Center
Impara, James C.; Plake, Barbara S.
This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
ERIC Educational Resources Information Center
Wootton-Gorges, Sandra L.; Stein-Wexler, Rebecca; Walton, John W.; Rosas, Angela J.; Coulter, Kevin P.; Rogers, Kristen K.
2008-01-01
Purpose: Chest radiographs (CXR) are the standard method for evaluating rib fractures in abused infants. Computed tomography (CT) is a sensitive method to detect rib fractures. The purpose of this study was to compare CT and CXR in the evaluation of rib fractures in abused infants. Methods: This retrospective study included all 12 abused infants…
ERIC Educational Resources Information Center
Shaffer, Anne; Huston, Lisa; Egeland, Byron
2008-01-01
Objectives: One of the greatest methodological problems in the study of childhood maltreatment is the discrepancy in methods by which cases of child maltreatment are identified. The current study compared incidents of maltreatment identified prospectively, retrospectively, or through a combination of both methods. Method: Within a cohort of 170…
Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B
2009-11-01
The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one-rinse step and a 10 : 1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.
USDA-ARS?s Scientific Manuscript database
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
USDA-ARS?s Scientific Manuscript database
In this study, we evaluated the accuracies of two relatively new micrometeorological methods using open-path tunable diode laser absorption spectrometers: vertical radial plume mapping method (US EPA OTM-10) and the backward Lagrangian stochastic method (Wintrax®). We have evaluated the accuracy of t...
Testing Multilateral Comparisons in Africa.
ERIC Educational Resources Information Center
Bender, M. Lionel
In this paper, the multilateral comparison method of classifying languages is described and analyzed. It is suggested that while it is espoused as a simple and reasonable approach to language classification, the method has serious flaws. "Multilateral" or "mass" comparison (MC) is not a method of genetic language…
The Multifaceted Variable Approach: Selection of Method in Solving Simple Linear Equations
ERIC Educational Resources Information Center
Tahir, Salma; Cavanagh, Michael
2010-01-01
This paper presents a comparison of the solution strategies used by two groups of Year 8 students as they solved linear equations. The experimental group studied algebra following a multifaceted variable approach, while the comparison group used a traditional approach. Students in the experimental group employed different solution strategies,…
Toward a Taxonomy Linking Game Attributes to Learning: An Empirical Study
ERIC Educational Resources Information Center
Bedwell, Wendy L.; Pavlas, Davin; Heyne, Kyle; Lazzara, Elizabeth H.; Salas, Eduardo
2012-01-01
The serious games community is moving toward research focusing on direct comparisons between learning outcomes of serious games and those of more traditional training methods. Such comparisons are difficult, however, due to the lack of a consistent taxonomy of game attributes for serious games. Without a clear understanding of what truly…
ERIC Educational Resources Information Center
Shin, Jongho; Lee, Hyunjoo; Kim, Yongnam
2009-01-01
The purpose of the study was to comparatively investigate student- and school-level factors affecting mathematics achievement of Korean, Japanese and American students. For international comparisons, the PISA 2003 data were analysed by using the Hierarchical Linear Modeling method. The variables of competitive-learning preference, instrumental…
Comparison of the mucoadhesive properties of thiolated polyacrylic acid to thiolated polyallylamine.
Duggan, Sarah; O'Donovan, Orla; Owens, Eleanor; Duggan, Elaine; Hughes, Helen; Cummins, Wayne
2016-02-10
Synthetic polymers, polyacrylic acid (PAA) and polyallylamine (PAAm), were thiolated using different methods. Both polymers resulted in comparable thiol contents, thus allowing for the direct comparison of mucoadhesive and cohesive properties between the well-established thiolated PAA and the more novel thiolated PAAm. Thiolation of both polymers improved the swelling ability and the cohesive and mucoadhesive properties in comparison to unmodified control samples. In this study, it was shown that the swelling ability of the thiolated PAAm sample was far greater than that of the thiolated PAA sample, which, in turn, affected the drug release profile of the thiolated PAAm sample. Importantly, however, the mucoadhesive properties of thiolated PAAm were equivalent to those of the thiolated PAA sample, as demonstrated by both the adhesion times on porcine intestinal tissue measured by the rotating cylinder method and by rheological studies with a mucin solution. This study demonstrates the potential of thiolated polyallylamine as a mucoadhesive drug delivery device. Copyright © 2015 Elsevier B.V. All rights reserved.
Problem Solving Techniques for the Design of Algorithms.
ERIC Educational Resources Information Center
Kant, Elaine; Newell, Allen
1984-01-01
Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…
Perata, E; Ferrari, P; Tarsitani, G
2005-01-01
We studied patients' satisfaction with hospital meals, comparing the "cook & chill" method with "cook & serve". The principal instrument was an anonymous, self-completed comparative questionnaire designed to evaluate differences in customer satisfaction between the two methods.
Assessing Geographic Knowledge with Sketch Maps.
ERIC Educational Resources Information Center
Wise, Naomi; Kon, Jane Heckley
1990-01-01
Maintains that comparison of students' sketch maps at the beginning and end of the year can provide information on how student's representations of the world changes. Describes a study from the California International Studies Project (CISP) that provides an easy method for sorting and summarizing sketch map data. Illustrates the method with…
ERIC Educational Resources Information Center
Rivas, Rodolfo R.
2009-01-01
This exploratory study centered its investigation in the participants' responses provided in 2 different instructional teaching delivery methods (traditional and online) that utilized active-like teaching learning techniques (case studies, group projects, threaded discussions, class discussions, office hours, lectures, computerized assignments,…
NASA Astrophysics Data System (ADS)
Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu
2016-10-01
Biophysical individual-based models (IBMs) have been used to study aspects of early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we aim to address this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g. distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful to determine the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on details of the distributions and whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply and some more complex. We also give recommendations for a multistep process to validate spatial output from IBMs.
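Two of the overlap measures named above can be illustrated on gridded fields. The abstract does not give the exact formulas, so the definitions below (min-overlap of normalized densities and a cell-wise normalized difference) are assumed forms for illustration only, and the gridded larval-density fields are synthetic.

```python
import numpy as np

def overlap_coefficient(model, data):
    """Overlap of two non-negative density fields, each normalized to sum to 1:
    1 = identical distributions, 0 = no spatial overlap (assumed form of an OC)."""
    m = model / model.sum()
    d = data / data.sum()
    return np.minimum(m, d).sum()

def normalized_difference_index(model, data):
    """Cell-wise normalized difference averaged over occupied cells
    (an assumed NDI-type index, not necessarily the paper's definition)."""
    m = model / model.sum()
    d = data / data.sum()
    denom = m + d
    mask = denom > 0
    return np.mean(np.abs(m[mask] - d[mask]) / denom[mask])

# Hypothetical gridded larval density fields (rows x cols)
rng = np.random.default_rng(1)
model_field = rng.gamma(2.0, 1.0, size=(20, 30))
survey_field = model_field + rng.normal(0, 0.5, size=(20, 30)).clip(min=0)
print(overlap_coefficient(model_field, survey_field))
print(normalized_difference_index(model_field, survey_field))
```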
7 CFR 28.179 - Methods of cotton classification and comparison.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...
7 CFR 28.179 - Methods of cotton classification and comparison.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...
7 CFR 28.179 - Methods of cotton classification and comparison.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...
7 CFR 28.179 - Methods of cotton classification and comparison.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...
7 CFR 28.179 - Methods of cotton classification and comparison.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...
Delayed reward discounting and addictive behavior: a meta-analysis
Amlung, Michael T.; Few, Lauren R.; Ray, Lara A.; Sweet, Lawrence H.; Munafò, Marcus R.
2011-01-01
Rationale Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. Objectives The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Methods Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). Results From the total comparisons identified, a small magnitude effect was evident (d=.15; p<.00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d=.58; p<.00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d=.61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. Conclusions These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed. PMID:21373791
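The pooled effects reported above (Cohen's d with substantial heterogeneity) can be illustrated with a standard effect-size calculation and DerSimonian-Laird random-effects pooling. This is a generic sketch, not the authors' analysis code, and the group summaries in the usage lines are invented.

```python
import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) and its sampling variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def random_effects(d, v):
    """DerSimonian-Laird random-effects pooling of effect sizes d with variances v."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical criterion-vs-control comparisons from two studies
d1, v1 = cohens_d(0.9, 1.0, 40, 0.3, 1.0, 42)
d2, v2 = cohens_d(0.7, 1.1, 25, 0.2, 1.0, 27)
print(random_effects([d1, d2], [v1, v2]))
```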
ERIC Educational Resources Information Center
Merey, Zihni; Kus, Zafer; Karatekin, Kadir
2012-01-01
The purpose of this study is to compare the social studies teaching curricula of Turkey and the United States in terms of values education. The study is a model case study that relies upon one of the qualitative research methods. The data come from the elementary social studies curricula of both countries through the documents analysis method. The…
Thomas C. Brown; George L. Peterson
2009-01-01
The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods
Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang
2003-01-01
Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...
Comparison of Science Process Skills with STEM Career Interests of Middle School Students
ERIC Educational Resources Information Center
Zorlu, Fulya; Zorlu, Yusuf
2017-01-01
This study was aimed to examine the relation between the STEM (Science, Technology, Engineering and Mathematics) career interests and science process skills of middle school seventh grade students. Method of this study was the relational survey method. The study was conducted on the basis of voluntariness and participants were 133 seventh grade…
Lecture and Workshop Modes Comparison on Rangeland Developments: The Case of Iran
ERIC Educational Resources Information Center
Shahvali, M.; Poursaeed, A.; Sharifzadeh, M.
2009-01-01
This study investigated the effects of workshop and lecture methods on pastoralists' learning in Ilam Province, west of Iran. A quasi-experimental research method and non-equivalent control group design was used. Sixty pastoralists participated in this study. An open-ended questionnaire was used as the instrument of the study and found to have…
A Comparison of Artificial Intelligence Methods on Determining Coronary Artery Disease
NASA Astrophysics Data System (ADS)
Babaoğlu, Ismail; Baykan, Ömer Kaan; Aygül, Nazif; Özdemir, Kurtuluş; Bayrak, Mehmet
The aim of this study is to show a comparison of multi-layered perceptron neural network (MLPNN) and support vector machine (SVM) on determination of coronary artery disease existence upon exercise stress testing (EST) data. EST and coronary angiography were performed on 480 patients, with 23 verifying features acquired from each. The robustness of the proposed methods is examined using classification accuracy, the k-fold cross-validation method and Cohen's kappa coefficient. The obtained classification accuracies are approximately 78% and 79% for MLPNN and SVM, respectively. Judging by Cohen's kappa coefficients, both the MLPNN and SVM methods are more satisfactory than the human-based method. Moreover, SVM is slightly better than MLPNN in terms of diagnostic accuracy, the average of sensitivity and specificity, and Cohen's kappa coefficient.
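A cross-validated comparison of an MLP and an SVM scored with accuracy and Cohen's kappa can be sketched with scikit-learn. The synthetic 480 x 23 dataset and the hyperparameters below are stand-ins, not the study's EST data or settings.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 480-patient, 23-feature EST dataset (not the real data)
X, y = make_classification(n_samples=480, n_features=23, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "MLPNN": make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                         random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", random_state=0)),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)
    print(name, accuracy_score(y, pred), cohen_kappa_score(y, pred))
```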
Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E
2003-01-01
The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.
Quantitative comparison of in situ soil CO2 flux measurement methods
Jennifer D. Knoepp; James M. Vose
2002-01-01
Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...
ERIC Educational Resources Information Center
Eshleman, Winston Hull
Compared were programed materials and conventional methods for teaching two units of eighth grade science. Programed materials used were linear programed books requiring constructed responses. The conventional methods included textbook study, written exercises, lectures, discussions, demonstrations, experiments, chalkboard drawings, films,…
Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.
Maryn, Youri; Zarowski, Andrzej
2015-11-01
Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
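The calibration step itself amounts to a linear conversion fitted between each recording system's intensity readings and the sound-level meter values. The paired dB readings below are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical paired readings: uncalibrated intensity (dB) from one recording
# system versus the sound-level meter, at several presentation levels
recorded_db = np.array([52.1, 58.4, 64.9, 71.3, 77.6])
slm_db      = np.array([55.0, 61.0, 67.0, 73.0, 79.0])

slope, intercept = np.polyfit(recorded_db, slm_db, 1)   # linear conversion
calibrated = slope * recorded_db + intercept
print(slope, intercept, np.max(np.abs(calibrated - slm_db)))  # residual error in dB
```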
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly when related to cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace spectral lines in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are one such impact.
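A minimal minimum-variance (Capon) pseudospectrum next to an FFT periodogram makes the filter-order dependence discussed above concrete. The implementation below is a generic textbook form (biased autocorrelation estimate, small diagonal ridge) with an arbitrary filter order, not the authors' simulation code.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorr_matrix(x, p):
    """Biased autocorrelation estimates r[0..p-1], arranged as a Toeplitz matrix."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p)])
    return toeplitz(r)

def capon_spectrum(x, p, freqs):
    """Minimum variance (Capon) pseudospectrum at normalized frequencies freqs."""
    R = autocorr_matrix(x, p)
    Rinv = np.linalg.inv(R + 1e-10 * np.eye(p))   # small ridge for numerical stability
    n = np.arange(p)
    P = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * n)             # steering vector
        P[i] = p / np.real(a.conj() @ Rinv @ a)    # filter-bank style normalization
    return P

# Example: two tones in noise, with an FFT periodogram for comparison
N = 512
t = np.arange(N)
x = np.sin(2*np.pi*0.10*t) + 0.5*np.sin(2*np.pi*0.13*t) + 0.2*np.random.randn(N)
freqs = np.linspace(0, 0.5, 256)
P_mv = capon_spectrum(x, p=30, freqs=freqs)
P_fft = np.abs(np.fft.rfft(x))**2 / N
```

Increasing the filter order p (more degrees of freedom than distinct tones) is what lets the MVM separate the two closely spaced lines, at the cost of extra computation.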
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, which are more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables. PMID:24672565
Kelley, Walter E; Lockwood, Christina M; Cervelli, Denise R; Sterner, Jamie; Scott, Mitchell G; Duh, Show-Hong; Christenson, Robert H
2009-09-01
Performance characteristics of the LOCI cTnI, CK-MB, MYO, NTproBNP and hsCRP methods on the Dimension Vista System were evaluated. Imprecision (following CLSI EP05-A2 guidelines), limit of quantitation (cTnI), limit of blank, linearity on dilution, serum versus plasma matrix studies (cTnI), and method comparison studies were conducted. Method imprecision of 1.8 to 9.7% (cTnI), 1.8 to 5.7% (CK-MB), 2.1 to 2.2% (MYO), 1.6 to 3.3% (NTproBNP), and 3.5 to 4.2% (hsCRP) were demonstrated. The manufacturer's claimed imprecision, detection limits and upper measurement limits were met. Limit of Quantitation was 0.040 ng/mL for the cTnI assay. Agreement of serum and plasma values for cTnI (r=0.99) was shown. Method comparison study results were acceptable. The Dimension Vista cTnI, CK-MB, MYO, NTproBNP, and hsCRP methods demonstrate acceptable performance characteristics for use as an aid in the diagnosis and risk assessment of patients presenting with suspected acute coronary syndromes.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2008-01-01
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
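The two bootstrap flavours differ only in how replicate samples are generated. The sketch below shows the nonparametric variant (resampling observed scores with replacement) for a generic statistic; in the equating application each replicate would instead re-run the equipercentile equating on resampled examinees, which is beyond this sketch.

```python
import numpy as np

def bootstrap_se(scores, statistic=np.median, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error of a statistic: resample the observed
    scores with replacement and take the SD of the replicated statistic."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    reps = [statistic(rng.choice(scores, size=scores.size, replace=True))
            for _ in range(n_boot)]
    return np.std(reps, ddof=1)

# Hypothetical raw test scores
print(bootstrap_se(np.array([12, 15, 14, 18, 20, 11, 16, 17, 13, 19])))
```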
ERIC Educational Resources Information Center
Omari, Deena Rae
Several teaching methods aid young children in learning foreign languages, all of which include continuous repetition and review of learned information. The two methods used in this study were Total Physical Response (TPR) and songs/chants. The TPR method used a gesture for each vocabulary card, and the songs/chants method incorporated Spanish…
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.
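A plain (L2, SVD-based) SSA reconstruction with iterative in-filling of missing points conveys the general idea. The L1-norm variant proposed in the paper replaces the SVD step with a robust low-rank fit, which is beyond this sketch, and the window length and rank below are arbitrary choices.

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Rank-r SSA reconstruction of series x using window length L."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of x
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]          # rank-r approximation
    # Diagonal (Hankel) averaging back to a series
    rec = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return rec / counts

def ssa_impute(x, L=24, r=3, n_iter=50):
    """Iterative SSA imputation: fill NaNs with the mean, reconstruct, replace, repeat."""
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    filled = np.where(miss, np.nanmean(x), x)
    for _ in range(n_iter):
        rec = ssa_reconstruct(filled, L, r)
        filled[miss] = rec[miss]
    return filled

# Toy series with gaps
t = np.arange(200, dtype=float)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(200)
series[[10, 55, 120, 121]] = np.nan
print(ssa_impute(series)[[10, 55, 120, 121]])
```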
A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals
NASA Technical Reports Server (NTRS)
Jones, Michael G.
1998-01-01
The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
Comparison of methods of evaluating hearing benefit of middle ear surgery.
Toner, J G; Smyth, G D
1993-01-01
The objective of this paper is to compare two methods of predicting the level of subjective patient benefit following reconstructive middle ear surgery. This should have always been an important consideration in advising patients regarding surgery, but assumes even more relevance in these days of clinical audit and cost benefit analysis. The two methods studied were the '15/30 dB rule of thumb' (Smyth and Patterson, 1985) and the 'Glasgow plot' (Browning et al., 1991). The predictions of benefit for each of the two methods were compared to the assessment of actual benefits by the patient post-operatively. The results of this comparison in 153 patients were analysed; the rule of thumb was found to be somewhat more sensitive in predicting patient benefit.
Comparison of several asphalt design methods.
DOT National Transportation Integrated Search
1998-01-01
This laboratory study compared several methods of selecting the optimum asphalt content of surface mixes. Six surface mixes were tested using the 50-blow Marshall design, the 75-blow Marshall design, two brands of SHRP gyratory compactors, and the U....
Käppler, Andrea; Fischer, Marten; Scholz-Böttcher, Barbara M; Oberbeckmann, Sonja; Labrenz, Matthias; Fischer, Dieter; Eichhorn, Klaus-Jochen; Voit, Brigitte
2018-06-16
In recent years, many studies on the analysis of microplastics (MP) in environmental samples have been published. These studies are hardly comparable due to different sampling, sample preparation, as well as identification and quantification techniques. Here, MP identification is one of the crucial pitfalls. Visual identification approaches using morphological criteria alone often lead to significant errors, being especially true for MP fibers. Reliable, chemical structure-based identification methods are indispensable. In this context, the frequently used vibrational spectroscopic techniques but also thermoanalytical methods are established. However, no critical comparison of these fundamentally different approaches has ever been carried out with regard to analyzing MP in environmental samples. In this blind study, we investigated 27 single MP particles and fibers of unknown material isolated from river sediments. Successively micro-attenuated total reflection Fourier transform infrared spectroscopy (μ-ATR-FTIR) and pyrolysis gas chromatography-mass spectrometry (py-GCMS) in combination with thermochemolysis were applied. Both methods differentiated between plastic vs. non-plastic in the same way in 26 cases, with 19 particles and fibers (22 after re-evaluation) identified as the same polymer type. To illustrate the different approaches and emphasize the complementarity of their information content, we exemplarily provide a detailed comparison of four particles and three fibers and a critical discussion of advantages and disadvantages of both methods.
A comparison of food crispness based on the cloud model.
Wang, Minghui; Sun, Yonghai; Hou, Jumin; Wang, Xia; Bai, Xue; Wu, Chunhui; Yu, Libo; Yang, Jie
2018-02-01
The cloud model is a typical model which transforms a qualitative concept into a quantitative description. It has rarely been used in texture studies before. The purpose of this study was to apply the cloud model to food crispness comparison. The acoustic signals of carrots, white radishes, potatoes, Fuji apples, and crystal pears were recorded during compression, and three time-domain signal characteristics were extracted: sound intensity, maximum short-time frame energy, and waveform index. The three signal characteristics and the cloud model were used to compare the crispness of the samples mentioned above. The crispness based on the Ex value of the cloud model, in descending order, was carrot > potato > white radish > Fuji apple > crystal pear. To verify the results of the acoustic signals, mechanical measurement and sensory evaluation were conducted. The results of the two verification experiments confirmed the feasibility of the cloud model. The microstructures of the five samples were also analyzed. The microstructure parameters were negatively correlated with crispness (p < .01). The cloud model method can be used for crispness comparison of different kinds of foods. The method is more accurate than traditional methods such as mechanical measurement and sensory evaluation. The cloud model method can also be applied extensively to other texture studies. © 2017 Wiley Periodicals, Inc.
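The cloud-model characteristics behind the crispness ranking (expectation Ex, entropy En, hyper-entropy He) can be estimated from sample data with the standard backward cloud generator. The feature samples below are invented, not the study's acoustic measurements.

```python
import numpy as np

def backward_cloud(samples):
    """Backward cloud generator (without certainty degrees): estimate the
    expectation Ex, entropy En and hyper-entropy He from sample values."""
    x = np.asarray(samples, dtype=float)
    Ex = x.mean()
    En = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - Ex))
    He = np.sqrt(max(x.var(ddof=1) - En ** 2, 0.0))
    return Ex, En, He

# Hypothetical per-sample features (e.g. sound intensity during compression)
carrot = np.random.normal(78, 4, 30)
pear = np.random.normal(60, 6, 30)
print(backward_cloud(carrot))
print(backward_cloud(pear))
```

Ranking foods by the resulting Ex values mirrors the comparison described in the abstract, with En and He describing the spread and stability of the concept.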
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
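Of the regression models listed, Deming regression has a closed form that is easy to sketch. Here lam is the assumed ratio of the two methods' error variances (1 by default, giving orthogonal regression), and the implementation is a generic sketch rather than the website's code.

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression of method y on method x.
    lam is the assumed ratio of error variances (var_y_error / var_x_error)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - lam * sxx +
             np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical paired results from two analytical methods
print(deming([1.0, 2.1, 3.0, 4.2, 5.1], [1.1, 2.0, 3.2, 4.1, 5.3]))
```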
Boar taint detection: A comparison of three sensory protocols.
Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel
2016-01-01
While recent studies state an important role of human sensory methods for daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), hot-iron (HI)) for boar fat evaluation. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), the performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result compared to individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI. The three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears to be advantageous for boar taint evaluation as compared to microwave and hot-water. Copyright © 2015. Published by Elsevier Ltd.
A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests
SHARON A. CANTRELL
2004-01-01
Most of the fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at each site in the Caribbean National Forest, Puerto Rico and Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 ×…
Communicating Patient Status: Comparison of Teaching Strategies in Prelicensure Nursing Education.
Lanz, Amelia S; Wood, Felecia G
Research indicates that nurses lack adequate preparation for reporting patient status. This study compared 2 instructional methods focused on patient status reporting in the clinical setting using a randomized posttest-only comparison group design. Reporting performance using a standardized communication framework and student perceptions of satisfaction and confidence with learning were measured in a simulated event that followed the instruction. Between the instructional methods, there was no statistical difference in student reporting performance or perceptions of learning. Performance evaluations provided helpful insights for the nurse educator.
ERIC Educational Resources Information Center
Hopwood, Christopher J.; Morey, Leslie C.; Edelen, Maria Orlando; Shea, M. Tracie; Grilo, Carlos M.; Sanislow, Charles A.; McGlashan, Thomas H.; Daversa, Maria T.; Gunderson, John G.; Zanarini, Mary C.; Markowitz, John C.; Skodol, Andrew E.
2008-01-01
Interview methods are widely regarded as the standard for the diagnosis of borderline personality disorder (BPD), whereas self-report methods are considered a time-efficient alternative. However, the relative validity of these methods has not been sufficiently tested. The current study used data from the Collaborative Longitudinal Personality…
Stan T. Lebow; Patricia K. Lebow; Kolby C. Hirth
2017-01-01
Current standardized methods are not well-suited for estimating in-service preservative leaching from treated wood products. This study compared several alternative leaching methods to a commonly used standard method, and to leaching under natural exposure conditions. Small blocks or lumber specimens were pressure treated with a wood preservative containing borax and...
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.
Olstein, Alan; Griffith, Leena; Feirtag, Joellen; Pearson, Nicole
2013-01-01
The Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) is intended as a single-step selective enrichment indicator broth to be used as a simple screening test for the presence of Salmonella spp. in environmental samples. This method permits the end user to avoid multistep sample processing to identify presumptively positive samples, as exemplified by standard U.S. reference methods. PDX-SIB permits the outgrowth of Salmonella while inhibiting the growth of competitive Gram-negative and -positive microflora. Growth of Salmonella-positive cultures results in a visual color change of the medium from purple to yellow when the sample is grown at 37 +/- 1 degree C. Performance of PDX-SIB has been evaluated in five different categories: inclusivity-exclusivity, methods comparison, ruggedness, lot-to-lot variability, and shelf stability. The inclusivity panel included 100 different Salmonella serovars, 98 of which were SIB-positive during the 30 to 48 h incubation period. The exclusivity panel included 33 different non-Salmonella microorganisms, 31 of which were SIB-negative during the incubation period. Methods comparison studies included four different surfaces: S. Newport on plastic, S. Anatum on sealed concrete, S. Abaetetuba on ceramic tile, and S. Typhimurium in the presence of 1 log excess of Citrobacter freundii. Results of the methods comparison studies demonstrated no statistical difference between the SIB method and the U.S. Food and Drug Administration-Bacteriological Analytical Manual reference method, as measured by the Mantel-Haenszel Chi-square test. Ruggedness studies demonstrated little variation in test results when SIB incubation temperatures were varied over a 34-40 degrees C range. Lot-to-lot consistency results suggest no detectable differences in manufactured goods using two reference Salmonella serovars and one non-Salmonella microorganism.
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates the comparison of two CADe systems involving multiple comparisons. To control the Type I error, multiple-comparison correction is needed for keeping the family-wise error rate (FWER) less than a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over other the two methods both in terms of the FWER and power.
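For context, the Bonferroni correction and an (unadjusted) step-up procedure can be written in a few lines; the correlation-adjusted step-up method studied in the paper additionally plugs in estimated correlations between operating points, which is not shown here.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i if p_i <= alpha / m."""
    p = np.asarray(pvals)
    return p <= alpha / len(p)

def hochberg_step_up(pvals, alpha=0.05):
    """Hochberg step-up: find the largest k with p_(k) <= alpha/(m-k+1) among the
    ordered p-values and reject the k hypotheses with the smallest p-values."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)                         # ascending p-values
    thresholds = alpha / (m - np.arange(1, m + 1) + 1)
    passing = np.nonzero(p[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:
        reject[order[:passing.max() + 1]] = True
    return reject

pvals = [0.001, 0.012, 0.021, 0.18]               # hypothetical per-operating-point tests
print(bonferroni(pvals), hochberg_step_up(pvals))
```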
Quasi-experimental evaluation without regression analysis.
Rohrer, James E
2009-01-01
Evaluators of public health programs in field settings cannot always randomize subjects into experimental or control groups. By default, they may choose to employ the weakest study design available: the pretest, posttest approach without a comparison group. This essay argues that natural experiments involving comparison groups are within reach of public health program managers. Methods for analyzing natural experiments are discussed.
ERIC Educational Resources Information Center
Jaffery, Rose; Johnson, Austin H.; Bowler, Mark C.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.; Harrison, Sayward E.
2015-01-01
To date, rater accuracy when using Direct Behavior Rating (DBR) has been evaluated by comparing DBR-derived data to scores yielded through systematic direct observation. The purpose of this study was to evaluate an alternative method for establishing comparison scores using expert-completed DBR alongside best practices in consensus building…
ERIC Educational Resources Information Center
Köksal, Mustafa Serdar
2013-01-01
In this study, comparison of academically advanced science students and gifted students in terms of attitude toward science and motivation toward science learning is aimed. The survey method was used for the data collection by the help of two different instruments: "Attitude Toward Science" scale and "motivation toward science…
Health and Sleep Problems in Cornelia de Lange Syndrome: A Case Control Study
ERIC Educational Resources Information Center
Hall, S. S.; Arron, K.; Sloneem, J.; Oliver, C.
2008-01-01
Background: Self-injury, sleep problems and health problems are commonly reported in Cornelia de Lange Syndrome (CdLS) but there are no comparisons with appropriately matched participants. The relationship between these areas and comparison to a control group is warranted. Method: 54 individuals with CdLS were compared with 46 participants with…
Model comparisons for estimating carbon emissions from North American wildland fire
Nancy H.F. French; William J. de Groot; Liza K. Jenkins; Brendan M. Rogers; Ernesto Alvarado; Brian Amiro; Bernardus De Jong; Scott Goetz; Elizabeth Hoy; Edward Hyer; Robert Keane; B.E. Law; Donald McKenzie; Steven G. McNulty; Roger Ottmar; Diego R. Perez-Salicrup; James Randerson; Kevin M. Robertson; Merritt Turetsky
2011-01-01
Research activities focused on estimating the direct emissions of carbon from wildland fires across North America are reviewed as part of the North American Carbon Program disturbance synthesis. A comparison of methods to estimate the loss of carbon from the terrestrial biosphere to the atmosphere from wildland fires is presented. Published studies on emissions from...
ERIC Educational Resources Information Center
St. Louis, Kenneth O.
2011-01-01
Purpose: The "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was developed to make available worldwide a standard measure of public attitudes toward stuttering that is practical, reliable, valid, and translatable. Mean data from past field studies as comparisons for interpretation of "POSHA-S" results are reported. Method: Means…
Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol; Scalcinati, Gionata; Fagerberg, Linn; Uhlén, Matthias; Nielsen, Jens
2012-01-01
RNA-seq has recently become an attractive method of choice in the studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison based on the results obtained through Affymetrix microarrays. As a case study for our work, we used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we assess the influence of genetic variation on the estimation of gene expression level using three different aligners for read-mapping (Gsnap, Stampy and TopHat) on the S288c genome, the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq), and we explored the consistency between RNA-seq analysis using the reference genome and the de novo assembly approach. High reproducibility among biological replicates (correlation ≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation ≥0.91) are reported. The results from differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data. PMID:22965124
A Comparison of Assessment Methods and Raters in Product Creativity
ERIC Educational Resources Information Center
Lu, Chia-Chen; Luh, Ding-Bang
2012-01-01
Although previous studies have attempted to use different experiences of raters to rate product creativity by adopting the Consensus Assessment Method (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…
Three Interaction Patterns on Asynchronous Online Discussion Behaviours: A Methodological Comparison
ERIC Educational Resources Information Center
Jo, I.; Park, Y.; Lee, H.
2017-01-01
An asynchronous online discussion (AOD) is one format of instructional methods that facilitate student-centered learning. In the wealth of AOD research, this study evaluated how students' behavior on AOD influences their academic outcomes. This case study compared the differential analytic methods including web log mining, social network analysis…
Comparing physiographic maps with different categorisations
NASA Astrophysics Data System (ADS)
Zawadzka, J.; Mayr, T.; Bellamy, P.; Corstanje, R.
2015-02-01
This paper addresses the need for a robust map comparison method suitable for finding similarities between thematic maps with different forms of categorisation. In our case, the requirement was to establish the information content of newly derived physiographic maps with regard to a set of reference maps for a study area in England and Wales. Physiographic maps were derived from the 90 m resolution SRTM DEM, using a suite of existing and new digital landform mapping methods with the overarching purpose of enhancing the physiographic unit component of the Soil and Terrain database (SOTER). Reference maps were seven soil and landscape datasets mapped at scales ranging from 1:200,000 to 1:5,000,000. A review of commonly used statistical methods for categorical comparisons was performed and of these, the Cramer's V statistic was identified as the most appropriate for comparison of maps with different legends. Interpretation of multiple Cramer's V values resulting from one-by-one comparisons of the physiographic and baseline maps was facilitated by multi-dimensional scaling and calculation of average distances between the maps. The method allowed for finding similarities and dissimilarities amongst physiographic maps and baseline maps and informed the recommendation of the most suitable methodology for terrain analysis in the context of soil mapping.
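For reference, Cramer's V for a pair of co-registered categorical maps can be computed from the cross-tabulation of their class labels; the sketch below assumes the maps are supplied as flattened label arrays of equal length, and the test data are illustrative only:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(map_a, map_b):
    """Cramer's V between two co-registered categorical maps given as 1D label arrays."""
    cats_a, cats_b = np.unique(map_a), np.unique(map_b)
    table = np.array([[np.sum((map_a == a) & (map_b == b)) for b in cats_b] for a in cats_a])
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.sum()
    return np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Illustrative use: two maps with partly matching legends of different sizes
a = np.random.default_rng(1).integers(0, 5, 10000)
b = (a + np.random.default_rng(2).integers(0, 2, 10000)) % 7
print(cramers_v(a, b))
```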
Elzanfaly, Eman S; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2015-12-05
A comparative study of two signal processing techniques was established, presenting the theoretical algorithm for each method and comparing them to indicate their advantages and limitations. The methods under study are Numerical Differentiation (ND) and Continuous Wavelet Transform (CWT). These methods were studied as spectrophotometric resolution tools for simultaneous analysis of binary and ternary mixtures. To present the comparison, the two methods were applied for the resolution of Bisoprolol (BIS) and Hydrochlorothiazide (HCT) in their binary mixture and for the analysis of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) as an example for ternary mixtures. By comparing the results in laboratory prepared mixtures, it was proven that the CWT technique is more efficient and advantageous in the analysis of mixtures with severely overlapped spectra than ND. The CWT was applied for quantitative determination of the drugs in their pharmaceutical formulations and validated according to the ICH guidelines where accuracy, precision, repeatability and robustness were found to be within the acceptable limit. Copyright © 2015 Elsevier B.V. All rights reserved.
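As an illustration of the two signal-processing tools being compared, the sketch below applies a first derivative (ND) and a single-scale Mexican-hat CWT to a synthetic two-band spectrum; the band positions, widths and wavelet scale are arbitrary assumptions, not the paper's actual settings:

```python
import numpy as np

wavelengths = np.linspace(200, 400, 401)                           # nm, hypothetical grid
spectrum = (np.exp(-((wavelengths - 270.0) / 15.0) ** 2)           # component 1 band
            + 0.6 * np.exp(-((wavelengths - 285.0) / 20.0) ** 2))  # overlapping component 2 band

# Numerical differentiation (ND): first derivative of the absorbance spectrum
first_derivative = np.gradient(spectrum, wavelengths)

# Continuous wavelet transform at a single scale with a Mexican-hat (Ricker) mother wavelet
def ricker(n_points, scale):
    x = np.arange(n_points) - (n_points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * scale) * np.pi ** 0.25)
    return norm * (1.0 - (x / scale) ** 2) * np.exp(-x ** 2 / (2.0 * scale ** 2))

cwt_coefficients = np.convolve(spectrum, ricker(101, 10.0), mode="same")
# Quantitation would then use zero-crossing or peak amplitudes of these transformed signals.
```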
Møller, M; Wedderkopp, N; Myklebust, G; Lind, M; Sørensen, H; Hebert, J J; Emery, C A; Attermann, J
2018-01-01
The accurate measurement of sport exposure time and injury occurrence is key to effective injury prevention and management. Current measures are limited by their inability to identify all types of sport-related injury, narrow scope of injury information, or lack the perspective of the injured athlete. The aims of the study were to evaluate the proportion of injuries and the agreement between sport exposures reported by the SMS messaging and follow-up telephone part of the SMS, Phone, and medical staff Examination (SPEx) sports injury surveillance system when compared to measures obtained by trained on-field observers and medical staff (comparison method). We followed 24 elite adolescent handball players over 12 consecutive weeks. Eighty-six injury registrations were obtained by the SPEx and comparison methods. Of them, 35 injury registrations (41%) were captured by SPEx only, 10 injury registrations (12%) by the comparison method only, and 41 injury registrations (48%) by both methods. Weekly exposure time differences (95% limits of agreement) between SPEx and the comparison method ranged from -4.2 to 6.3 hours (training) and -1.5 to 1.0 hours (match) with systematic differences being 1.1 hours (95% CI 0.7 to 1.4) and -0.2 (95% CI -0.3 to -0.2), respectively. These results support the ability of the SPEx system to measure training and match exposures and injury occurrence among young athletes. High weekly response proportions (mean 83%) indicate that SMS messaging can be used for player measures of injury consequences beyond time-loss from sport. However, this needs to be further evaluated in large-scale studies. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
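The 95% limits of agreement quoted above are the standard Bland-Altman quantities; a minimal sketch for paired weekly exposure hours (the values are invented, not the SPEx data) is:

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bias and 95% limits of agreement for paired measurements (e.g. weekly training hours)."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

spex_hours     = [6.0, 4.5, 7.0, 5.5, 8.0, 6.5]   # hypothetical SMS-reported exposure
observer_hours = [5.0, 4.0, 6.5, 6.0, 7.0, 6.0]   # hypothetical on-field observer exposure
print(limits_of_agreement(spex_hours, observer_hours))
```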
Integrating evidence-based teaching into clinical practice should improve outcomes.
Richards, Derek
2005-01-01
Sources used were Medline, Embase, the Education Resources Information Centre, Cochrane Controlled Trials Register, Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, Health Technology Assessment database, Best Evidence, Best Evidence Medical Education and Science Citation Index, along with reference lists of known systematic reviews. Studies were chosen for inclusion if they evaluated the effects of postgraduate evidence-based medicine (EBM) or critical appraisal teaching in comparison with a control group or baseline before teaching, using a measure of participants' learning achievements or patients' health gains as outcomes. Articles were graded as either level 1 (randomised controlled trials (RCT)) or level 2 (non-randomised studies that had either a comparison with a control group or a before-and-after comparison without a control group). Learning achievement was assessed separately for knowledge, critical appraisal skills, attitudes and behaviour. Because of obvious heterogeneity in the features of individual studies, their quality and the assessment tools used, a meta-analysis could not be carried out. Conclusions were weighted by methodological quality. Twenty-three relevant studies were identified, comprising four RCT, seven non-RCT, and 12 before-and-after comparison studies. Eighteen studies (including two RCT) evaluated a standalone teaching method and five studies (including two RCT) evaluated a clinically integrated teaching method. Standalone teaching improved knowledge but not skills, attitudes or behaviour. Clinically integrated teaching improved knowledge, skills, attitudes and behaviour. Teaching of EBM should be moved from classrooms to clinical practice to achieve improvements in substantial outcomes.
Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-08-16
To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency was measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-01-01
Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695
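The inconsistency outcome described above is the difference between direct and indirect log odds ratios. As a reminder of how an adjusted indirect estimate is formed through a common comparator (the standard Bucher approach, shown here as a generic sketch with made-up numbers, not the review's data):

```python
import numpy as np

def indirect_log_or(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B through a common comparator C."""
    return log_or_ac - log_or_bc, np.sqrt(se_ac ** 2 + se_bc ** 2)

def inconsistency(log_or_direct, se_direct, log_or_indirect, se_indirect):
    """Difference between direct and indirect estimates and its z score."""
    diff = log_or_direct - log_or_indirect
    se = np.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff, diff / se

ind, se_ind = indirect_log_or(-0.40, 0.15, -0.10, 0.20)   # hypothetical A-vs-C and B-vs-C results
print(inconsistency(-0.55, 0.18, ind, se_ind))            # hypothetical direct A-vs-B result
```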
Crown, William H
2014-02-01
This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
Kolacsek, Orsolya; Pergel, Enikő; Varga, Nóra; Apáti, Ágota; Orbán, Tamás I
2017-01-20
There are numerous applications of quantitative PCR for both diagnostic and basic research. As in many other techniques the basis of quantification is that comparisons are made between different (unknown and known or reference) specimens of the same entity. When the aim is to compare real quantities of different species in samples, one cannot escape their separate precise absolute quantification. We have established a simple and reliable method for this purpose (Ct shift method) which combines the absolute and the relative approach. It requires a plasmid standard containing both sequences of amplicons to be compared (e.g. the target of interest and the endogenous control). It can serve as a reference sample with equal copies of templates for both targets. Using the ΔΔCt formula we can quantify the exact ratio of the two templates in each unknown sample. The Ct shift method has been successfully applied for transposon gene copy measurements, as well as for comparison of different mRNAs in cDNA samples. This study provides the proof of concept and introduces some potential applications of the method; the absolute nature of results even without the need for real reference samples can contribute to the universality of the method and comparability of different studies. Copyright © 2016 Elsevier B.V. All rights reserved.
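A minimal sketch of the ΔΔCt arithmetic described above, with the equal-copy plasmid standard serving as the reference sample; the Ct values and the assumption of 100% PCR efficiency (base 2) are illustrative only:

```python
def ct_shift_ratio(ct_target_sample, ct_control_sample,
                   ct_target_reference, ct_control_reference, efficiency=2.0):
    """Ratio of target to control template in a sample, calibrated against a
    reference (plasmid standard) carrying both amplicons in equal copies."""
    delta_ct_sample = ct_target_sample - ct_control_sample
    delta_ct_reference = ct_target_reference - ct_control_reference
    delta_delta_ct = delta_ct_sample - delta_ct_reference
    return efficiency ** (-delta_delta_ct)

# Target crosses threshold 2 cycles later than the control while the equal-copy
# standard shows no shift, so the target is present at one quarter of the control copies.
print(ct_shift_ratio(26.0, 24.0, 20.0, 20.0))   # -> 0.25
```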
NASA Technical Reports Server (NTRS)
Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.
1993-01-01
The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
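At a single concentration level, the POD estimate is simply the fraction of test portions giving a positive result; the sketch below adds a Wilson score interval (the particular interval is our assumption, not necessarily the one prescribed by the model), and plotting these per-level estimates against concentration gives the response curve referred to above:

```python
import numpy as np

def pod_with_wilson_ci(detections, trials, z=1.96):
    """Per-level probability of detection with an approximate 95% Wilson score interval."""
    p = detections / trials
    denom = 1.0 + z ** 2 / trials
    center = (p + z ** 2 / (2.0 * trials)) / denom
    half = z * np.sqrt(p * (1.0 - p) / trials + z ** 2 / (4.0 * trials ** 2)) / denom
    return p, center - half, center + half

# Hypothetical collaborative-study level: 10 positives out of 12 test portions
print(pod_with_wilson_ci(10, 12))
```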
Vertebral rotation measurement: a summary and comparison of common radiographic and CT methods
Lam, Gabrielle C; Hill, Doug L; Le, Lawrence H; Raso, Jim V; Lou, Edmond H
2008-01-01
Current research has provided a more comprehensive understanding of Adolescent Idiopathic Scoliosis (AIS) as a three-dimensional spinal deformity, encompassing both lateral and rotational components. Apart from quantifying curve severity using the Cobb angle, vertebral rotation has become increasingly prominent in the study of scoliosis. It demonstrates significance in both preoperative and postoperative assessment, providing better appreciation of the impact of bracing or surgical interventions. In the past, the need for computer resources, digitizers and custom software limited studies of rotation to research performed after a patient left the scoliosis clinic. With advanced technology, however, rotation measurements are now more feasible. While numerous vertebral rotation measurement methods have been developed and tested, thorough comparisons of these are still relatively unexplored. This review discusses the advantages and disadvantages of six common measurement techniques based on technology most pertinent in clinical settings: radiography (Cobb, Nash-Moe, Perdriolle and Stokes' method) and computer tomography (CT) imaging (Aaro-Dahlborn and Ho's method). Better insight into the clinical suitability of rotation measurement methods currently available is presented, along with a discussion of critical concerns that should be addressed in future studies and development of new methods. PMID:18976498
Comparison of bulk sediment and sediment elutriate toxicity testing methods
Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...
Vavalle, Nicholas A; Jelen, Benjamin C; Moreno, Daniel P; Stitzel, Joel D; Gayzik, F Scott
2013-01-01
Objective evaluation methods of time history signals are used to quantify how well simulated human body responses match experimental data. As the use of simulations grows in the field of biomechanics, there is a need to establish standard approaches for comparisons. There are 2 aims of this study. The first is to apply 3 objective evaluation methods found in the literature to a set of data from a human body finite element model. The second is to compare the results of each method, examining how they are correlated to each other and the relative strengths and weaknesses of the algorithms. In this study, the methods proposed by Sprague and Geers (magnitude and phase error, SGM and SGP), Rhule et al. (cumulative standard deviation, CSD), and Gehre et al. (CORrelation and Analysis, or CORA, size, phase, shape, corridor) were compared. A 40 kph frontal sled test presented by Shaw et al. was simulated using the Global Human Body Models Consortium midsized male full-body finite element model (v. 3.5). Mean and standard deviation experimental data (n = 5) from Shaw et al. were used as the benchmark. Simulated data were output from the model at the appropriate anatomical locations for kinematic comparison. Force data were output at the seat belts, seat pan, knee, and foot restraints. Objective comparisons from 53 time history data channels were compared to the experimental results. To compare the different methods, all objective comparison metrics were cross-plotted and linear regressions were calculated. The following ratings were found to be statistically significantly correlated (P < .01): SGM and CORrelation and Analysis (CORA) size, R² = 0.73; SGP and CORA shape, R² = 0.82; and CSD and CORA's corridor factor, R² = 0.59. Relative strengths of the correlated ratings were then investigated. For example, though correlated to CORA size, SGM carries a sign to indicate whether the simulated response is greater than or less than the benchmark signal. A further analysis of the advantages and drawbacks of each method is discussed. The results demonstrate that a single metric is insufficient to provide a complete assessment of how well the simulated results match the experiments. The CORA method provided the most comprehensive evaluation of the signal. Regardless of the method selected, one primary recommendation of this work is that for any comparison, the results should be reported to provide separate assessments of a signal's match to experimental variance, magnitude, phase, and shape. Future work planned includes implementing any forthcoming International Organization for Standardization standards for objective evaluations. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
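For reference, the Sprague and Geers magnitude and phase factors used above can be computed directly from two time histories sampled on a common time base; this generic sketch is not the CORA or CSD implementation, and the test signals are invented:

```python
import numpy as np

def sprague_geers(benchmark, simulation):
    """Sprague & Geers magnitude (M), phase (P) and combined (C) error factors."""
    x = np.asarray(benchmark, float)
    y = np.asarray(simulation, float)
    pxx, pyy, pxy = np.mean(x * x), np.mean(y * y), np.mean(x * y)
    magnitude = np.sqrt(pyy / pxx) - 1.0
    phase = np.arccos(np.clip(pxy / np.sqrt(pxx * pyy), -1.0, 1.0)) / np.pi
    return magnitude, phase, np.sqrt(magnitude ** 2 + phase ** 2)

t = np.linspace(0.0, 0.1, 500)                       # hypothetical 100 ms pulse
experiment = np.sin(2 * np.pi * 30 * t)
model = 1.1 * np.sin(2 * np.pi * 30 * t - 0.2)       # 10% larger amplitude, slight delay
print(sprague_geers(experiment, model))
```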
ERIC Educational Resources Information Center
Noblitt, Lynnette; Vance, Diane E.; Smith, Michelle L. DePoy
2010-01-01
This study compares a traditional paper presentation approach and a case study method for the development and improvement of oral communication skills and critical-thinking skills in a class of junior forensic science majors. A rubric for rating performance in these skills was designed on the basis of the oral communication competencies developed…
ERIC Educational Resources Information Center
Stevens, Olinger; Leigh, Erika
2012-01-01
Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…
Comparison of Floseal® and electrocautery in hemostasis after total knee arthroplasty
Helito, Camilo Partezani; Gobbi, Riccardo Gomes; Castrillon, Lucas Machado; Hinkel, Betina Bremer; Pécora, José Ricardo; Camanho, Gilberto Luis
2013-01-01
Objective To evaluate whether hemostasis with electrocauterization in comparison with Floseal® leads to different bleeding rates during total knee arthroplasty. Methods A comparative study was performed between two groups: a group of ten consecutive total knee arthroplasties with Floseal® used as the hemostatic method and a control group of ten consecutive total knee arthroplasties with electrocauterization as the hemostatic method. Bleeding parameters such as drain output, liquid infusion and blood transfusion rate were recorded. Results The Floseal® group received less blood transfusion, less liquid infusion and had lower drainage in absolute numbers compared to the control group. However, no parameter was statistically significant. Conclusion Hemostasis with Floseal® is as effective as hemostasis with electrocauterization, which makes it a viable alternative for patients with contraindication to electric scalpel use. Level of Evidence II, Prospective Comparative Study. PMID:24453689
Roldan-Valadez, Ernesto; Garcia-Ulloa, Ana Cristina; Gonzalez-Gutierrez, Omar; Martinez-Lopez, Manuel
2011-01-01
Computer-assisted three-dimensional (3D) data allows for an accurate evaluation of volumes compared with traditional measurements. An in vitro method comparison between geometric volume and 3D volumetry was performed to obtain reference data for pituitary volumes in normal pituitary glands (PGs) and PGs containing adenomas. Prospective, transverse, analytical study. Forty-eight subjects underwent brain magnetic resonance imaging (MRI) with 3D sequencing for computer-aided volumetry. PG phantom volumes by both methods were compared. Using the best volumetric method, volumes of normal PGs and PGs with adenoma were compared. Statistical analysis used the Bland-Altman method, t-statistics, effect size and linear regression analysis. Method comparison between 3D volumetry and geometric volume revealed a lower bias and precision for 3D volumetry. A total of 27 patients exhibited normal PGs (mean age, 42.07 ± 16.17 years), although length, height, width, geometric volume and 3D volumetry were greater in women than in men. A total of 21 patients exhibited adenomas (mean age 39.62 ± 10.79 years), and length, height, width, geometric volume and 3D volumetry were greater in men than in women, with significant volumetric differences. Age did not influence pituitary volumes on linear regression analysis. Results from the present study showed that 3D volumetry was more accurate than the geometric method. In addition, the upper normal limits of PGs overlapped with lower volume limits during early stage microadenomas.
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Ito, Atsuo; Sogo, Yu; Yamazaki, Atsushi; Aizawa, Mamoru; Osaka, Akiyoshi; Hayakawa, Satoshi; Kikuchi, Masanori; Yamashita, Kimihiro; Tanaka, Yumi; Tadokoro, Mika; de Sena, Lídia Ágata; Buchanan, Fraser; Ohgushi, Hajime; Bohner, Marc
2015-10-01
A potential standard method for measuring the relative dissolution rate to estimate the resorbability of calcium-phosphate-based ceramics is proposed. Tricalcium phosphate (TCP), magnesium-substituted TCP (MgTCP) and zinc-substituted TCP (ZnTCP) were dissolved in a buffer solution free of calcium and phosphate ions at pH 4.0, 5.5 or 7.3 at nine research centers. Relative values of the initial dissolution rate (relative dissolution rates) were in good agreement among the centers. The relative dissolution rate coincided with the relative volume of resorption pits of ZnTCP in vitro. The relative dissolution rate coincided with the relative resorbed volume in vivo in the case of comparison between microporous MgTCPs with different Mg contents and similar porosity. However, the relative dissolution rate was in poor agreement with the relative resorbed volume in vivo in the case of comparison between microporous TCP and MgTCP due to the superimposition of the Mg-mediated decrease in TCP solubility on the Mg-mediated increase in the amount of resorption. An unambiguous conclusion could not be made as to whether the relative dissolution rate is predictive of the relative resorbed volume in vivo in the case of comparison between TCPs with different porosity. The relative dissolution rate may be useful for predicting the relative amount of resorption for calcium-phosphate-based ceramics having different solubility under the condition that the differences in the materials compared have little impact on the resorption process such as the number and activity of resorbing cells. The evaluation and subsequent optimization of the resorbability of calcium phosphate are crucial in the use of resorbable calcium phosphates. Although the resorbability of calcium phosphates has usually been evaluated in vivo, establishment of a standard in vitro method that can predict in vivo resorption is beneficial for accelerating development and commercialization of new resorbable calcium phosphate materials as well as reducing use of animals. However, there are only a few studies to propose such an in vitro method within which direct comparison was carried out between in vitro and in vivo resorption. We propose here an in vitro method based on measuring dissolution rate. The efficacy and limitations of the method were evaluated by international round-robin tests as well as comparison with in vivo resorption studies for future standardization. This study was carried out as one of Versailles Projects on Advanced Materials and Standards (VAMAS). Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
A Comparison of Two Flashcard Drill Methods Targeting Word Recognition
ERIC Educational Resources Information Center
Volpe, Robert J.; Mule, Christina M.; Briesch, Amy M.; Joseph, Laurice M.; Burns, Matthew K.
2011-01-01
Traditional drill and practice (TD) and incremental rehearsal (IR) are two flashcard drill instructional methods previously noted to improve word recognition. The current study sought to compare the effectiveness and efficiency of these two methods, as assessed by next day retention assessments, under 2 conditions (i.e., opportunities to respond…
Comparison of English Language Rhythm and Kalhori Kurdish Language Rhythm
ERIC Educational Resources Information Center
Taghva, Nafiseh; Zadeh, Vahideh Abolhasani
2016-01-01
The interval-based method is a method of studying the rhythmic quantitative features of languages. This method uses the Pairwise Variability Index (PVI) to consider the variability of vocalic duration and inter-vocalic duration of sentences, which leads to the classification of language rhythm into stress-timed and syllable-timed languages. This study…
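In its normalized form, the PVI mentioned above is the mean pairwise difference of successive interval durations scaled by their local mean; a short sketch with invented vocalic durations:

```python
import numpy as np

def npvi(durations_ms):
    """Normalized Pairwise Variability Index of successive interval durations."""
    d = np.asarray(durations_ms, float)
    pair_terms = np.abs(d[1:] - d[:-1]) / ((d[1:] + d[:-1]) / 2.0)
    return 100.0 * pair_terms.mean()

# Hypothetical vocalic interval durations (ms); higher values are typical of stress-timed rhythm
print(npvi([80, 140, 60, 150, 70]))
```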
Analyzing Empirical Evaluations of Non-Experimental Methods in Field Settings
ERIC Educational Resources Information Center
Steiner, Peter M.; Wong, Vivian
2016-01-01
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Examining Classification Criteria: A Comparison of Three Cut Score Methods
ERIC Educational Resources Information Center
DiStefano, Christine; Morgan, Grant
2011-01-01
This study compared 3 different methods of creating cut scores for a screening instrument, T scores, receiver operating characteristic curve (ROC) analysis, and the Rasch rating scale method (RSM), for use with the Behavioral and Emotional Screening System (BESS) Teacher Rating Scale for Children and Adolescents (Kamphaus & Reynolds, 2007).…
ERIC Educational Resources Information Center
Roid, Gale; And Others
Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…
A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention
ERIC Educational Resources Information Center
Koh, Seong A.
2010-01-01
The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…
A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen
2012-01-01
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiao; Gao, Wenzhong; Wang, Jianhui
The frequency regulation capability of a wind power plant plays an important role in enhancing frequency reliability, especially in an isolated power system with high wind power penetration levels. A comparison of two types of inertial control methods, namely frequency-based inertial control (FBIC) and stepwise inertial control (SIC), is presented in this paper. Comprehensive case studies are carried out to reveal features of the different inertial control methods, simulated in a modified Western Systems Coordinating Council (WSCC) nine-bus power grid using a real-time digital simulator (RTDS) platform. The simulation results provide an insight into the inertial control methods under various scenarios.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study about its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in depth. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
A Comparison of Methods for Decoupling Tongue and Lower Lip from Jaw Movements in 3D Articulography
ERIC Educational Resources Information Center
Henriques, Rafael Neto; van Lieshout, Pascal
2013-01-01
Purpose: One popular method to study the motion of oral articulators is 3D electromagnetic articulography. For many studies, it is important to use an algorithm to decouple the motion of the tongue and the lower lip from the motion of the mandible. In this article, the authors describe and compare 4 methods for decoupling jaw motion by using 3D…
USDA-ARS?s Scientific Manuscript database
This study compared the BAX Polymerase Chain Reaction method (BAX PCR) with the Standard Culture Method (SCM) for detection of L. monocytogenes in blue crab meat and crab processing plants. The aim of this study was to address this data gap. Raw crabs, finished products and environmental sponge samp...
An analysis of methods for the selection of trees from wild stands
F. Thomas Ledig
1976-01-01
The commonly applied comparison-tree method of selection is analyzed as a form of within-family selection. If environmental variation among comparison- and select-tree groups, c², is a relatively small proportion (17 percent or less with 5 comparison trees) of the total variation, comparison-tree selection will result in less...
ERIC Educational Resources Information Center
Meyers, Coby V.; Wan, Yinmei
2016-01-01
The Regional Educational Laboratory Northeast and Islands conducted this study using data on public high schools in Puerto Rico from national and territory databases to compare methods for identifying beating-the-odds schools. Schools were identified by two methods, a status method that ranked high-poverty schools based on their current observed…
Rechenchoski, Daniele Zendrini; Dambrozio, Angélica Marim Lopes; Vivan, Ana Carolina Polano; Schuroff, Paulo Alfonso; Burgos, Tatiane das Neves; Pelisson, Marsileni; Perugini, Marcia Regina Eches; Vespero, Eliana Carolina
The production of KPC (Klebsiella pneumoniae carbapenemase) is the major mechanism of resistance to carbapenem agents in enterobacteria. In this context, forty KPC-producing Enterobacter spp. clinical isolates were studied. The activity of the antimicrobial agents polymyxin B, tigecycline, ertapenem, imipenem and meropenem was evaluated, and a comparison was performed of the methodologies used to determine susceptibility: broth microdilution, Etest® (bioMérieux), Vitek 2® automated system (bioMérieux) and disc diffusion. The minimum inhibitory concentration (MIC) was calculated for each antimicrobial, and polymyxin B showed the lowest concentrations for broth microdilution. Errors were also calculated among the techniques; tigecycline and ertapenem were the antibiotics with the largest and the lowest number of discrepancies, respectively. Moreover, the Vitek 2® automated system was the method most similar to broth microdilution. Therefore, it is important to evaluate the performance of new methods in comparison to the reference method, broth microdilution. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
Modification of a successive corrections objective analysis for improved higher order calculations
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1988-01-01
The use of objectively analyzed fields of meteorological data for the initialization of numerical prediction models and for complex diagnostic studies places the requirements upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification was proposed for an objective analysis developed by Barnes that provides improvements in analysis of both the field and its derivatives. Theoretical comparisons, comparisons between analyses of analytical monochromatic waves, and comparisons between analyses of actual weather data are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacian of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
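A generic two-pass successive-corrections (Barnes-type) analysis with Gaussian weights is sketched below to illustrate the scheme being modified; the weight function and the κ and γ values are textbook-style defaults, not the paper's modified parameters. Derivatives such as the Laplacian discussed above would then be taken by finite differences on the gridded output.

```python
import numpy as np

def barnes_analysis(obs_xy, obs_val, grid_xy, kappa, gamma=0.3, passes=2):
    """Successive-corrections objective analysis with Gaussian weights w = exp(-r^2 / kappa);
    the correction pass uses a sharpened length scale gamma * kappa."""
    obs_xy = np.asarray(obs_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    obs_val = np.asarray(obs_val, float)

    def weighted_mean(points, values, k):
        d2 = ((points[:, None, :] - obs_xy[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / k)
        return (w * values).sum(axis=1) / w.sum(axis=1)

    grid = weighted_mean(grid_xy, obs_val, kappa)     # first-pass field on the grid
    at_obs = weighted_mean(obs_xy, obs_val, kappa)    # first-pass field at the observations
    for _ in range(passes - 1):
        residual = obs_val - at_obs
        grid = grid + weighted_mean(grid_xy, residual, gamma * kappa)
        at_obs = at_obs + weighted_mean(obs_xy, residual, gamma * kappa)
    return grid
```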
A COMPARISON OF TWO RAPID BIOLOGICAL ASSESSMENT SAMPLING METHODS FOR MACROINVERTEBRATES
In 2003, the Office of Research and Developments (ORD's) National Exposure Research Laboratory initiated a collaborative research effort with U.S. EPA Region 3 to conduct a study comparing two rapid biological assessment methods for collecting stream macroinvertebrates. One metho...
A comparison of radiometric normalization methods when filling cloud gaps in Landsat imagery.
E. H. Helmer
2007-01-01
Mapping persistently cloudy tropical landscapes with optical satellite imagery usually requires assembling the clear imagery from several dates. This study compares methods for normalizing image data when filling cloud gaps in Landsat imagery with imagery from other dates.
WOODSTOVE EMISSION MEASUREMENT METHODS COMPARISON AND EMISSION FACTORS UPDATE
This paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the a...
An interlaboratory comparison of sediment elutriate preparation and toxicity test methods
Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...
AUPress: A Comparison of an Open Access University Press with Traditional Presses
ERIC Educational Resources Information Center
McGreal, Rory; Chen, Nian-Shing
2011-01-01
This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way…
ERIC Educational Resources Information Center
Bader, Shannon M.; Scalora, Mario J.; Casady, Thomas K.; Black, Shannon
2008-01-01
Objective: The current study compared a sample of female perpetrators reported to Child Protective Services (CPS) to a sample of women from the criminal justice system. Instead of examining a clinical or criminal justice sample in isolation, this comparison allows a more accurate description of female sexual offending. Methods: Cases were drawn…
ERIC Educational Resources Information Center
Sebre, Sandra; Sprugevica, Ieva; Novotni, Antoni; Bonevski, Dimitar; Pakalniskiene, Vilmante; Popescu, Daniela; Turchina, Tatiana; Friedrich, William; Lewis, Owen
2004-01-01
Objectives: This study was designed to assess the incidence of child emotional and physical abuse, associated risk factors and psychosocial symptoms in a cross-cultural comparison between post-communist bloc countries. Method: One-thousand one-hundred forty-five children ages 10-14 from Latvia (N=297), Lithuania (N=300), Macedonia (N=302), and…
ERIC Educational Resources Information Center
Lindberg, Lene; Fransson, Mari; Forslund, Tommie; Springer, Lydia; Granqvist, Pehr
2017-01-01
Background: Scientific knowledge on the quality of caregiving/maternal sensitivity among mothers with mild intellectual disabilities (ID) is limited and subject to many methodological shortcomings, but seems to suggest that these mothers are less sensitive than mothers without intellectual disabilities. Methods: In this matched-comparison study…
Trajectories of Early Brain Volume Development in Fragile X Syndrome and Autism
ERIC Educational Resources Information Center
Hazlett, Heather Cody; Poe, Michele D.; Lightbody, Amy A.; Styner, Martin; MacFall, James R.; Reiss, Allan L.; Piven, Joseph
2012-01-01
Objective: To examine patterns of early brain growth in young children with fragile X syndrome (FXS) compared with a comparison group (controls) and a group with idiopathic autism. Method: The study included 53 boys 18 to 42 months of age with FXS, 68 boys with idiopathic autism (autism spectrum disorder), and a comparison group of 50 typically…
ERIC Educational Resources Information Center
Abbas, Andrea; McLean, Monica
2007-01-01
Systems designed to ensure that teaching and student learning are of a suitable quality are a feature of universities globally. Quality assurance systems are central to attempts to internationalise higher education, motivated in part by a concern for greater global equality. Yet, if such systems incorporate comparisons, the tendency is to reflect…
Phiri, Sam; Rothenbacher, Dietrich; Neuhann, Florian
2015-01-01
Background Chronic kidney disease (CKD) is a probably underrated public health problem in Sub-Saharan-Africa, in particular in combination with HIV-infection. Knowledge about the CKD prevalence is scarce and in the available literature different methods to classify CKD are used impeding comparison and general prevalence estimates. Methods This study assessed different serum-creatinine based equations for glomerular filtration rates (eGFR) and compared them to a cystatin C based equation. The study was conducted in Lilongwe, Malawi enrolling a population of 363 adults of which 32% were HIV-positive. Results Comparison of formulae based on Bland-Altman-plots and accuracy revealed best performance for the CKD-EPI equation without the correction factor for black Americans. Analyzing the differences between HIV-positive and –negative individuals CKD-EPI systematically overestimated eGFR in comparison to cystatin C and therefore lead to underestimation of CKD in HIV-positives. Conclusions Our findings underline the importance for standardization of eGFR calculation in a Sub-Saharan African setting, to further investigate the differences with regard to HIV status and to develop potential correction factors as established for age and sex. PMID:26083345
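For context, the CKD-EPI 2009 creatinine equation highlighted above has the following closed form; the coefficients are those of the published equation, and the example values are invented:

```python
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    """CKD-EPI 2009 creatinine eGFR in mL/min/1.73 m^2 (set black=False to omit the race
    coefficient, as in the comparison reported above)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

print(round(ckd_epi_2009(1.1, 45, female=True), 1))   # hypothetical patient
```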
A Comparison of Ultrasound Tomography Methods in Circular Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leach, R R; Azevedo, S G; Berryman, J G
2002-01-24
Extremely high quality data was acquired using an experimental ultrasound scanner developed at Lawrence Livermore National Laboratory using a 2D ring geometry with up to 720 transmitter/receiver transducer positions. This unique geometry allows reflection and transmission modes and transmission imaging and quantification of a 3D volume using 2D slice data. Standard image reconstruction methods were applied to the data including straight-ray filtered back projection, reflection tomography, and diffraction tomography. Newer approaches were also tested such as full wave, full wave adjoint method, bent-ray filtered back projection, and full-aperture tomography. A variety of data sets were collected including a formalin-fixed human breast tissue sample, a commercial ultrasound complex breast phantom, and cylindrical objects with and without inclusions. The resulting reconstruction quality of the images ranges from poor to excellent. The method and results of this study are described including like-data reconstructions produced by different algorithms with side-by-side image comparisons. Comparisons to medical B-scan and x-ray CT scan images are also shown. Reconstruction methods with respect to image quality using resolution, noise, and quantitative accuracy, and computational efficiency metrics will also be discussed.
How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods
Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José
2015-01-01
The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
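Both measurement approaches reduce to a fraction of a reference area; a minimal sketch on a binary mask of the immunostained smooth muscle (the grid spacing and the synthetic mask are illustrative assumptions):

```python
import numpy as np

def surface_density_point_counting(mask, step=20):
    """Point-counting estimate: fraction of regularly spaced test points on the structure."""
    grid_points = mask[::step, ::step]
    return grid_points.sum() / grid_points.size

def surface_density_segmentation(mask):
    """Segmentation-based equivalent: fraction of all pixels labelled as structure."""
    return mask.sum() / mask.size

mask = np.random.default_rng(0).random((1000, 1000)) < 0.35   # hypothetical 35% smooth muscle
print(surface_density_point_counting(mask), surface_density_segmentation(mask))
```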
Mickenautsch, Steffen; Yengopal, Veerasamy
2013-01-01
Background Naïve-indirect comparisons are comparisons between competing clinical interventions' evidence from separate (uncontrolled) trials. Direct comparisons are comparisons within randomised control trials (RCTs). The objective of this empirical study is to test the null-hypothesis that trends and performance differences inferred from naïve-indirect comparisons and from direct comparisons/RCTs regarding the failure rates of amalgam and direct high-viscosity glass-ionomer cement (HVGIC) restorations in permanent posterior teeth have similar direction and magnitude. Methods A total of 896 citations were identified through systematic literature search. From these, ten and two uncontrolled clinical longitudinal studies for HVGIC and amalgam, respectively, were included for naïve-indirect comparison and could be matched with three out of twenty RCTs. Summary effect sizes were computed as odds ratios (OR; 95% confidence intervals) and compared with those from RCTs. Trend directions were inferred from 95% confidence interval overlaps and direction of point estimates; magnitudes of performance differences were inferred from the median point estimates (OR) with 25% and 75% percentile range, for both types of comparison. The Mann-Whitney U test was applied to test for statistically significant differences between point estimates of both comparison types. Results Trends and performance differences inferred from naïve-indirect comparison based on evidence from uncontrolled clinical longitudinal studies and from direct comparisons based on RCT evidence are not the same. The distributions of the point estimates differed significantly for both comparison types (Mann-Whitney U = 25, n(indirect) = 26, n(direct) = 8; p = 0.0013, two-tailed). Conclusion The null-hypothesis was rejected. Trends and performance differences inferred from either comparison between HVGIC and amalgam restoration failure rates in permanent posterior teeth are not the same. It is recommended that clinical practice guidance regarding HVGICs should rest on direct comparisons via RCTs and not on naïve-indirect comparisons based on uncontrolled longitudinal studies in order to avoid inflation of effect estimates. PMID:24205220
A Comparison of Methods to Test for Mediation in Multisite Experiments
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Stapleton, Laura M.
2005-01-01
A Monte Carlo study extended the research of MacKinnon, Lockwood, Hoffman, West, and Sheets (2002) for single-level designs by examining the statistical performance of four methods to test for mediation in a multilevel experimental design. The design studied was a two-group experiment that was replicated across several sites, included a single…
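As a reminder of the quantity being tested, the indirect (mediated) effect is the product of the treatment-to-mediator path (a) and the mediator-to-outcome path (b); the sketch below shows only the simplest variant (a first-order Sobel test), which is one of several approaches examined in this literature, with made-up path estimates:

```python
import numpy as np
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Indirect effect a*b with a first-order Sobel standard error, z statistic and p value."""
    indirect = a * b
    se = np.sqrt(a ** 2 * se_b ** 2 + b ** 2 * se_a ** 2)
    z = indirect / se
    return indirect, z, 2 * (1 - norm.cdf(abs(z)))

print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))   # hypothetical path coefficients
```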
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
Partial volume effect is an artifact mainly due to the limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially for Alzheimer's disease study where there is serious gray matter atrophy, accurate estimate of cerebral metabolic rate of glucose is even more problematic due to large amount of partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1)MRI segmentation, (2)MR-PET registration, (3)MR based PVE correction, (4)MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at risk and disease patient groups and this framework is a promising tool useful for assisting early identification of Alzheimer's patients.
A comparison study: image-based vs signal-based retrospective gating on microCT
NASA Astrophysics Data System (ADS)
Liu, Xuan; Salmon, Phil L.; Laperre, Kjell; Sasov, Alexander
2017-09-01
Retrospective gating on animal studies with microCT has gained popularity in recent years. Previously, we used ECG signals for cardiac gating and breathing airflow or video signals of abdominal motion for respiratory gating. This method is adequate and works well for most applications. However, through the years, researchers have noticed some pitfalls in the method. For example, the additional signal acquisition step may increase the failure rate in practice. X-ray image-based gating, on the other hand, does not require any extra step in the scanning. Therefore we investigate image-based gating techniques. This paper presents a comparison study of the image-based versus signal-based approach to retrospective gating. The two application areas we have studied are respiratory and cardiac imaging for both rats and mice. Image-based respiratory gating on microCT is relatively straightforward and has been done by several other researchers and groups. This method retrieves an intensity curve of a region of interest (ROI) placed in the lung area on all projections. From scans on our systems based on step-and-shoot scanning mode, we confirm that this method is very effective. A detailed comparison between image-based and signal-based gating methods is given. For cardiac gating, breathing motion is not negligible and has to be dealt with. Another difficulty in cardiac gating is the relatively smaller amplitude of cardiac movements compared to the respiratory movements, and the higher heart rate. A higher heart rate requires high-speed image acquisition. We have been working on our systems to improve the acquisition speed. A dual gating technique has been developed to achieve adequate cardiac imaging.
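A minimal sketch of the image-based respiratory gating signal described above: the mean intensity of a lung-area ROI is extracted from every projection and normalized, after which projections can be binned by breathing phase. The array shapes and the ROI location are illustrative assumptions, not the scanner's actual geometry:

```python
import numpy as np

def roi_gating_signal(projections, row_slice, col_slice):
    """Mean intensity of a fixed ROI on each projection, returned as a zero-mean, unit-variance curve."""
    roi_means = projections[:, row_slice, col_slice].mean(axis=(1, 2))
    return (roi_means - roi_means.mean()) / roi_means.std()

# Hypothetical stack of 900 projections of 512x512 pixels with an ROI over the lung region
projections = np.random.default_rng(0).random((900, 512, 512))
signal = roi_gating_signal(projections, slice(200, 300), slice(150, 250))
# Local maxima/minima of this curve would then define the respiratory phase bins.
```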
Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando
2018-05-01
The conventional way to estimate functional networks is primarily based on Pearson correlation along with classic Fisher Z test. In general, networks are usually calculated at the individual-level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical-lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and between-group difference, it is possible to address inter-subject variability of estimated individual networks inherently related with existing methods such as Fisher Z test, and issues related to JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, as well as to compare with JGMSS and Fisher Z test, they are applied to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
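As a plain stand-in for the ideas above (and emphatically not the JGMSS or GMGLASS estimators), sparse partial-correlation networks can be estimated per group with an ordinary graphical lasso and then differenced; the regularization value and data shapes are assumptions:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def group_network_difference(ts_group1, ts_group2, alpha=0.05):
    """Estimate a sparse precision (inverse covariance) matrix per group from
    time-by-region data (samples x regions) and return both plus their difference."""
    def precision(ts):
        ts = (ts - ts.mean(0)) / ts.std(0)            # standardize each region's time series
        return GraphicalLasso(alpha=alpha).fit(ts).precision_
    p1, p2 = precision(ts_group1), precision(ts_group2)
    return p1, p2, p1 - p2

rng = np.random.default_rng(0)
controls = rng.normal(size=(200, 10))                 # hypothetical: 200 time points, 10 regions
patients = rng.normal(size=(200, 10))
_, _, diff = group_network_difference(controls, patients)
```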
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated-)errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
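For readers unfamiliar with the two frameworks being reconciled, the sketch below computes classical Bland-Altman 95% agreement limits and a standard Deming (errors-in-variables) regression on simulated paired measurements. It does not reproduce the paper's correlated-errors estimator, tolerance intervals, or predictive intervals, and the error-variance ratio is assumed known.

    # Sketch of the two frameworks the paper reconciles: Bland-Altman agreement
    # limits and a simple errors-in-variables (Deming) regression.
    import numpy as np

    def bland_altman(x, y):
        diff = y - x
        mean = (x + y) / 2.0
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return mean, diff, bias, (bias - half_width, bias + half_width)   # 95% agreement interval

    def deming(x, y, error_ratio=1.0):
        """Errors-in-variables slope/intercept; error_ratio = var(err_y)/var(err_x)."""
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - error_ratio * sxx +
                 np.sqrt((syy - error_ratio * sxx) ** 2 + 4 * error_ratio * sxy ** 2)) / (2 * sxy)
        return slope, y.mean() - slope * x.mean()

    rng = np.random.default_rng(1)
    truth = rng.uniform(10, 50, 100)              # latent "true" values
    method_x = truth + rng.normal(0, 2, 100)      # measurements by method X
    method_y = truth + rng.normal(0, 2, 100)      # measurements by method Y
    _, _, bias, interval = bland_altman(method_x, method_y)
    slope, intercept = deming(method_x, method_y)
    print(bias, interval, slope, intercept)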
NASA Astrophysics Data System (ADS)
Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.
2018-01-01
Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains into types (A-E) based on the PDF orientation pattern, and then calculating a mean shock pressure for each sample.
Dutta, Sarmistha; Das, Swarnamoni
2010-01-01
Introduction: The aim is to study the anti-inflammatory effect of the ethanolic extract of the leaves of Psidium guajava (PGE) on experimental animal models. Materials and Methods: Fresh leaves were collected, air-dried, powdered, and percolated in 95% ethanol. An acute toxicity test was done according to OECD guidelines. Four groups of animals of either sex, weighing 150–200 g, of the species Rattus norvegicus were taken for the study (n = 6). Group A was taken as control (3% gum acacia in 10 mL/kg body weight), Group B as test group (PGE 250 mg/kg body weight), Group C as test group (PGE 500 mg/kg body weight), and Group D as standard (Aspirin 100 mg/kg body weight). The animals were studied for acute inflammation by Carrageenan-induced rat paw edema, subacute inflammation by the Granuloma pouch method, and chronic inflammation by Freund’s adjuvant-induced arthritis method. Statistical analysis was done by one-way analysis of variance followed by multiple comparison tests. Results: In acute inflammation, there was significant inhibition of paw edema in Groups B, C, and D in comparison with Group A (P < 0.05). In subacute inflammation, there was significant inhibition of exudate formation in Groups B, C, and D in comparison to Group A (P < 0.05). In chronic inflammation, there was significant inhibition of paw edema and inhibition of weight reduction in Groups B, C, and D compared with Group A. Downregulation of the arthritis index was also significant in Groups B, C, and D in comparison with Group A (P < 0.05). Conclusion: The ethanolic extract of PGE has significant anti-inflammatory activity. PMID:21589759
A Comparison of Empirical and Intelligent Methods for Dust Detection Using MODIS Satellite Data
NASA Astrophysics Data System (ADS)
Shahrisvand, M.; Akhoondzadeh, M.
2013-09-01
Dust storms are among the most important natural hazards and are considered a national concern in scientific communities. This paper considers the capabilities of some classical and intelligent methods for dust detection from satellite imagery around the Middle East region. In dust detection studies, MODIS images have been a good candidate due to their suitable spectral and temporal resolution. In this study, physical-based and intelligent methods including decision tree, ANN (Artificial Neural Network) and SVM (Support Vector Machine) have been applied to detect dust storms. Among these approaches, the SVM method is implemented here for the first time in the domain of dust detection studies. Finally, AOD (Aerosol Optical Depth) images, which are one of the reference standard products of the OMI (Ozone Monitoring Instrument) sensor, have been used to assess the accuracy of all the implemented methods. Because the SVM method can distinguish dust storms over land and ocean simultaneously, it achieves better accuracy than the other applied approaches. In conclusion, this paper shows that SVM can be a powerful tool for producing dust images with remarkable accuracy in comparison with NASA's AOT (Aerosol Optical Thickness) product.
Direct detection of methylation in genomic DNA
Bart, A.; van Passel, M. W. J.; van Amsterdam, K.; van der Ende, A.
2005-01-01
The identification of methylated sites on bacterial genomic DNA would be a useful tool to study the major roles of DNA methylation in prokaryotes: distinction of self and nonself DNA, direction of post-replicative mismatch repair, control of DNA replication and cell cycle, and regulation of gene expression. Three types of methylated nucleobases are known: N6-methyladenine, 5-methylcytosine and N4-methylcytosine. The aim of this study was to develop a method to detect all three types of DNA methylation in complete genomic DNA. It was previously shown that N6-methyladenine and 5-methylcytosine in plasmid and viral DNA can be detected by intersequence trace comparison of methylated and unmethylated DNA. We extended this method to include N4-methylcytosine detection in both in vitro and in vivo methylated DNA. Furthermore, application of intersequence trace comparison was extended to bacterial genomic DNA. Finally, we present evidence that intrasequence comparison suffices to detect methylated sites in genomic DNA. In conclusion, we present a method to detect all three natural types of DNA methylation in bacterial genomic DNA. This provides the possibility to define the complete methylome of any prokaryote. PMID:16091626
A simple experimental method to study depigmenting agents.
Abella, M L; de Rigal, J; Neveux, S
2007-08-01
The first objective of the study was to verify that a controlled UV exposure of four areas of the forearms, together with randomized product application, made it possible to compare treatment efficacy, and then to compare the depigmenting efficacy of different products with a simple experimental method. Sixteen volunteers received 0.7 minimal erythemal dose for four consecutive days. Products tested were ellagic acid (0.5%), vitamin C (5%) and C8-LHA (2%). Product application started 72 h after the last exposure and was repeated for 42 days, the control zone being exposed but not treated. Colour measurements included Chromameter, Chromasphere, Spectro-colorimeter and visual assessment. Comparison of colour values at day 1 and at day 7 showed that all zones were comparably tanned, allowing a rigorous comparison of the treatments. We report a new simple experimental model, which enables the rapid comparison of different depigmenting products. The efficacy and good tolerance of C8-LHA make it an excellent candidate for the treatment of hyperpigmentary disorders.
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a third-degree polynomial kernel function.
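As a rough illustration of the kernel comparison described above, the sketch below trains SVM classifiers with linear, polynomial, and RBF kernels on synthetic stand-ins for multi-temporal polarimetric features and reports overall accuracy; the real UAVSAR features, class labels, and tuning procedure are not reproduced.

    # Hedged sketch of an SVM kernel comparison with scikit-learn on synthetic
    # stand-ins for multi-temporal polarimetric features.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    X = rng.standard_normal((600, 9))        # e.g. alpha features from 3 acquisition dates
    y = rng.integers(0, 4, 600)              # 4 hypothetical crop classes
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for kernel, params in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {})]:
        clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, **params))
        clf.fit(X_train, y_train)
        oa = accuracy_score(y_test, clf.predict(X_test))
        print(f"{kernel:6s} kernel: overall accuracy = {oa:.3f}")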
INTER LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON
2018-03-26
INTER-LABORATORY COMBAT HELMET BLUNT IMPACT TEST METHOD COMPARISON, by Tony J. Kayhart, Charles A. Hewitt, and Jonathan Cyganik, March 2018. Final Report, March 2016 – August 2017. Approved for public release; distribution is unlimited. U.S. Army Natick Soldier Research ...
Ball, Elaine; McLoughlin, Moira; Darvill, Angela
2011-04-01
Qualitative methodology has increased in application and acceptability in all research disciplines. In nursing, it is appropriate that a plethora of qualitative methods can be found, as nurses pose real-world questions about clinical, cultural and ethical issues of patient care (Johnson, 2007; Long and Johnson, 2007), yet the methods nurses readily use in pursuit of answers remain under intense scrutiny. One of the problems with qualitative methodology for nursing research is its place in the hierarchy of evidence (HOE); another is its comparison to the positivist constructs of what constitutes good research and the measurement of qualitative research against this. In order to position and strengthen its evidence base, nursing may well seek to distance itself from a qualitative perspective and utilise methods at the top of the HOE; yet given the relation of qualitative methods to nursing, this would constrain rather than broaden the profession in its search for answers and an evidence base. The comparison between qualitative and quantitative can be both mutually exclusive and rhetorical; by shifting the comparison, this study takes a more reflexive position and critically appraises qualitative methods against the standards set by qualitative researchers. By comparing the design and application of qualitative methods in nursing over a two-year period, the study examined how qualitative research stands up to independent rather than comparative scrutiny. For the methods: 1. a four-step mixed-methods approach newly constructed by the first author was used to define the scope of the research question and develop inclusion criteria; 2. synthesis tables were constructed to organise data; 3. bibliometrics configured the data; 4. studies selected for inclusion in the review were critically appraised using a critical interpretive synthesis (Dixon-Woods et al., 2006). The paper outlines the research process as well as the findings. Results showed that of the 240 papers analysed, 27% used ad hoc or no references to qualitative methodology; methodological terms such as thematic analysis or constant comparative methods were used inconsistently; qualitative was treated as a catch-all panacea rather than a methodology with well-argued terms or contextual definition. Copyright © 2010 Elsevier Ltd. All rights reserved.
Standard methods for open hole tension testing of textile composites
NASA Technical Reports Server (NTRS)
Portanova, M. A.; Masters, J. E.
1995-01-01
Sizing effects have been investigated by comparing the open hole failure strengths of each of the four different braided architectures as a function of specimen thickness, hole diameter, and the ratio of specimen width to hole diameter. The data used to make these comparisons were primarily generated by Boeing. Direct comparisons of Boeing's results were made with experiments conducted at West Virginia University whenever possible. Indirect comparisons were made with test results for other 2-D braids and 3-D weaves tested by Boeing and Lockheed. In general, failure strength was found to decrease with increasing plate thickness, increase with decreasing hole size, and decrease with decreasing width-to-diameter ratio. The interpretation of the sensitivity to each of these geometrical parameters was complicated by scatter in the test data. For open hole tension testing of textile composites, the use of standard testing practices employed by industry, such as ASTM D5766 (Standard Test Method for Open Hole Tensile Strength of Polymer Matrix Composite Laminates), should provide adequate results for material comparison studies.
NASA Astrophysics Data System (ADS)
Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto
2013-10-01
In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.
Prinz, I; Nubel, K; Gross, M
2002-09-01
Until now, the assumed benefits of digital hearing aids have been reflected only in subjective descriptions by patients with hearing aids, but cannot be documented adequately by routine diagnostic methods. Seventeen schoolchildren with moderately severe bilateral symmetrical sensorineural hearing loss were examined in a double-blinded crossover study. Differences in performance between a fully digital hearing aid (DigiFocus compact/Oticon) and an analogue, digitally programmable two-channel hearing aid were evaluated. Of the 17 children, 13 chose the digital and 4 the analogue hearing aid. In contrast to the clear subjective preference for the fully digital hearing aid, we could not obtain any significant results with routine diagnostic methods. Using the "virtual hearing aid," a subjective comparison and a speech recognition performance task yielded significant differences. The virtual hearing aid proved to be suitable for a direct comparison of different hearing aids and can be used for double-blind testing in a pediatric population.
ERIC Educational Resources Information Center
Lee, Yonghak
2009-01-01
The primary purpose of this study was to identify competencies needed by current human resource development (HRD) master's degree graduate students in Korea. The study used a quantitative method, the Delphi technique, in combination with a qualitative method consisting of a series of in-depth interviews. The Delphi technique was conducted using a…
Kawai, Y; Nagai, Y; Ogawa, E; Kondo, H
2017-04-01
To provide target values for the manufacturers' survey of the Japanese Society for Laboratory Hematology (JSLH), accurate standard data from healthy volunteers were needed for the five-part differential leukocyte count. To obtain such data, JSLH required an antibody panel that achieved high specificity (particularly for mononuclear cells) using simple gating procedures. We developed a flow cytometric method for determining the differential leukocyte count (JSLH-Diff) and validated it by comparison with the flow cytometric differential leukocyte count of the International Council for Standardization in Haematology (ICSH-Diff) and the manual differential count obtained by microscopy (Manual-Diff). First, the reference laboratory performed an imprecision study of JSLH-Diff and ICSH-Diff, as well as a comparison among JSLH-Diff, Manual-Diff, and ICSH-Diff. Then two reference laboratories and seven participating laboratories performed imprecision and accuracy studies of JSLH-Diff, Manual-Diff, and ICSH-Diff. Simultaneously, six manufacturers' laboratories provided their own representative values by using automated hematology analyzers. The precision of both the JSLH-Diff and ICSH-Diff methods was adequate. Comparison by the reference laboratory showed that all correlation coefficients, slopes, and intercepts obtained by the JSLH-Diff, ICSH-Diff, and Manual-Diff methods conformed to the criteria. When the imprecision and accuracy of JSLH-Diff were assessed at seven laboratories, the CV% for lymphocytes, neutrophils, monocytes, eosinophils, and basophils was 0.5~0.9%, 0.3~0.7%, 1.7~2.6%, 3.0~7.9%, and 3.8~10.4%, respectively. More than 99% of CD45-positive leukocytes were identified as normal leukocytes by JSLH-Diff. When the JSLH-Diff method was validated by comparison with Manual-Diff and ICSH-Diff, it showed good performance as a reference method. © 2016 John Wiley & Sons Ltd.
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.
NASA Technical Reports Server (NTRS)
Herb, G. T.
1973-01-01
Two areas of a laser range finder for a Mars roving vehicle are investigated: (1) laser scanning systems, and (2) range finder methods and implementation. Several ways of rapidly scanning a laser are studied. Two digital deflectors and a matrix of laser diodes are found to be acceptable. A complete range finder scanning system of high accuracy is proposed. The problem of incident laser spot distortion on the terrain is discussed. The instrumentation for a phase comparison, modulated laser range finder is developed and sections of it are tested.
Comparison of Event Detection Methods for Centralized Sensor Networks
NASA Technical Reports Server (NTRS)
Sauvageon, Julien; Agogiono, Alice M.; Farhang, Ali; Tumer, Irem Y.
2006-01-01
The development of Integrated Vehicle Health Management (IVHM) for space vehicles has become a great concern. Smart sensor networks are one of the promising technologies attracting considerable attention. In this paper, we propose a qualitative comparison of several local event (hot spot) detection algorithms in centralized redundant sensor networks. The algorithms are compared regarding their ability to locate and evaluate the event under noise and sensor failures. The purpose of this study is to check whether the ratio of performance to computational power of the Mote Fuzzy Validation and Fusion algorithm is favorable compared with simpler methods.
Vos, T.; Mathers, C. D.
2000-01-01
The national and Victorian burden of disease studies in Australia set out to examine critically the methods used in the Global Burden of Disease study to estimate the burden of mental disorders. The main differences include the use of a different set of disability weights allowing estimates in greater detail by level of severity, adjustments for comorbidity between mental disorders, a greater number of mental disorders measured, and modelling of substance use disorders, anxiety disorders and bipolar disorder as chronic conditions. Uniform age-weighting in the Australian studies produces considerably lower estimates of the burden due to mental disorders in comparison with age-weighted disability-adjusted life years. A lack of follow-up data on people with mental disorders who are identified in cross-sectional surveys poses the greatest challenge in determining the burden of mental disorders more accurately. PMID:10885161
2014-10-01
density using automated methods will be optimized during this study through the evaluation of outlier correction, comparison of several different... [Figure: VBD comparison of Volpara VBD [%] versus Cumulus VBD [%]; regression y = 1.3477x - 1.3764, R2 = 0.8213] ...Access database and chart review. 5c. Conduct chart review for selected cases (month 4-6). Comparison of information from the Breast Cancer
1988-06-01
partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN MANAGEMENT from the NAVAL POSTGRADUATE SCHOOL, June 1988. Author: Denise M... of work), management study reviews and detailed cost comparisons. A Cost Comparison Handbook (CCH), also published in 1979, provided detailed... 1, dated 12 August 1985. The cost comparison methodology was changed from the complex full cost method outlined in the CCH to a simpler incremental
2011-01-01
Background Meta-analysis is a popular methodology in several fields of medical research, including genetic association studies. However, the methods used for meta-analysis of association studies that report haplotypes have not been studied in detail. In this work, methods for performing meta-analysis of haplotype association studies are summarized, compared and presented in a unified framework along with an empirical evaluation of the literature. Results We present multivariate methods that use summary-based data as well as methods that use binary and count data in a generalized linear mixed model framework (logistic regression, multinomial regression and Poisson regression). The methods presented here avoid the inflation of the type I error rate that could result from the traditional approach of comparing a haplotype against the remaining ones, and they can be fitted using standard software. Moreover, formal global tests are presented for assessing the statistical significance of the overall association. Although the methods presented here assume that the haplotypes are directly observed, they can be easily extended to allow for such uncertainty by weighting the haplotypes by their probability. Conclusions An empirical evaluation of the published literature and a comparison against the meta-analyses that use single nucleotide polymorphisms suggest that the studies reporting meta-analysis of haplotypes contain approximately half the number of included studies and produce significant results twice as often. We show that this excess of statistically significant results stems from the sub-optimal method of analysis used and, in approximately half of the cases, the statistical significance is refuted if the data are properly re-analyzed. Illustrative examples of code are given in Stata and it is anticipated that the methods developed in this work will be widely applied in the meta-analysis of haplotype association studies. PMID:21247440
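The paper's worked examples are in Stata; as a rough Python analogue, the sketch below fits a fixed-effect version of a multinomial haplotype model via the standard Poisson trick with statsmodels, using made-up haplotype counts and hypothetical column names. The case-by-haplotype interaction terms carry the log-odds ratios of interest; random effects and the global tests described above are omitted.

    # Illustrative fixed-effect Poisson-trick model for haplotype counts per study/arm.
    # The column names and counts below are hypothetical, not taken from the paper.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    data = pd.DataFrame({
        "study":     ["s1"] * 6 + ["s2"] * 6,
        "case":      [0, 0, 0, 1, 1, 1] * 2,
        "haplotype": ["h1", "h2", "h3"] * 4,
        "count":     [40, 30, 30, 55, 25, 20, 60, 25, 15, 70, 20, 10],
    })

    # Nuisance terms fix the study/arm totals and per-study haplotype frequencies;
    # the case:haplotype interaction holds the log-odds ratios of interest.
    model = smf.glm(
        "count ~ C(study)*C(case) + C(study)*C(haplotype) + C(case):C(haplotype)",
        data=data, family=sm.families.Poisson()).fit()
    print(model.summary())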
NASA Astrophysics Data System (ADS)
Santoso, S. E.; Sulistiono, D.; Mawardi, A. F.
2017-11-01
The FAA code for airport design has been broadly used by the Indonesian Ministry of Aviation for decades. However, there has been little comprehensive study of its relevance and efficiency in the current Indonesian situation. Therefore, a further comparison study on flexible pavement design for airport runways using comparable methods has become essential. The main focus of this study is to compare which method, FAA or LCN, offers the most efficient and effective way of planning runway pavement. The comparison mainly rests on the different variables each method uses. The FAA code, for instance, is based on the aircraft's maximum take-off weight and annual departures, whilst the LCN code uses the equivalent single wheel load and tire pressure. Based on the variables mentioned above, a further classification and rating method is used to determine which code is best implemented. According to the analysis, the FAA method is the most effective way to plan runway design in Indonesia, with a total pavement thickness of 127 cm versus 70 cm for the LCN method. Although the FAA total pavement is thicker than the LCN one, its relevance to sustainable and pristine conditions in the future has become an essential aspect to consider in design and planning.
STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES
During the summer of 2001, twelve sites were sampled for macroinvertebrates, six each on the Great Miami and Kentucky Rivers. Sites were chosen in each river from those sampled in the 1999 methods comparison study to reflect a disturbance gradient. At each site, a total distanc...
Development and Testing of Novel Canine Fecal Source-Identification Assays
The extent to which dogs contribute to aquatic fecal contamination is unknown despite the potential for zoonotic transfer of harmful human pathogens. Recent method comparison studies have shown that available Bacteroidales 16S rRNA-based methods for the detection of canine fecal ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.
2014-10-01
We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The characteristics of the accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary-element-based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
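The core idea of representing the potential inside a cell by harmonic polynomials can be shown in a small 2D toy example; the actual method is 3D, uses overlapping cells, and assembles a global linear system, none of which is attempted here. The sketch fits coefficients of Re/Im((x+iy)^n) to boundary values of a known harmonic function and evaluates the fit inside the cell.

    # 2D toy illustration of the harmonic-polynomial representation behind the HPC idea.
    import numpy as np

    def harmonic_basis(x, y, order=4):
        """Columns: 1, Re(z^n), Im(z^n) for n = 1..order, with z = x + iy (all harmonic)."""
        z = x + 1j * y
        cols = [np.ones_like(x)]
        for n in range(1, order + 1):
            cols += [np.real(z**n), np.imag(z**n)]
        return np.column_stack(cols)

    # Exact harmonic test potential: phi = exp(x) * cos(y)
    phi = lambda x, y: np.exp(x) * np.cos(y)

    # "Boundary" nodes of a small square cell centred at the origin
    t = np.linspace(-0.1, 0.1, 9)
    bx = np.concatenate([t, t, np.full_like(t, -0.1), np.full_like(t, 0.1)])
    by = np.concatenate([np.full_like(t, -0.1), np.full_like(t, 0.1), t, t])

    A = harmonic_basis(bx, by)
    coeffs, *_ = np.linalg.lstsq(A, phi(bx, by), rcond=None)

    # Evaluate at an interior point and compare with the exact potential
    xi, yi = 0.03, -0.02
    approx = harmonic_basis(np.array([xi]), np.array([yi])) @ coeffs
    print(abs(approx[0] - phi(xi, yi)))   # should be very small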
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
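The evaluation protocol described above (fit on 2005-2011, forecast 2012, score with MAE, MAPE, and MSE) can be sketched as follows; the seasonal-naive and simple-exponential-smoothing forecasts on synthetic monthly counts merely stand in for the four methods compared in the paper.

    # Minimal sketch of the train/forecast/score protocol with MAE, MAPE, and MSE.
    import numpy as np
    from statsmodels.tsa.holtwinters import SimpleExpSmoothing

    def mae(y, yhat):  return np.mean(np.abs(y - yhat))
    def mape(y, yhat): return np.mean(np.abs((y - yhat) / y)) * 100
    def mse(y, yhat):  return np.mean((y - yhat) ** 2)

    rng = np.random.default_rng(7)
    # 8 years of synthetic monthly case counts with a seasonal cycle
    monthly = 100 + 20 * np.sin(np.arange(96) * 2 * np.pi / 12) + rng.normal(0, 5, 96)
    train, test = monthly[:84], monthly[84:]          # first 7 years vs final year

    naive = train[-12:]                                # seasonal-naive forecast
    ses = SimpleExpSmoothing(train).fit().forecast(12) # simple exponential smoothing

    for name, pred in [("seasonal naive", naive), ("exp. smoothing", ses)]:
        print(name, mae(test, pred), mape(test, pred), mse(test, pred))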
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Evaluation of the eigenvalue method in the solution of transient heat conduction problems
NASA Astrophysics Data System (ADS)
Landry, D. W.
1985-01-01
The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
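A minimal 1D illustration of the comparison described above: the semi-discrete heat equation dT/dt = A T is solved exactly in time through the eigendecomposition of A, and a Crank-Nicolson march on the same grid serves as the comparison scheme. Grid size, diffusivity, and time step are arbitrary choices for the sketch.

    # Eigenvalue (modal) solution of the 1D transient heat equation vs Crank-Nicolson.
    import numpy as np

    n, L, alpha = 50, 1.0, 1.0
    dx = L / (n + 1)
    x = np.linspace(dx, L - dx, n)
    T0 = np.sin(np.pi * x)                    # initial temperature, zero at both ends

    # Second-difference operator for interior nodes (Dirichlet boundaries)
    A = alpha / dx**2 * (np.diag(-2 * np.ones(n)) +
                         np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

    # Eigenvalue method: exact in time for the semi-discrete system dT/dt = A T
    lam, V = np.linalg.eigh(A)
    def T_eig(t):
        return V @ (np.exp(lam * t) * (V.T @ T0))

    # Crank-Nicolson time marching for comparison
    def T_cn(t, dt=1e-3):
        I = np.eye(n)
        M = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)
        T = T0.copy()
        for _ in range(int(round(t / dt))):
            T = M @ T
        return T

    t = 0.1
    exact = np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x)   # analytical PDE solution
    print(np.max(np.abs(T_eig(t) - exact)), np.max(np.abs(T_cn(t) - exact)))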
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
ERIC Educational Resources Information Center
Preston, Janet E.; Kunz, Margie H.
This study compared student learning in secondary consumer and homemaking foods classes using three different methods of teacher preparation. In Method 1, teachers were provided with lists of competencies and workshop training for using the competencies. In Method 2, teachers were provided with lists of competencies and no workshop training; and…
A Comparison of Two Flash-Card Methods for Improving Sight-Word Reading
ERIC Educational Resources Information Center
Kupzyk, Sara; Daly, Edward J., III; Andersen, Melissa N.
2011-01-01
Flash cards have been shown to be useful for teaching sight-word reading. To date, the most effective flash-card instruction method is incremental rehearsal (IR). This method involves the instructor interspersing unknown stimulus items into the presentation of known stimulus items. In this study, we compared IR to a modified IR…
Comparison of the Efficiency of Two Flashcard Drill Methods on Children's Reading Performance
ERIC Educational Resources Information Center
Joseph, Laurice; Eveleigh, Elisha; Konrad, Moira; Neef, Nancy; Volpe, Robert
2012-01-01
The purpose of this study was to extend prior flashcard drill and practice research by holding instructional time constant and allowing learning trials to vary. Specifically, the authors aimed to determine whether an incremental rehearsal method or a traditional drill and practice method was most efficient in helping 5 first-grade children read,…
USDA-ARS?s Scientific Manuscript database
Many different screening devices and sampling methods have been used to detect the presence of naturally occurring Salmonella on commercially processed broiler carcasses. The objective of this study was to compare two commercial screening systems (BAX® and Roka®) to a standard cultural procedure use...
ERIC Educational Resources Information Center
Putnam, Susan K.; Lopata, Christopher; Fox, Jeffery D.; Thomeer, Marcus L.; Rodgers, Jonathan D.; Volker, Martin A.; Lee, Gloria K.; Neilans, Erik G.; Werth, Jilynn
2012-01-01
This study compared cortisol concentrations yielded using three saliva collection methods (passive drool, salivette, and sorbette) in both in vitro and in vivo conditions, as well as method acceptability for a sample of children (n = 39) with High Functioning Autism Spectrum Disorders. No cortisol concentration differences were observed between…
ERIC Educational Resources Information Center
Gold, Michael S.; Bentler, Peter M.; Kim, Kevin H.
2003-01-01
This article describes a Monte Carlo study of 2 methods for treating incomplete nonnormal data. Skewed, kurtotic data sets conforming to a single structured model, but varying in sample size, percentage of data missing, and missing-data mechanism, were produced. An asymptotically distribution-free available-case (ADFAC) method and structured-model…
USDA-ARS?s Scientific Manuscript database
It is important to find an appropriate pattern-recognition method for in-field plant identification based on spectral measurement in order to classify the crop and weeds accurately. In this study, the method of Support Vector Machine (SVM) was evaluated and compared with two other methods, Decision ...
A Comparison of Students' Performances Using Audio Only and Video Media Methods
ERIC Educational Resources Information Center
Sulaiman, Norazean; Muhammad, Ahmad Mazli; Ganapathy, Nurul Nadiah Dewi Faizul; Khairuddin, Zulaikha; Othman, Salwa
2017-01-01
Listening is a very crucial skill to be learnt in second language classroom because it is essential for the development of spoken language proficiency (Hamouda, 2013). The aim of this study is to investigate the significant differences in terms of students' performance when using traditional (audio-only) method and video media method. The data of…
R.B. Ferguson; V. Clark Baldwin
1995-01-01
Estimating tree and stand volume in mature plantations is time consuming, involving much manpower and equipment; however, several sampling and volume-prediction techniques are available. This study showed that a well-constructed, volume-equation method yields estimates comparable to those of the often more time-consuming, height-accumulation method, even though the...
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
ERIC Educational Resources Information Center
Igra, Amnon
1980-01-01
Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and, a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)
Identifying Differentially Abundant Metabolic Pathways in Metagenomic Datasets
NASA Astrophysics Data System (ADS)
Liu, Bo; Pop, Mihai
Enabled by rapid advances in sequencing technology, metagenomic studies aim to characterize entire communities of microbes, bypassing the need for culturing individual bacterial members. One major goal of such studies is to identify specific functional adaptations of microbial communities to their habitats. Here we describe a powerful analytical method (MetaPath) that can identify differentially abundant pathways in metagenomic datasets, relying on a combination of metagenomic sequence data and prior metabolic pathway knowledge. We show that MetaPath outperforms other common approaches when evaluated on simulated datasets. We also demonstrate the power of our method in analyzing two publicly available metagenomic datasets: a comparison of the gut microbiome of obese and lean twins; and a comparison of the gut microbiome of infant and adult subjects. We demonstrate that the subpathways identified by our method provide valuable insights into the biological activities of the microbiome.
An investigation of a PRESAGE® in-vivo dosimeter for brachytherapy
Vidovic, A K; Juang, T; Meltsner, S; Adamovics, J; Chino, J; Steffey, B; Craciunescu, O; Oldham, M
2014-01-01
Determining accurate in-vivo dosimetry in brachytherapy treatment with high dose gradients is challenging. Here we introduce, investigate, and characterize a novel in-vivo dosimeter and readout technique with the potential to address this problem. A cylindrical (4 mm x 20 mm) tissue equivalent radiochromic dosimeter PRESAGE® In-Vivo (PRESAGE®-IV) is investigated. Two readout methods of the radiation induced change in optical density (OD) were investigated: (i) volume-averaged readout by spectrophotometer, and (ii) a line profile readout by 2D projection imaging utilizing a high-resolution (50 micron) telecentric optical system. Method (i) is considered the gold standard when applied to PRESAGE® in optical cuvettes. The feasibility of both methods was evaluated by comparison to standard measurements on PRESAGE® in optical cuvettes via spectrophotometer. An end-to-end feasibility study was performed by a side-by-side comparison with TLDs in an 192Ir HDR delivery. 7 and 8 Gy were delivered to PRESAGE®-IV and TLDs attached to the surface of a vaginal cylinder. The known geometry enabled direct comparison of the measured dose with the commissioned treatment planning system. A high-resolution readout study under a steep dose gradient region showed 98.9% (5%/1 mm) agreement between PRESAGE®-IV and Gafchromic® EBT2 Film. Spectrometer measurements exhibited a linear dose response between 0 and 15 Gy with sensitivity of 0.0133 ± 0.0007 ΔOD/(Gy·cm) at the 95% confidence interval. Method (ii) yielded a linear response with sensitivity of 0.0132 ± 0.0006 (ΔOD/Gy), within 2% of method (i). Method (i) has poor spatial resolution due to volume averaging. Method (ii) has higher resolution (~1 mm) without loss of sensitivity or increased noise. Both readout methods are shown to be feasible. The end-to-end comparison revealed a 2.5% agreement between PRESAGE®-IV and the treatment plan in regions of uniform high dose. PRESAGE®-IV shows promise for in-vivo dose verification, although improved sensitivity would be desirable. Advantages include high resolution, convenience and fast, low-cost readout. PMID:24957850
An investigation of a PRESAGE® in vivo dosimeter for brachytherapy
NASA Astrophysics Data System (ADS)
Vidovic, A. K.; Juang, T.; Meltsner, S.; Adamovics, J.; Chino, J.; Steffey, B.; Craciunescu, O.; Oldham, M.
2014-07-01
Determining accurate in vivo dosimetry in brachytherapy treatment with high dose gradients is challenging. Here we introduce, investigate, and characterize a novel in vivo dosimeter and readout technique with the potential to address this problem. A cylindrical (4 mm × 20 mm) tissue equivalent radiochromic dosimeter PRESAGE® in vivo (PRESAGE®-IV) is investigated. Two readout methods of the radiation induced change in optical density (OD) were investigated: (i) volume-averaged readout by spectrophotometer, and (ii) a line profile readout by 2D projection imaging utilizing a high-resolution (50 micron) telecentric optical system. Method (i) is considered the gold standard when applied to PRESAGE® in optical cuvettes. The feasibility of both methods was evaluated by comparison to standard measurements on PRESAGE® in optical cuvettes via spectrophotometer. An end-to-end feasibility study was performed by a side-by-side comparison with TLDs in an 192Ir HDR delivery. 7 and 8 Gy were delivered to PRESAGE®-IV and TLDs attached to the surface of a vaginal cylinder. The known geometry enabled direct comparison of the measured dose with a commissioned treatment planning system. A high-resolution readout study under a steep dose gradient region showed 98.9% (5%/1 mm) agreement between PRESAGE®-IV and Gafchromic® EBT2 Film. Spectrometer measurements exhibited a linear dose response between 0 and 15 Gy with sensitivity of 0.0133 ± 0.0007 ΔOD/(Gy·cm) at the 95% confidence interval. Method (ii) yielded a linear response with sensitivity of 0.0132 ± 0.0006 (ΔOD/Gy), within 2% of method (i). Method (i) has poor spatial resolution due to volume averaging. Method (ii) has higher resolution (~1 mm) without loss of sensitivity or increased noise. Both readout methods are shown to be feasible. The end-to-end comparison revealed a 2.5% agreement between PRESAGE®-IV and the treatment plan in regions of uniform high dose. PRESAGE®-IV shows promise for in vivo dose verification, although improved sensitivity would be desirable. Advantages include high-resolution, convenience and fast, low-cost readout.
Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid
2015-04-01
To further characterise the performance of the diet history method and the 24-h recall method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, the statistical agreement measures encompass Spearman and intraclass correlation coefficients, ranking of participants in quartiles, and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and revealed the least correlation for vegetables (<0.20). Quartile classification showed that more than two-thirds of participants were ranked in the same or an adjacent quartile by both methods. For every food group, Bland-Altman plots showed that the agreement between the methods weakened with increasing consumption. The cognitive effort essential for the diet history method, which requires remembering consumption over the past 4 weeks, may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recall method applied in the presented study offers advantages in approximating food consumption compared with the diet history method.
Gerber, S; Rodolphe, F
1994-06-01
The first step in the construction of a linkage map involves the estimation of, and testing for, linkage between all possible pairs of markers. The lod score method is used in many linkage studies for the latter purpose. In contrast with classical statistical tests, this method does not rely on the choice of a type I error level. We thus provide a comparison between the lod score and a χ² test on linkage data from a gymnosperm, the maritime pine. The lod score appears to be a very conservative test with the usual thresholds. Its severity depends on the type of data used.
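A minimal two-point illustration of the two tests being compared, assuming a backcross design in which recombinant and non-recombinant offspring counts are observed directly (the maritime pine data are not reproduced here):

    # Two-point linkage: LOD score vs chi-square test on hypothetical backcross counts.
    import numpy as np
    from scipy.stats import chi2

    recomb, non_recomb = 18, 82
    n = recomb + non_recomb
    theta_hat = recomb / n                     # estimated recombination fraction

    # LOD: log10 likelihood ratio of theta_hat against free recombination (theta = 0.5)
    lod = (recomb * np.log10(theta_hat) + non_recomb * np.log10(1 - theta_hat)
           - n * np.log10(0.5))

    # Chi-square: observed counts against the 1:1 ratio expected under no linkage
    observed = np.array([recomb, non_recomb])
    expected = np.array([n / 2, n / 2])
    chi2_stat = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(chi2_stat, df=1)

    print(f"LOD = {lod:.2f} (>= 3 is the usual declaration threshold)")
    print(f"chi-square = {chi2_stat:.2f}, p = {p_value:.4g}")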
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
NASA Astrophysics Data System (ADS)
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time -domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re -synthesis of the sounds of acoustic musical instruments.
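As a small taste of the nonlinear family mentioned above, the sketch below implements simple two-operator FM synthesis (a Chowning-style decaying modulation index giving a bell-like, evolving timbre); all parameter values are illustrative.

    # Two-operator FM synthesis: carrier phase-modulated by a sine with decaying index.
    import numpy as np

    fs = 44100                                  # sample rate (Hz)
    dur = 2.0
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)

    fc, fm = 440.0, 660.0                       # carrier and modulator frequencies (2:3 ratio)
    index = 6.0 * np.exp(-3.0 * t)              # decaying modulation index -> evolving timbre
    env = np.exp(-2.5 * t)                      # amplitude envelope

    signal = env * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
    # `signal` can be written to a WAV file, e.g. with scipy.io.wavfile.write(path, fs, signal)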
Comparisons of Methods for Predicting Community Annoyance Due to Sonic Booms
NASA Technical Reports Server (NTRS)
Hubbard, Harvey H.; Shepherd, Kevin P.
1996-01-01
Two approaches to the prediction of community response to sonic boom exposure are examined and compared. The first approach is based on the wealth of data concerning community response to common transportation noises coupled with results of a sonic boom/aircraft noise comparison study. The second approach is based on limited field studies of community response to sonic booms. Substantial differences between indoor and outdoor listening conditions are observed. Reasonable agreement is observed between predicted community responses and available measured responses.
Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina
2012-01-01
Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.
Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina
2012-01-01
Background Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Methodology/Findings Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. Conclusions This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future. PMID:22662248
Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi
2015-05-01
Accuracy is the most important factor supporting the reliability of a forensic facial reconstruction (FFR) when compared with the corresponding actual face. A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by geometric surface comparison. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology as this study except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important to increase the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.
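The geometric surface comparison step can be sketched as a nearest-neighbour deviation computation between point sets, as below; the points are synthetic stand-ins, and the mesh registration that a real pipeline would perform first (e.g. ICP alignment) is omitted.

    # Per-vertex surface deviation between a reconstruction and a reference surface.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    reference_pts = rng.uniform(-1, 1, (5000, 3))            # stand-in for CT surface vertices
    reconstruction_pts = reference_pts[:2000] + rng.normal(0, 0.01, (2000, 3))

    tree = cKDTree(reference_pts)
    deviation, _ = tree.query(reconstruction_pts)             # nearest-neighbour distances
    print(f"mean deviation = {deviation.mean():.4f}, max = {deviation.max():.4f}")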
ERIC Educational Resources Information Center
Overlock, Terrence H., Sr.
To determine the effect of collaborative learning methods on the success rate of physics students at Northern Maine Technical College (NMTC), a study was undertaken to compare the mean final exam scores of students in a physics course taught by traditional lecture/lab methods to those in a group taught by collaborative techniques. The…
Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; de Brevern, Alexandre G
2012-09-01
Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used Structural Alphabet named Protein Blocks (PBs) was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs underline that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Abdel-Aal, El-Sayed M; Akhtar, Humayoun; Rabalski, Iwona; Bryan, Michael
2014-02-01
Anthocyanins are important dietary components with diverse positive functions in human health. This study investigates the effects of accelerated solvent extraction (ASE) and microwave-assisted extraction (MAE) on anthocyanin composition and extraction efficiency from blue wheat, purple corn, and black rice in comparison with the commonly used solvent extraction (CSE). A factorial experimental design was employed to study the effects of ASE and MAE variables, and anthocyanin extracts were analyzed by spectrophotometry, high-performance liquid chromatography with diode array detection (DAD), and liquid chromatography-mass spectrometry. The extraction efficiency of ASE and MAE was comparable with CSE at the optimal conditions. The greatest extraction by ASE was achieved at 50 °C, 2500 psi, 10 min using 5 cycles, and 100% flush. For MAE, a combination of 70 °C, 300 W, and 10 min was the most effective in extracting anthocyanins from blue wheat and purple corn, compared with 50 °C, 1200 W, and 20 min for black rice. The anthocyanin composition of grain extracts was influenced by the extraction method. The ASE method seems to be more appropriate for extracting anthocyanins from the colored grains, being comparable with the CSE method in terms of changes in anthocyanin composition. The method caused fewer structural changes in anthocyanins than the MAE method. Changes in blue wheat anthocyanins were lower in comparison with purple corn or black rice, perhaps due to the absence of acylated anthocyanin compounds in blue wheat. The results show significant differences in anthocyanins among the 3 extraction methods, which indicate a need to standardize a method for valid comparisons among studies and for quality assurance purposes. © 2014 Her Majesty the Queen in Right of Canada Journal of Food Science © 2014 Institute of Food Technologists® Reproduced with the permission of the Minister of Agriculture and Agri-Food Canada.
Water-waves on linear shear currents. A comparison of experimental and numerical results.
NASA Astrophysics Data System (ADS)
Simon, Bruno; Seez, William; Touboul, Julien; Rey, Vincent; Abid, Malek; Kharif, Christian
2016-04-01
Propagation of water waves can be described for uniformly sheared current conditions. Indeed, some mathematical simplifications remain applicable in the study of waves whether there is no current or a linearly sheared current. However, the widespread use of mathematical wave theories including shear has rarely been backed by experimental studies of such flows. New experimental and numerical methods were both recently developed to study wave-current interactions for constant vorticity. On one hand, the numerical code can simulate, in two dimensions, arbitrary non-linear waves. On the other hand, the experimental methods can be used to generate waves with various shear conditions. Taking advantage of the simplicity of the experimental protocol and the versatility of the numerical code, comparisons between experimental and numerical data are discussed and checked against linear theory for validation of the methods. ACKNOWLEDGEMENTS The DGA (Direction Générale de l'Armement, France) is acknowledged for its financial support through the ANR grant N° ANR-13-ASTR-0007.
NASA Astrophysics Data System (ADS)
Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad
2017-12-01
In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for the bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the ratio of plate thickness to length, and the ratio of core thickness to face-sheet thickness on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed by the stiffness of the core in comparison with the sheets. The resulting graphs provide useful guidance for the optimum design of sandwich plates. In comparison with existing solutions, fast convergence rates and highly accurate results are achieved by the GDQ method.
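As background on the numerical technique (not taken from the paper), GDQ approximates derivatives at grid points as weighted sums of function values, with weighting coefficients obtained from Lagrange interpolation on the chosen grid. The sketch below builds the first-derivative coefficient matrix from Shu's explicit formula and obtains second-derivative coefficients by matrix multiplication; the Chebyshev-Gauss-Lobatto grid and the polynomial test function are illustrative choices, not the plate problem solved in the study.

```python
import numpy as np

def gdq_first_derivative_matrix(x):
    """First-order GDQ weighting coefficients on a 1-D grid x
    (Shu's explicit Lagrange-polynomial formulation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = diff.prod(axis=1)               # M(x_i) = prod_{k != i} (x_i - x_k)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        D[i, i] = -D[i].sum()           # diagonal from the row-sum condition
    return D

n = 9
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))   # Chebyshev-Gauss-Lobatto points on [0, 1]
D1 = gdq_first_derivative_matrix(x)
D2 = D1 @ D1                            # second-derivative coefficients by matrix product
print(D2 @ x**2)                        # second derivative of x^2: should be ~2 at every point
```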
Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks
2014-01-01
Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226
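To make the module-detection theme concrete, here is a hedged, minimal sketch of a seed-and-extend strategy on a toy interaction network: starting from a seed protein, neighbours are added greedily while the module's edge density stays above a threshold. The graph, threshold, and scoring rule are illustrative assumptions, not the frequent subgraph method described in the review.

```python
import networkx as nx

def seed_and_extend(G, seed, min_density=0.7):
    """Grow a module from a seed node, greedily adding the neighbour that keeps
    the module's edge density highest, while density stays above a threshold."""
    module = {seed}
    while True:
        frontier = set().union(*(G.neighbors(v) for v in module)) - module
        if not frontier:
            break
        best = max(frontier, key=lambda c: nx.density(G.subgraph(module | {c})))
        if nx.density(G.subgraph(module | {best})) < min_density:
            break
        module.add(best)
    return module

# Toy "PPI" network: a dense triangle attached to a sparse tail
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])
print(sorted(seed_and_extend(G, "A")))   # expected: the dense core A, B, C
```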
A comparison between GO/aperture-field and physical-optics methods for offset reflectors
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1984-01-01
Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.
KEY COMPARISON: Final report on CCQM-K69 key comparison: Testosterone glucuronide in human urine
NASA Astrophysics Data System (ADS)
Liu, Fong-Ha; Mackay, Lindsey; Murby, John
2010-01-01
The CCQM-K69 key comparison of testosterone glucuronide in human urine was organized under the auspices of the CCQM Organic Analysis Working Group (OAWG). The National Measurement Institute Australia (NMIA) acted as the coordinating laboratory for the comparison. The samples distributed for the key comparison were prepared at NMIA with funding from the World Anti-Doping Agency (WADA). WADA granted the approval for this material to be used for the intercomparison provided the distribution and handling of the material were strictly controlled. Three national metrology institutes (NMIs)/designated institutes (DIs) developed reference methods and submitted data for the key comparison along with two other laboratories who participated in the parallel pilot study. A good selection of analytical methods and sample workup procedures was displayed in the results submitted considering the complexities of the matrix involved. The comparability of measurement results was successfully demonstrated by the participating NMIs. Only the key comparison data were used to estimate the key comparison reference value (KCRV), using the arithmetic mean approach. The reported expanded uncertainties for results ranged from 3.7% to 6.7% at the 95% level of confidence and all results agreed within the expanded uncertainty of the KCRV. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Teramura, H; Sota, K; Iwasaki, M; Ogihara, H
2017-07-01
Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients for E. coli detection of Sanita-kun™ EC were more than 0·95 for all comparisons. There were no significant differences in any of the comparisons when conducting a one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliforms. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms can show enzymatic coloration due to the breakdown of chromogenic substrates by food lactase. The novel sheet culture media, which have a film layer to avoid coloration by food lactase, have been developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had performance comparable with the reference methods and less interference by food lactase. These media may serve not only as useful alternatives but may also contribute to the accurate enumeration of these bacteria in a variety of foods, and specifically in fermented foods. © 2017 The Society for Applied Microbiology.
2013-01-01
Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892
Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B
2013-04-23
Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
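For illustration only, the general compression-based idea can be sketched in a few lines: concatenating two datasets that share repetitive content compresses better than unrelated ones. The snippet below computes a normalized compression distance with gzip on invented tag-like strings; the exact CBD formula and compressor used in the paper may differ.

```python
import gzip

def compressed_size(data: bytes) -> int:
    return len(gzip.compress(data, compresslevel=9))

def compression_distance(x: str, y: str) -> float:
    """Normalized compression distance: shared repetitive content compresses
    away when the two datasets are concatenated."""
    cx, cy = compressed_size(x.encode()), compressed_size(y.encode())
    cxy = compressed_size((x + y).encode())
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented tag-like datasets; similar "communities" give smaller distances
sample_a = "ACGTACGTTAGC" * 50
sample_b = "ACGTACGTTAGC" * 45 + "GGGTTTCCAA" * 5
sample_c = "TTTTGGGGCCCCAAAA" * 50
print(compression_distance(sample_a, sample_b))   # relatively small
print(compression_distance(sample_a, sample_c))   # larger
```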
Sweetlove, Cyril; Chenèble, Jean-Charles; Barthel, Yves; Boualam, Marc; L'Haridon, Jacques; Thouand, Gérald
2016-09-01
Difficulties encountered in estimating the biodegradation of poorly water-soluble substances are often linked to their limited bioavailability to microorganisms. Many original bioavailability improvement methods (BIMs) have been described, but no global approach has been proposed for a standardized comparison of these. The latter would be a valuable tool as part of a wider strategy for evaluating poorly water-soluble substances. The purpose of this study was to define an evaluation strategy following the assessment of different BIMs adapted to poorly water-soluble substances with ready biodegradability tests. The study was performed with two poorly water-soluble chemicals - a solid, anthraquinone, and a liquid, isodecyl neopentanoate - and five BIMs were compared to the direct addition method (reference method), i.e., (i) ultrasonic dispersion, (ii) adsorption onto silica gel, (iii) dispersion using an emulsifier, (iv) dispersion with silicone oil, and (v) dispersion with emulsifier and silicone oil. A two-phase evaluation strategy for solid and liquid chemicals was developed involving the selection of the most relevant BIMs for enhancing the biodegradability of the tested substances. A BIM classification ratio (R_BIM) is described, which enables a comparison to be made between the different test chemical sample preparation methods used in the various tests. Using this comparison, the BIMs giving rise to the greatest biodegradability were ultrasonic dispersion and dispersion with silicone oil (or with silicone oil and emulsifier) for the tested solid chemical, and adsorption onto silica gel and ultrasonic dispersion for the liquid one.
Results of interlaboratory comparison of fission track ages for 1992 fission track workshop
Miller, D.S.; Crowley, K.D.; Dokka, R.K.; Galbraith, R.F.; Kowallis, B.J.; Naeser, C.W.
1993-01-01
Two apatites and one sphene were made available to the fission track research community for analysis prior to the 1992 Fission Track Workshop held in Philadelphia, U.S.A., 13-17 July. Eighteen laboratories throughout the world received aliquots of apatite and sphene. To date, analyses have been received from 33 different scientists representing 15 different laboratories. With respect to the previous two interlaboratory comparisons, there is a noticeable improvement in the accuracy of the age results (Naeser and Cebula, 1978; Naeser et al., 1981; Miller et al., 1985; Miller et al., 1990). Ninety-four percent of the analyses used the external detector method (EDM) combined with the zeta technique, while the remaining individuals used the population method (POP). Track length measurements (requested for the first time in the interlaboratory comparison studies) were in relatively good agreement. © 1993.
Liao, Qing; Deng, Yaping; Shi, Xiaoqing; Sun, Yuanyuan; Duan, Weidong; Wu, Jichun
2018-03-03
Precise delineation of contaminant plume distribution is essential for effective remediation of contaminated sites. Traditional in situ investigation methods like direct-push (DP) sampling are accurate, but are usually intrusive and costly. The electrical resistivity tomography (ERT) method, a non-invasive geophysical technique for mapping spatiotemporal changes in subsurface resistivity, is becoming increasingly popular in environmental science. However, the resolution of ERT for the delineation of contaminant plumes still remains controversial. In this study, ERT and the DP technique were both conducted at a real inorganic contaminated site. The reliability of the ERT method was validated by direct comparison of the investigation results: the resistivity acquired by the ERT method is in accordance with the total dissolved solids concentration in groundwater and with the overall variation of the total iron content in soil obtained by the DP technique. After verifying the applicability of the ERT method for contaminant identification, the extent of the contaminant plume at the study site was revealed by supplementary ERT surveys conducted subsequently in the area surrounding the contaminant source zone.
Comparison analysis for classification algorithm in data mining and the study of model use
NASA Astrophysics Data System (ADS)
Chen, Junde; Zhang, Defu
2018-04-01
As a key technique in data mining, classification algorithms have received extensive attention. Through experiments with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, using statistical tests. Beyond that, an adaptive diagnosis model for the prevention of electricity stealing and leakage is given as a specific case in the paper.
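A hedged sketch of the general workflow, comparing classification algorithms on a UCI dataset with a common cross-validation split and a statistical test on the per-fold accuracies, is shown below. The dataset, classifiers, and paired t-test are illustrative choices; the paper's specific algorithms and test are not stated in the abstract.

```python
from scipy import stats
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Same CV folds for both classifiers, then a paired t-test on per-fold accuracies
X, y = load_iris(return_X_y=True)                 # iris is one of the UCI datasets
cv = KFold(n_splits=10, shuffle=True, random_state=0)
acc_tree = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
acc_nb = cross_val_score(GaussianNB(), X, y, cv=cv)

t, p = stats.ttest_rel(acc_tree, acc_nb)
print(f"tree: {acc_tree.mean():.3f}  naive Bayes: {acc_nb.mean():.3f}  paired t-test p = {p:.3f}")
```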
A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data
1999-12-17
University of Oklahoma Health Sciences Center, Graduate College: A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data (Costantino, Oklahoma City, Oklahoma, 1999).
NASA Technical Reports Server (NTRS)
1976-01-01
The application of NASTRAN to a wide variety of static and dynamic structural problems is discussed. The following topics are focused upon: (1) methods of analysis; (2) hydroelastic methods; (3) complete analysis of structures; (4) elements and material studies; (5) critical comparisons with other programs; and (6) pre- and post-processor operations.
A comparison of cover pole with standard vegetation monitoring methods
USDA-ARS?s Scientific Manuscript database
The ability of resource managers to make informed decisions regarding wildlife habitat could be improved with the use of existing datasets and the use of cost effective, standardized methods to simultaneously quantify vertical and horizontal cover. The objectives of this study were to (1) characteri...
Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods
ERIC Educational Resources Information Center
Zhang, Ying
2011-01-01
Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
Bacterial agents and cell components can be spread as bioaerosols, producing infections and asthmatic problems. This study compares four methods for the detection and enumeration of aerosolized bacteria collected in an AGI-30 impinger. Changes in the total and viable concentratio...
Calibration of the DRASTIC ground water vulnerability mapping method
Rupert, M.G.
2001-01-01
Ground water vulnerability maps developed using the DRASTIC method have been produced in many parts of the world. Comparisons of those maps with actual ground water quality data have shown that the DRASTIC method is typically a poor predictor of ground water contamination. This study significantly improved the effectiveness of a modified DRASTIC ground water vulnerability map by calibrating the point rating schemes to actual ground water quality data by using nonparametric statistical techniques and a geographic information system. Calibration was performed by comparing data on nitrite plus nitrate as nitrogen (NO2 + NO3-N) concentrations in ground water to land-use, soils, and depth to first-encountered ground water data. These comparisons showed clear statistical differences between NO2 + NO3-N concentrations and the various categories. Ground water probability point ratings for NO2 + NO3-N contamination were developed from the results of these comparisons, and a probability map was produced. This ground water probability map was then correlated with an independent set of NO2 + NO3-N data to demonstrate its effectiveness in predicting elevated NO2 + NO3-N concentrations in ground water. This correlation demonstrated that the probability map was effective, but a vulnerability map produced with the uncalibrated DRASTIC method in the same area and using the same data layers was not effective. Considerable time and expense have been outlaid to develop ground water vulnerability maps with the DRASTIC method. This study demonstrates a cost-effective method to improve and verify the effectiveness of ground water vulnerability maps.
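As a simplified illustration of the calibration step (not the study's exact procedure), nonparametric tests can be used to check whether NO2 + NO3-N concentrations differ across land-use categories before assigning probability point ratings. The concentrations below are invented.

```python
from scipy import stats

# Hypothetical NO2 + NO3-N concentrations (mg/L) grouped by land-use category
urban       = [1.2, 0.8, 2.5, 1.9, 3.1]
agriculture = [4.5, 6.2, 5.1, 7.8, 3.9]
rangeland   = [0.3, 0.6, 0.2, 0.9, 0.4]

# Overall nonparametric test for differences among categories ...
h, p = stats.kruskal(urban, agriculture, rangeland)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# ... followed by pairwise comparisons that could inform point ratings
u, p_ua = stats.mannwhitneyu(urban, agriculture, alternative="two-sided")
print(f"urban vs agriculture: p = {p_ua:.4f}")
```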
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Passive sampling of gas-phase air toxics and criteria pollutants has become an attractive monitoring method in human exposure studies due to the relatively low sampling cost and ease of use. This study evaluates the performance of Model 3300 Ogawa(TM) Passive NO2 Samplers and 3...
ERIC Educational Resources Information Center
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine
2012-01-01
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
ERIC Educational Resources Information Center
Diemer, Richard M.; Mazzocco, Daniel M.
Hypothesizing that experimentation with various teaching methodologies and individual student differences may show certain teaching methods to be more effective than others for a certain type of student, the authors studied the application of such experimentation to a portion of the radiology sequence in the dental curriculum. A review of the…
Comparison of rotation algorithms for digital images
NASA Astrophysics Data System (ADS)
Starovoitov, Valery V.; Samal, Dmitry
1999-09-01
The paper presents a comparative study of several algorithms developed for digital image rotation. Without loss of generality, we studied grayscale images. We have tested methods preserving the gray values of the original images, methods performing some interpolation, and two procedures implemented in the Corel Photo-paint and Adobe Photoshop software packages. Methods for the rotation of color images may be evaluated in a similar way.
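As a hedged illustration of how such a comparison can be set up, the snippet below rotates a synthetic grayscale image with different interpolation orders and measures how much gray-value content survives a rotate/rotate-back cycle; the evaluation criterion and test image are assumptions, not the ones used in the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128)).astype(float)   # synthetic grayscale image

def round_trip_error(img, angle, order):
    """Mean absolute gray-value error after rotating by angle and back."""
    rotated = ndimage.rotate(img, angle, reshape=False, order=order, mode="nearest")
    restored = ndimage.rotate(rotated, -angle, reshape=False, order=order, mode="nearest")
    return np.abs(img - restored).mean()

for order, name in [(0, "nearest-neighbour"), (1, "bilinear"), (3, "cubic spline")]:
    print(f"{name:18s} mean abs error: {round_trip_error(image, 17.0, order):.2f}")
```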
ERIC Educational Resources Information Center
Green-Gibson, Andrea
2011-01-01
This mixed, causal-comparative study was an investigation of culture infusion methods and AYP of two different public schools in Chicago, a school that infuses African culture and a school that does not. The purpose of the study was to identify if there was a significant causative relationship between culture infusion methods and Adequate Yearly…
ERIC Educational Resources Information Center
Pike, Gary R.
Because change is fundamental to education and the measurement of change assesses the quality and effectiveness of postsecondary education, this study examined three methods of measuring change: (1) gain scores; (2) residual scores; and (3) repeated measures. Data for the study was obtained from transcripts of 722 graduating seniors at the…
ERIC Educational Resources Information Center
Berger, Roland; Hanze, Martin
2009-01-01
Twelfth-grade physics classes with 344 students participated in a quasi-experimental study comparing two small-group learning settings. In the jigsaw classroom, in contrast to the cyclical rotation method, teaching expectancy as well as resource interdependence is established. The study is based on the self-determination theory of motivation,…
ERIC Educational Resources Information Center
Huang, Liuli
2018-01-01
Research frequently uses the quantitative approach to explore undergraduate students' anxiety regarding statistics. However, few studies of adults' statistics anxiety use the qualitative method, or they focus solely on graduate students. Moreover, even fewer studies focus on a comparison of adults' anxiety levels before and after an introductory…
ERIC Educational Resources Information Center
Novosel, Leslie C.
2012-01-01
Employing multiple methods, including a comparison group pre/posttest design and student interviews and self-reflections, this study represents an initial attempt to investigate the efficacy of a social and emotional learning self-regulation strategy relative to the general reading ability, reading self-concept, and social and emotional well-being…
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
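To sketch the idea behind the NN-QPPQ approach (a simplified illustration, not the report's implementation), each daily flow at the nearest-neighbour donor gage is converted to a nonexceedance probability and then mapped through the ungaged site's estimated flow duration curve. The donor flows and target quantiles below are invented.

```python
import numpy as np
from scipy import stats

def qppq_transfer(donor_flows, target_probs, target_flows):
    """Map donor-gage daily flows to the ungaged site through flow duration curves."""
    donor = np.asarray(donor_flows, dtype=float)
    p = stats.rankdata(donor) / (len(donor) + 1)      # nonexceedance probability of each day
    return np.interp(p, target_probs, target_flows)   # read off the target site's FDC

# Donor-gage daily flows (m3/s) and an estimated FDC at the ungaged site (all invented)
donor_daily = [2.0, 5.5, 3.1, 40.0, 12.0, 8.0, 1.5]
target_probs = [0.05, 0.25, 0.50, 0.75, 0.95]         # nonexceedance probabilities
target_flows = [0.8, 2.5, 6.0, 15.0, 60.0]            # regression-estimated quantiles
print(qppq_transfer(donor_daily, target_probs, target_flows))
```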
Comparison of Artificial Compressibility Methods
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan
2004-01-01
Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.
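For context (not from the abstract), artificial compressibility methods in the Chorin tradition add a pseudo-time pressure derivative to the continuity equation so the incompressible equations can be marched to a divergence-free steady state; a common form of the modified system is

```latex
\frac{\partial p}{\partial \tau} + \beta \,\frac{\partial u_j}{\partial x_j} = 0,
\qquad
\frac{\partial u_i}{\partial \tau} + \frac{\partial (u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i ,
```

where β is the artificial compressibility parameter, τ the pseudo-time and ν the kinematic viscosity; the methods compared in the paper differ mainly in how this pseudo-time system is discretized and iterated.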
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.
2001-01-01
This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with the standard anechoic test results, are presented. The comparison experimentally shows that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the comparisons with the anechoic results are excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.
NASA Astrophysics Data System (ADS)
Jacobson, Abram R.; Shao, Xuan-Min; Holzworth, Robert
2010-05-01
We are developing and testing a steep-incidence D region sounding method for inferring profile information, principally regarding electron density. The method uses lightning emissions (in the band 5-500 kHz) as the probe signal. The data are interpreted by comparison against a newly developed single-reflection model of the radio wave's encounter with the lower ionosphere. The ultimate application of the method will be to study transient, localized disturbances of the nocturnal D region, including those instigated by lightning itself. Prior to applying the method to study lightning-induced perturbations of the nighttime D region, we have performed a validation test against more stable and predictable daytime observations, where the profile of electron density is largely determined by direct solar X-ray illumination. This article reports on the validation test. Predictions from our recently developed full-wave ionospheric-reflection model are compared to statistical summaries of daytime lightning radiated waveforms, recorded by the Los Alamos Sferic Array. The comparison is used to retrieve best fit parameters for an exponential profile of electron density in the ionospheric D region. The optimum parameter values are compared to those found elsewhere using a narrowband beacon technique, which used totally different measurements, ranges, and modeling approaches from those of the work reported here.
Comparative Study of Impedance Eduction Methods. Part 1; DLR Tests and Methodology
NASA Technical Reports Server (NTRS)
Busse-Gerstengarbe, Stefan; Bake, Friedrich; Enghardt, Lars; Jones, Michael G.
2013-01-01
The absorption efficiency of acoustic liners used in aircraft engines is characterized by the acoustic impedance. Worldwide, many grazing flow test rigs and eduction methods are available that provide values for that impedance. However, a direct comparison and assessment of the data from the different rigs and methods is often not possible because test objects and test conditions are quite different. Only a few papers provide a direct comparison. Therefore, this paper, together with a companion paper, presents data measured with a reference test object under similar conditions in the DLR and NASA grazing flow test rigs. Additionally, by applying the in-house methods Liner Impedance Non-Uniform flow Solving algorithm (LINUS, DLR) and Convected Helmholtz Equation approach (CHE, NASA) to the data sets, similarities and differences due to the underlying theory are identified and discussed.
Adibrad, Nastaran; Sedgh poor, Bahram Saleh
2010-01-01
Objective: The aim of this study was to compare relationship beliefs and couple burnout in women applying for divorce and women who wanted to continue their marital life. Method: For this study, 50 women who had been referred to judicial centers and 50 women who stated that they wanted to continue their marital life were randomly selected. Participants were asked to complete the relationship beliefs inventory and marital burnout questionnaires. Descriptive statistical methods such as standard deviation and mean, Student's t-test for independent groups, correlation, multi-variable regression and the independent groups' correlation difference test were used. Results: The relationship beliefs of the 2 groups (those wanting to divorce and women wanting to continue their marital life) were significantly different (p < 0.1). In addition, the comparison of marital burnout showed a significant difference between the 2 groups (p < 0.1). Discussion: Women who were about to divorce differed significantly from those who wanted to continue their marital relationship in the general measure of relationship beliefs and in the factors of believing that disagreement is destructive and that partners cannot change their undesirable behaviors. In other words, women who were applying for divorce had more unreasonable thoughts and more burnout compared with those who wanted to continue their marital life. PMID:22952488
Andrasiak, Iga; Rybka, Justyna; Knopinska-Posluszny, Wanda; Wrobel, Tomasz
2017-05-01
Bendamustine and ibrutinib are commonly used in the treatment of patients suffering from chronic lymphocytic leukemia (CLL). In this study we compare the efficacy and safety of bendamustine versus ibrutinib therapy in previously untreated patients with CLL. Because there are no head-to-head comparisons between bendamustine and ibrutinib, we performed an indirect comparison using the Bucher method. A systematic literature review was performed and 2 studies published before June 2016 were taken into the analysis. Treatment with ibrutinib significantly improves investigator-assessed PFS (HR of 0.3; P = .01) and OS (HR of 0.21; P < .001). Our study indicates that ibrutinib therapy improves PFS and OS and is superior in terms of safety compared with bendamustine therapy in CLL patients. Copyright © 2017 Elsevier Inc. All rights reserved.
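The Bucher adjusted indirect comparison underlying such analyses is straightforward when both trials share a common comparator: the log hazard ratios are differenced and their standard errors combined. The sketch below uses purely illustrative hazard ratios and confidence intervals, not the values from the trials analysed in the study.

```python
import math

def bucher_indirect(hr_ac, ci_ac, hr_bc, ci_bc, z=1.96):
    """Adjusted indirect comparison of A vs B through a common comparator C,
    from hazard ratios and 95% CIs of the A-vs-C and B-vs-C trials."""
    log_hr = math.log(hr_ac) - math.log(hr_bc)
    se_ac = (math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2 * z)
    se_bc = (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2 * z)
    se = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return math.exp(log_hr), (math.exp(log_hr - z * se), math.exp(log_hr + z * se))

# Purely illustrative numbers, not the values from the cited trials
print(bucher_indirect(hr_ac=0.25, ci_ac=(0.15, 0.42), hr_bc=0.70, ci_bc=(0.50, 0.98)))
```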
Computational Chemistry Comparison and Benchmark Database
National Institute of Standards and Technology Data Gateway
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Mendoza, G A; Prabhu, R
2000-12-01
This paper describes an application of multiple criteria analysis (MCA) in assessing criteria and indicators adapted for a particular forest management unit. The methods include: ranking, rating, and pairwise comparisons. These methods were used in a participatory decision-making environment where a team representing various stakeholders and professionals used their expert opinions and judgements in assessing different criteria and indicators (C&I) on the one hand, and how suitable and applicable they are to a forest management unit on the other. A forest concession located in Kalimantan, Indonesia, was used as the site for the case study. Results from the study show that the multicriteria methods are effective tools that can be used as structured decision aids to evaluate, prioritize, and select sets of C&I for a particular forest management unit. Ranking and rating approaches can be used as a screening tool to develop an initial list of C&I. Pairwise comparison, on the other hand, can be used as a finer filter to further reduce the list. In addition to using these three MCA methods, the study also examines two commonly used group decision-making techniques, the Delphi method and the nominal group technique. Feedback received from the participants indicates that the methods are transparent, easy to implement, and provide a convenient environment for participatory decision-making.
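As a hedged illustration of the pairwise comparison arithmetic (the participatory process and the actual C&I are, of course, far richer), priority weights can be derived from a Saaty-style comparison matrix, for example via the geometric mean of the rows, with a consistency check on the principal eigenvalue. The matrix below is invented.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical indicators (Saaty 1-9 scale):
# A[i, j] says how much more important indicator i is than indicator j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights from the geometric mean of each row (a common AHP approximation)
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check via the principal eigenvalue
lam_max = np.linalg.eigvals(A).real.max()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
print(weights, f"consistency index = {ci:.3f}")
```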
A Comparison of Two Methods of Determining Interrater Reliability
ERIC Educational Resources Information Center
Fleming, Judith A.; Taylor, Janeen McCracken; Carran, Deborah
2004-01-01
This article offers an alternative methodology for practitioners and researchers to use in establishing interrater reliability for testing purposes. The majority of studies on interrater reliability use a traditional methodology where by two raters are compared using a Pearson product-moment correlation. This traditional method of estimating…
A multi-phase instrument comparison study was conducted on two different diesel engines on a dynamometer to compare commonly used particulate matter (PM) measurement techniques while sampling the same diesel exhaust aerosol and to evaluate inter- and intra-method variability. In...
La Methode Experimentale en Pedagogie (The Experimental Method in Pedagogy)
ERIC Educational Resources Information Center
Rouquette, Michel-Louis
1975-01-01
The pedagogue is caught between the qualitative and quantitative or regularized aspects of his work, a situation not automatically conducive to scientific study. The article refreshes the instructor on the elementary principles of experimentation: observation, systematization, elaboration of hypotheses, and strategies of comparison. (Text is in…
COMPARISON OF SAMPLING METHODS FOR SEMI-VOLATILE ORGANIC CARBON (SVOC) ASSOCIATED WITH PM 2.5
This study evaluates the influence of denuder sampling methods and filter collection media on the measurement of semi-volatile organic carbon (SVOC) associated with PM2.5. Two types of collection media, charcoal (activated carbon) and XAD, were used both in diffusion denuders ...
COMPARISON OF SAMPLING METHODS FOR SEMI-VOLATILE ORGANIC CARBON ASSOCIATED WITH PM 2.5
This study evaluates the influence of denuder sampling methods and filter collection media on the measurement of semi-volatile organic carbon (SVOC) associated with PM2.5. Two types of collection media, charcoal (activated carbon) and XAD, were used both in diffusion denuders ...
Stored grain pack factors for wheat: comparison of three methods to field measurements
USDA-ARS?s Scientific Manuscript database
Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...
Measuring Cognitive Load: A Comparison of Self-Report and Physiological Methods
ERIC Educational Resources Information Center
Joseph, Stacey
2013-01-01
This study explored three methods to measure cognitive load in a learning environment using four logic puzzles that systematically varied in level of intrinsic cognitive load. Participants' perceived intrinsic load was simultaneously measured with a self-report measure-a traditional subjective measure-and two objective, physiological measures…
Endoparasites must breach host barriers to establish infection and then must survive host internal defenses to cause disease. Such barriers may frustrate attempts to experimentally transmit parasites by ?natural' methods. In addition, the host's condition may affect a study's out...
Comparisons and Analyses of Gifted Students' Characteristics and Learning Methods
ERIC Educational Resources Information Center
Lu, Jiamei; Li, Daqi; Stevens, Carla; Ye, Renmin
2017-01-01
Using PISA 2009, an international education database, this study compares gifted and talented (GT) students in three groups with normal (non-GT) students by examining student characteristics, reading, schooling, learning methods, and use of strategies for understanding and memorizing. Results indicate that the GT and non-GT gender distributions…
ERIC Educational Resources Information Center
Boyle, Cathy; McCann, John; Miyamoto, Sheridan; Rogers, Kristen
2008-01-01
Objective: To compare the effectiveness of three different examination methods in their ability to help the examiner detect both acute and non-acute genital injuries in prepubertal and pubertal girls suspected of having been sexually abused. Methods: Forty-six prepubertal and 74 pubertal girls, whose ages ranged from 4 months to 18 years, were…
ERIC Educational Resources Information Center
Sloan, Tina Rye; Vinson, Beth; Haynes, Jonita; Gresham, Regina
This study examined the effectiveness of a methods course in the reduction of mathematics anxiety levels among three groups of preservice teachers majoring in elementary education. The sample included 61 novices enrolled in a course entitled Mathematics for the Young Child. This methods course utilized concrete manipulatives and active learning…
A comparison of two methods for estimating conifer live foliar moisture content
W. Matt Jolly; Ann M. Hadlow
2012-01-01
Foliar moisture content is an important factor regulating how wildland fires ignite in and spread through live fuels but moisture content determination methods are rarely standardised between studies. One such difference lies between the uses of rapid moisture analysers or drying ovens. Both of these methods are commonly used in live fuel research but they have never...
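For reference, both rapid moisture analysers and drying ovens ultimately report live foliar moisture content, conventionally on a dry-weight basis; assuming that convention (the abstract does not state the formula), the quantity compared is

```latex
\mathrm{FMC}\,(\%) \;=\; \frac{m_{\mathrm{fresh}} - m_{\mathrm{dry}}}{m_{\mathrm{dry}}} \times 100
```

where m_fresh and m_dry are the sample masses before and after drying.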
A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data
ERIC Educational Resources Information Center
Liu, Jinghua; Low, Albert C.
2008-01-01
This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…
Terré, M; Castells, L; Fàbregas, F; Bach, A
2013-08-01
The objective of this study was to compare rumen samples from young dairy calves obtained via a stomach tube (ST) or a ruminal cannula (RC). Five male Holstein calves (46±4.0 kg of body weight and 11±4.9 d of age) were ruminally cannulated at 15 d of age. Calves received 4 L/d of a commercial milk replacer (25% crude protein and 19.2% fat) at 12.5% dry matter, and were provided concentrate and chopped oats hay ad libitum throughout the study (56 d). In total, 29 paired rumen samples were obtained weekly throughout the study in most of the calves by each extraction method. These samples were used to determine pH and volatile fatty acids (VFA) concentration, and to quantify Prevotella ruminicola and Streptococcus bovis by quantitative PCR. Furthermore, a denaturing gradient gel electrophoresis was performed on rumen samples harvested during wk 8 of the study to determine the degree of similarity between rumen bacteria communities. Rumen pH was 0.30 units greater in ST compared with RC samples. Furthermore, total VFA concentrations were greater in RC than in ST samples. However, when analyzing the proportion of each VFA by ANOVA, no differences were found between the sampling methods. The quantification of S. bovis and P. ruminicola was similar in both extraction methods, and values obtained using different methods were highly correlated (R(2)=0.89 and 0.98 for S. bovis and P. ruminicola, respectively). Fingerprinting analysis showed similar bacteria band profiles between samples obtained from the same calves using different extraction methods. In conclusion, when comparing rumen parameters obtained using different sampling techniques, it is recommended that VFA profiles be used rather than total VFA concentrations, as total VFA concentrations are more affected by the method of collection. Furthermore, although comparisons of pH across studies should be avoided when samples are not obtained using the same sampling method, the comparison of fingerprinting of a bacteria community or a specific rumen bacterium is valid. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Performance characteristics of the ARCHITECT Active-B12 (Holotranscobalamin) assay.
Merrigan, Stephen D; Owen, William E; Straseski, Joely A
2015-01-01
Vitamin B12 (cobalamin) is a necessary cofactor in methionine and succinyl-CoA metabolism. Studies estimate the deficiency prevalence as high as 30% in the elderly population. Ten to thirty percent of circulating cobalamin is bound to transcobalamin (holotranscobalamin, holoTC) which can readily enter cells and is therefore considered the bioactive form. The objective of our study was to evaluate the analytical performance of a high-throughput, automated holoTC assay (ARCHITECT i2000(SR) Active-B12 (Holotranscobalamin)) and compare it to other available methods. Manufacturer-specified limits of blank (LoB), detection (LoD), and quantitation (LoQ), imprecision, interference, and linearity were evaluated for the ARCHITECT HoloTC assay. Residual de-identified serum samples were used to compare the ARCHITECT HoloTC assay with the automated AxSYM Active-B12 (Holotranscobalamin) assay (Abbott Diagnostics) and the manual Active-B12 (Holotranscobalamin) Enzyme Immunoassay (EIA) (Axis-Shield Diagnostics, Dundee, Scotland, UK). Manufacturer's claims of LoB, LoD, LoQ, imprecision, interference, and linearity to the highest point tested (113.4 pmol/L) were verified for the ARCHITECT HoloTC assay. Method comparison of the ARCHITECT HoloTC to the AxSYM HoloTC produced the following Deming regression statistics: (ARCHITECT(HoloTc)) = 0.941 (AxSYM(HoloTC)) + 1.2 pmol/L, S(y/x) = 6.4, r = 0.947 (n = 98). Comparison to the Active-B12 EIA produced: (ARCHITECT(HoloTC)) = 1.105 (EIA(Active-B12)) - 6.8 pmol/L, S(y/x) = 11.0, r = 0.950 (n = 221). This assay performed acceptably for LoB, LoD, LoQ, imprecision, interference, linearity and method comparison to the predicate device (AxSYM). An additional comparison to a manual Active-B12 EIA method performed similarly, with minor exceptions. This study determined that the ARCHITECT HoloTC assay is suitable for routine clinical use, which provides a high-throughput alternative for automated testing of this emerging marker of cobalamin deficiency.
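Deming regression, used above for the method comparison, allows measurement error in both analysers rather than only in the y-variable. A minimal sketch with an assumed error-variance ratio of 1 and invented paired holoTC results is given below; it is not the manufacturer's or the authors' code.

```python
import numpy as np

def deming_regression(x, y, variance_ratio=1.0):
    """Deming regression slope and intercept; variance_ratio is the ratio of
    the y-method error variance to the x-method error variance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    sxx = ((x - xbar) ** 2).mean()
    syy = ((y - ybar) ** 2).mean()
    sxy = ((x - xbar) * (y - ybar)).mean()
    lam = variance_ratio
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, ybar - slope * xbar

# Invented paired holoTC results (pmol/L) from two analysers
axsym     = [20, 35, 48, 60, 75, 90, 110]
architect = [21, 34, 47, 58, 73, 86, 105]
print(deming_regression(axsym, architect))
```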
Ciepiela, Olga; Kotuła, Iwona; Kierat, Szymon; Sieczkowska, Sandra; Podsiadłowska, Anna; Jenczelewska, Anna; Księżarczyk, Karolina; Demkow, Urszula
2016-11-01
Modern automated laboratory hematology analyzers allow the measurement of over 30 different hematological parameters useful in the diagnostic and clinical interpretation of patient symptoms. They use different methods to measure the same parameters. Thus, a comparison of complete blood count made by Mindray BC-6800, Sysmex XN-2000 and Beckman Coulter LH750 was performed. A comparison of results obtained by automated analysis of 807 anticoagulated blood samples from children and 125 manual microscopic differentiations were performed. This comparative study included white blood cell count, red blood cell count, and erythrocyte indices, as well as platelet count. The present study showed a poor level of agreement between white blood cell enumeration and differentiation of the three automated hematology analyzers under comparison. A very good agreement was found when comparing manual blood smear and automated granulocytes, monocytes, and lymphocytes differentiation. Red blood cell evaluation showed better agreement than white blood cells between the studied analyzers. To conclude, studied instruments did not ensure satisfactory interchangeability and did not facilitate a substitution of one analyzer by another. © 2016 Wiley Periodicals, Inc.
Ultrasound – A new approach for non-woven scaffolds investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khramtsova, E. A.; Morokov, E. S.; Levin, V. M.
2016-05-18
In this study we verified the method of impulse acoustic microscopy as a tool for scaffold evaluation in tissue engineering investigations. A cellulose diacetate (CDA) non-woven 3D scaffold was used as a model object. Scanning electron microscopy and optical microscopy were used as reference methods in order to assess the feasibility of the acoustic microscopy method in the regenerative medicine field. A direct comparison of the different methods was carried out.
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region of a green orange fruit. The comparison of MHI with the routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image processing tool in biospeckle analysis.
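As a hedged sketch of the view-based idea (the paper's exact thresholding and parameters are not given in the abstract), a motion history image can be built by marking pixels that changed between consecutive frames with a maximal value and letting older activity decay, so persistently active biospeckle regions stay bright. The synthetic frame stack below is invented.

```python
import numpy as np

def motion_history_image(frames, diff_threshold=10.0, tau=None, delta=1.0):
    """Motion history image: recently changed pixels are set to tau,
    older activity fades by delta per frame."""
    frames = np.asarray(frames, dtype=float)
    tau = tau if tau is not None else len(frames) - 1
    mhi = np.zeros(frames.shape[1:])
    for t in range(1, len(frames)):
        moving = np.abs(frames[t] - frames[t - 1]) > diff_threshold
        mhi = np.where(moving, tau, np.maximum(mhi - delta, 0.0))
    return mhi

# Synthetic speckle-like stack: a fluctuating patch on a static background
rng = np.random.default_rng(0)
stack = np.full((30, 64, 64), 100.0)
stack[:, 20:40, 20:40] += rng.normal(0, 20, size=(30, 20, 20))
activity_map = motion_history_image(stack)
print(activity_map[30, 30], activity_map[5, 5])   # active patch vs static background
```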
Serwetnyk, Tara M; Filmore, Kristi; VonBacho, Stephanie; Cole, Robert; Miterko, Cindy; Smith, Caitlin; Smith, Charlene M
2015-01-01
Basic Life Support certification for nursing staff is achieved through various training methods. This study compared three American Heart Association training methods for nurses seeking Basic Life Support renewal: a traditional classroom approach and two online options. Findings indicate that online methods for Basic Life Support renewal deliver cost and time savings, while maintaining positive learning outcomes, satisfaction, and confidence level of participants.
ERIC Educational Resources Information Center
Cossu, Claude
1975-01-01
A group of French universities modified the NCHEMS accounting method for use in a study of its budget control procedures and cost-evaluation methods. The conceptual differences in French university education (as compared to American higher education) are keyed to the adjustments in the accounting method. French universities, rather than being…
Multiple comparisons permutation test for image based data mining in radiotherapy.
Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel
2013-12-23
Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
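A minimal sketch of a max-statistic permutation test of this kind is given below: a voxel-wise t statistic is computed for the observed outcome labels, the labels are permuted many times, and each voxel's adjusted p-value is the fraction of permutations whose maximum |t| exceeds the observed value. The data are synthetic and the statistic is a generic two-sample t, which may differ in detail from the Tmax used in the paper.

```python
import numpy as np

def tmax_permutation_test(images, labels, n_perm=2000, seed=0):
    """Voxel-wise two-sample t statistics with a max-statistic (Tmax)
    permutation correction for multiple comparisons."""
    rng = np.random.default_rng(seed)
    images = np.asarray(images, dtype=float)   # shape: (patients, voxels)
    labels = np.asarray(labels, dtype=bool)

    def voxelwise_t(lab):
        g1, g0 = images[lab], images[~lab]
        num = g1.mean(axis=0) - g0.mean(axis=0)
        den = np.sqrt(g1.var(axis=0, ddof=1) / len(g1) + g0.var(axis=0, ddof=1) / len(g0))
        return num / den

    t_obs = voxelwise_t(labels)
    t_max_null = np.array([np.abs(voxelwise_t(rng.permutation(labels))).max()
                           for _ in range(n_perm)])
    # adjusted p-value: fraction of permutations whose Tmax exceeds |t_obs| at that voxel
    p_adj = (t_max_null[None, :] >= np.abs(t_obs)[:, None]).mean(axis=1)
    return t_obs, p_adj

# Synthetic example: 20 patients, 50 "voxels", true signal in the first 5 voxels
rng = np.random.default_rng(1)
labels = np.arange(20) < 10
images = rng.normal(size=(20, 50))
images[labels, :5] += 1.5
t_obs, p_adj = tmax_permutation_test(images, labels)
print((p_adj < 0.05).sum(), "voxels significant after Tmax adjustment")
```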
Multiple comparisons in drug efficacy studies: scientific or marketing principles?
Leo, Jonathan
2004-01-01
When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the case of the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors only presented 6 months of data. The second case involves the NIMH Multimodal Treatment Study (MTA) study evaluating the efficacy of stimulant medication for attention-deficit hyperactivity disorder where ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.
Cordioli, M; Ranzi, A; Freni Sterrantino, A; Erspamer, L; Razzini, G; Ferrari, U; Gatti, M G; Bonora, K; Artioli, F; Goldoni, C A; Lauriola, P
2014-06-01
In epidemiological studies both questionnaire results and GIS modeling have been used to assess exposure to environmental risk factors. Nevertheless, few studies have used both these techniques to evaluate the degree of agreement between different exposure assessment methodologies. As part of a case-control study on lung cancer, we present a comparison between self-reported and GIS-derived proxies of residential exposure to environmental pollution. 649 subjects were asked to fill out a questionnaire and give information about residential history and perceived exposure. Using GIS, for each residence we evaluated land use patterns, proximity to major roads and exposure to industrial pollution. We then compared the GIS exposure-index values among groups created on the basis of questionnaire responses. Our results showed a relatively high agreement between the two methods. Although none of these methods is the "exposure gold standard", understanding similarities, weaknesses and strengths of each method is essential to strengthen epidemiological evidence. Copyright © 2014 Elsevier Ltd. All rights reserved.
Comparison of 3 in vivo methods for assessment of alcohol-based hand rubs.
Edmonds-Wilson, Sarah; Campbell, Esther; Fox, Kyle; Macinga, David
2015-05-01
Alcohol-based hand rubs (ABHRs) are the primary method of hand hygiene in health-care settings. ICPs increasingly are assessing ABHR product efficacy data as improved products and test methods are developed. As a result, ICPs need better tools and recommendations for how to assess and compare ABHRs. Two ABHRs (70% ethanol) were tested according to 3 in vivo methods approved by ASTM International: E1174, E2755, and E2784. Log10 reductions were measured after a single test product use and after 10 consecutive uses at an application volume of 2 mL. The test method used had a significant influence on ABHR efficacy; however, in this study the test product (gel or foam) did not significantly influence efficacy. In addition, for all test methods, log10 reductions obtained after a single application were not predictive of results after 10 applications. Choice of test method can significantly influence efficacy results. Therefore, when assessing antimicrobial efficacy data of hand hygiene products, ICPs should pay close attention to the test method used, and ensure that product comparisons are made head to head in the same study using the same test methodology. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Comparison of parameter-adapted segmentation methods for fluorescence micrographs.
Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas
2011-11-01
Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that are usable with little new parameterization and work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits the dotted surface of cells. Copyright © 2011 International Society for Advancement of Cytometry.
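For orientation, a minimal marker-controlled watershed segmentation of fluorescence nuclei in the spirit of the watershed variants compared above; the smoothing, thresholding, and peak parameters are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, gaussian
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(image):
    smoothed = gaussian(image, sigma=2)
    mask = smoothed > threshold_otsu(smoothed)       # foreground mask
    distance = ndi.distance_transform_edt(mask)      # distance to background
    # Seed markers at local maxima of the distance map, one label per peak
    peaks = peak_local_max(distance, min_distance=10, labels=ndi.label(mask)[0])
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood from the markers over the inverted distance map
    return watershed(-distance, markers, mask=mask)
```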
Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry
NASA Astrophysics Data System (ADS)
Osborne, B. P.; Osborne, V. J.; Kruger, M. L.
Modern ground-based survey methods involve detailed survey, which provides three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the earth's global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite remote sensing surveys utilise satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and the optical resolution of the satellite platform. This paper presents a case study which compares topographic surveying undertaken by traditional survey methods with satellite surveying of the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques, and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the spatial co-ordinates derived by each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for surveys in areas with remote and extreme environments, weather extremes, political unrest, and poor travel links, which are commonly associated with mining projects. Such areas frequently suffer from language barriers and poor onsite technical support and resources.
Steroid hormones in environmental matrices: extraction method comparison.
Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon
2017-11-09
The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choice in environmental sample analysis.
COMPARISON OF METHODS FOR MEASURING CONCENTRATIONS OF SEMIVOLATILE PARTICULATE MATTER
The paper gives results of a comparison of methods for measuring concentrations of semivolatile particulate matter (PM) from indoor-environment, small, combustion sources. Particle concentration measurements were compared for methods using filters and a small electrostatic precip...
Coban, Huseyin Oguz; Koc, Ayhan; Eker, Mehmet
2010-01-01
Previous studies have successfully detected changes in gently sloping forested areas with low-diversity, homogeneous vegetation cover using medium-resolution satellite data such as Landsat. The aim of the present study is to examine the capacity of multi-temporal Landsat data to identify changes in forested areas with mixed vegetation that are generally located on steep slopes or non-uniform topography. Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) data for the years 1987-2000 were used to detect changes within a 19,500 ha forested area in the Western Black Sea region of Turkey. The data comply with the forest cover type maps previously created for forest management plans of the research area. The methods used to detect changes were post-classification comparison, image differencing, image ratioing, and NDVI (Normalized Difference Vegetation Index) differencing. Following the supervised classification process, error matrices were used to evaluate the accuracy of the classified images. The overall accuracy was calculated as 87.59% for the 1987 image and 91.81% for the 2000 image. General kappa statistics were 0.8543 and 0.9038 for 1987 and 2000, respectively. The changes identified via the post-classification comparison method were compared with the other change detection methods. Maximum coherence was found to be 74.95% at the 4/3 band ratio. The NDVI difference and 3rd band difference methods achieved the same coherence with slight variations. The results suggest that Landsat satellite data accurately convey the temporal changes occurring in steeply sloping forested areas with a mixed structure, providing a limited amount of detail but with a high level of accuracy. Moreover, the post-classification comparison method can meet the needs of forestry activities better than the other methods, as it provides information about the direction of these changes.
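A minimal sketch of the NDVI differencing step used as one of the change detection methods above, assuming red and near-infrared band arrays for the two dates and an illustrative change threshold:

```python
import numpy as np

def ndvi(red, nir):
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

def ndvi_change(red_1987, nir_1987, red_2000, nir_2000, threshold=0.2):
    diff = ndvi(red_2000, nir_2000) - ndvi(red_1987, nir_1987)
    # Pixels whose NDVI dropped markedly (possible forest loss) or rose (regrowth)
    loss = diff < -threshold
    gain = diff > threshold
    return diff, loss, gain
```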
A Stellar Dynamical Black Hole Mass for the Reverberation Mapped AGN NGC 5273
NASA Astrophysics Data System (ADS)
Batiste, Merida; Bentz, Misty C.; Valluri, Monica; Onken, Christopher A.
2018-01-01
We present preliminary results from stellar dynamical modeling of the mass of the central super-massive black hole (MBH) in the active galaxy NGC 5273. NGC 5273 is one of the few AGN with a secure MBH measurement from reverberation-mapping that is also nearby enough to measure MBH with stellar dynamical modeling. Dynamical modeling and reverberation-mapping are the two most heavily favored methods of direct MBH determination in the literature; however, the specific limitations of each method mean that there are very few galaxies for which both can be used. To date only two such galaxies, NGC 3227 and NGC 4151, have MBH determinations from both methods. Given this small sample size, it is not yet clear that the two methods give consistent results. Moreover, given the inherent uncertainties and potential systematic biases in each method, it is likewise unclear whether one method should be preferred over the other. This study is part of an ongoing project to increase the sample of galaxies with secure MBH measurements from both methods, so that a direct comparison may be made. NGC 5273 provides a particularly valuable comparison because it is free of kinematic substructure (e.g. the presence of a bar, as is the case for NGC 4151) which can complicate and potentially bias results from stellar dynamical modeling. I will discuss our current results as well as the advantages and limitations of each method, and the potential sources of systematic bias that may affect comparison between results.
Detecting sulphate aerosol geoengineering with different methods
Lo, Y. T. Eunice; Charlton-Perez, Andrew J.; Lott, Fraser C.; ...
2016-12-15
Sulphate aerosol injection has been widely discussed as a possible way to engineer future climate. Monitoring it would require detecting its effects amidst internal variability and in the presence of other external forcings. Here, we investigate how the use of different detection methods and filtering techniques affects the detectability of sulphate aerosol geoengineering in annual-mean global-mean near-surface air temperature. This is done by assuming a future scenario that injects 5 Tg yr-1 of sulphur dioxide into the stratosphere and cross-comparing simulations from 5 climate models. 64% of the studied comparisons would require 25 years or more for detection when no filter and the multi-variate method that has been extensively used for attributing climate change are used, while 66% of the same comparisons would require fewer than 10 years for detection using a trend-based filter. This then highlights the high sensitivity of sulphate aerosol geoengineering detectability to the choice of filter. With the same trend-based filter but a non-stationary method, 80% of the comparisons would require fewer than 10 years for detection. This does not imply sulphate aerosol geoengineering should be deployed, but suggests that both detection methods could be used for monitoring geoengineering in global, annual mean temperature should it be needed.
Hassett, Brenna R
2014-03-01
Linear enamel hypoplasia (LEH), the presence of linear defects of dental enamel formed during periods of growth disruption, is frequently analyzed in physical anthropology as evidence for childhood health in the past. However, a wide variety of methods for identifying and interpreting these defects in archaeological remains exists, preventing easy cross-comparison of results from disparate studies. This article compares a standard approach to identifying LEH using the naked eye to the evidence of growth disruption observed microscopically from the enamel surface. This comparison demonstrates that what is interpreted as evidence of growth disruption microscopically is not uniformly identified with the naked eye, and provides a reference for the level of consistency between the number and timing of defects identified using microscopic versus macroscopic approaches. This is done for different tooth types using a large sample of unworn permanent teeth drawn from several post-medieval London burial assemblages. The resulting schematic diagrams showing where macroscopic methods achieve more or less similar results to microscopic methods are presented here and clearly demonstrate that "naked-eye" methods of identifying growth disruptions do not identify LEH as often as microscopic methods in areas where perikymata are more densely packed. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Duncan, Elizabeth C.; Reddick, Wilburn E.; Glass, John O.; Hyun, Jung Won; Ji, Qing; Li, Yimei; Gajjar, Amar
2016-03-01
We applied a modified probabilistic fiber-tracking method for the extraction of fiber pathways to quantify decreased white matter integrity as a surrogate of structural loss in connectivity due to cranial radiation therapy (CRT) as treatment for pediatric medulloblastoma. Thirty subjects were examined (n=8 average-risk, n=22 high-risk) and the groups did not differ significantly in age at examination. The pathway analysis created a structural connectome focused on sub-networks within the central executive network (CEN) for comparison between baseline and post-CRT scans and for comparison between standard and high dose CRT. A pairwise comparison of the connectivity between baseline and post-CRT scans showed the irradiation did have a significant detrimental impact on white matter integrity (decreased fractional anisotropy (FA) and decreased axial diffusivity (AX)) in most of the CEN sub-networks. Group comparisons of the change in the connectivity revealed that patients receiving high dose CRT experienced significant AX decreases in all sub-networks while the patients receiving standard dose CRT had relatively stable AX measures across time. This study on pediatric patients with medulloblastoma demonstrated the utility of this method to identify specific sub-networks within the developing brain affected by CRT.
Comparison of Coarse-Grained Approaches in Predicting Polymer Nanocomposite Phase Behavior
Koski, Jason P.; Ferrier, Robert C.; Krook, Nadia M.; ...
2017-11-02
Because of the considerable parameter space, efficient theoretical and simulation methods are required to predict the morphology and guide experiments in polymer nanocomposites (PNCs). Unfortunately, theoretical and simulation methods are restricted in their ability to accurately map to experiments based on necessary approximations and numerical limitations. In this study, we provide direct comparisons of two recently developed coarse-grained approaches for modeling polymer nanocomposites (PNCs): polymer nanocomposite field theory (PNC-FT) and dynamic mean-field theory (DMFT). These methods are uniquely suited to efficiently capture mesoscale phase behavior of PNCs in comparison to other theoretical and simulation frameworks. We demonstrate the ability of both methods to capture macrophase separation and describe the thermodynamics of PNCs. We systematically test how the nanoparticle morphology in PNCs is affected by a uniform probability distribution of grafting sites, common in field-based methods, versus random discrete grafting sites on the nanoparticle surface. We also analyze the accuracy of the mean-field approximation in capturing the phase behavior of PNCs. Moreover, the DMFT method introduces the ability to describe nonequilibrium phase behavior while the PNC-FT method is strictly an equilibrium method. With the DMFT method we are able to show the evolution of nonequilibrium states toward their equilibrium state and to provide a qualitative assessment of the dynamics in these systems. These simulations are compared to experiments consisting of polystyrene grafted gold nanorods in a poly(methyl methacrylate) matrix to ensure the model gives results that qualitatively agree with the experiments. This study reveals that nanoparticles in a relatively high-molecular-weight matrix are trapped in a nonequilibrium state and demonstrates the utility of the DMFT framework in capturing nonequilibrium phase behavior of PNCs. In conclusion, both the PNC-FT and DMFT frameworks are promising methods to describe the thermodynamic and nonequilibrium phase behavior of PNCs.
2015-06-19
Predictive modeling is evaluated as an effective and scientifically valid method of making comparisons of clothing and equipment changes prior to conducting human research. Three different body armor (BA) plus clothing ensembles were
Measurement of health outcomes.
Thavorncharoensap, Montarat
2014-05-01
Health outcomes are one of the most important components of health technology assessments (HTAs). All HTA outcomes should be measured from a relevant sample using a properly designed study and method. A number of recommendations on health outcome measurements are made in this second edition of Thailand's HTA guidelines. In particular the use of final outcomes, rather than surrogate outcomes, in HTAs is stressed. Where surrogate outcomes are used, strong justification and evidence must be provided. Effectiveness is preferred over efficacy. The relative treatment effect (the difference between health outcome that would be experienced by patients receiving the technology and that experienced by the same group were they to receive an alternative technology) should be derived from a systematic review of head-to-head RCTs. Mixed treatment comparison (MTC) should be used only to provide supplementary data that cannot be obtained from a head-to-head comparison. Where no direct comparison evidence exists, indirect comparison and observational study data can be used.
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point by point state comparison. The technique used is that of approximating a continuous data system by a sampled data model by comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one axis dynamics of the Skylab is presented.
NASA Astrophysics Data System (ADS)
Moraczewski, Krzysztof; Rytlewski, Piotr; Malinowski, Rafał; Żenkiewicz, Marian
2015-08-01
The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in the air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm2 was applied. Polylactide samples were examined by using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations, it was found that the best modification results are obtained using the plasma method.
Sung, Yao-Ting; Wu, Jeng-Shin
2018-04-17
Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the function of VASs to overcome the limitations of response styles to Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, response style bias, and parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.
Secure multiparty computation of a comparison problem.
Liu, Xin; Li, Shundong; Liu, Jian; Chen, Xiubo; Xu, Gang
2016-01-01
Private comparison is fundamental to secure multiparty computation. In this study, we propose novel protocols to privately determine [Formula: see text], or [Formula: see text] in one execution. First, a 0-1-vector encoding method is introduced to encode a number into a vector, and the Goldwasser-Micali encryption scheme is used to compare integers privately. Then, we propose a protocol that uses a geometric method to compare rational numbers privately, and the protocol is information-theoretically secure. Using the simulation paradigm, we prove the privacy-preserving property of our protocols in the semi-honest model. The complexity analysis shows that our protocols are more efficient than previous solutions.
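A plaintext sketch of the 0-1 vector encoding idea only: x is encoded as a vector whose first x entries are 1, so testing y <= x reduces to reading the y-th entry. The actual protocol evaluates this lookup on Goldwasser-Micali ciphertexts; that encryption layer is omitted here, and the bound N is an illustrative assumption:

```python
def encode(x, N):
    # First x entries are 1, the remaining N - x entries are 0
    return [1 if i <= x else 0 for i in range(1, N + 1)]

def is_less_or_equal(y, encoded_x):
    # In the protocol this lookup is evaluated on ciphertexts, not plaintext
    return encoded_x[y - 1] == 1

v = encode(7, N=20)
print(is_less_or_equal(5, v))   # True: 5 <= 7
print(is_less_or_equal(9, v))   # False
```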
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, M.P.
This paper provides comments on a companion journal paper on predictive modeling of hydraulic fracturing patterns (N.R. Warpinski et al., 1994). The companion paper was designed to compare various modeling methods to demonstrate the most accurate methods under various geologic constraints. The comments of this paper center on potential deficiencies in that paper, including the limited number of actual comparisons offered between models, the lacking or undocumented matching of predictive data with data from related field operations, and the relevance and impact of accurate modeling on the overall hydraulic fracturing cost and production.
Applications of remote sensing, volume 3
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Of the four change detection techniques (post classification comparison, delta data, spectral/temporal, and layered spectral temporal), the post classification comparison was selected for further development. This was based upon the test performances of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified, supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques to allow for meaningful comparisons and evaluations of the classifications.
Comparison of the abundance and composition of litter fauna in tropical and subalpine forests
G. Gonzalez; T.R. Seastedt
2000-01-01
In this study, we quantify the abundance and composition of the litter fauna in dry and wet tropical forests and north- and south-facing subalpine forests. We used the same litter species contained in litterbags across study sites to standardize for substrate conditions, and a single method of fauna extraction from the litter (Tullgren method). Fauna densities were...
ERIC Educational Resources Information Center
Basak, Tulay; Yildiz, Dilek
2014-01-01
Objective: The aim of this study was to compare the effectiveness of cooperative learning and traditional learning methods on the development of drug-calculation skills. Design: Final-year nursing students ("n" = 85) undergoing internships during the 2010-2011 academic year at a nursing school constituted the study group of this…
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies became a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to a great variability in results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Real RNA-Seq data sets analyses, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data sets analyses give similar results showing 3 different groups of procedures having the same behavior. The group including the novel method named “Median Ratio Normalization” (MRN) gives the lower number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with intrinsic bias resulting from relative size of studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
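A sketch of the general median-of-ratios idea on which this family of normalizations (including the proposed MRN) builds; it is not the exact MRN formula from the paper, and counts is assumed to be a genes-by-samples matrix of raw read counts:

```python
import numpy as np

def median_of_ratios_factors(counts):
    counts = np.asarray(counts, dtype=float)
    # Pseudo-reference: geometric mean of each gene across samples,
    # restricted to genes with nonzero counts in every sample
    expressed = np.all(counts > 0, axis=1)
    log_ref = np.mean(np.log(counts[expressed]), axis=1)
    # Size factor of a sample = median ratio of its counts to the reference
    log_ratios = np.log(counts[expressed]) - log_ref[:, None]
    return np.exp(np.median(log_ratios, axis=0))

def normalize(counts):
    factors = median_of_ratios_factors(counts)
    return np.asarray(counts, dtype=float) / factors[None, :]
```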
Dahabreh, Issa J.; Sheldrick, Radley C.; Paulus, Jessica K.; Chung, Mei; Varvarigou, Vasileia; Jafri, Haseeb; Rassen, Jeremy A.; Trikalinos, Thomas A.; Kitsios, Georgios D.
2012-01-01
Aims Randomized controlled trials (RCTs) are the gold standard for assessing the efficacy of therapeutic interventions because randomization protects from biases inherent in observational studies. Propensity score (PS) methods, proposed as a potential solution to confounding of the treatment–outcome association, are widely used in observational studies of therapeutic interventions for acute coronary syndromes (ACS). We aimed to systematically assess agreement between observational studies using PS methods and RCTs on therapeutic interventions for ACS. Methods and results We searched for observational studies of interventions for ACS that used PS methods to estimate treatment effects on short- or long-term mortality. Using a standardized algorithm, we matched observational studies to RCTs based on patients’ characteristics, interventions, and outcomes (‘topics’), and we compared estimates of treatment effect between the two designs. When multiple observational studies or RCTs were identified for the same topic, we performed a meta-analysis and used the summary relative risk for comparisons. We matched 21 observational studies investigating 17 distinct clinical topics to 63 RCTs (median = 3 RCTs per observational study) for short-term (7 topics) and long-term (10 topics) mortality. Estimates from PS analyses differed statistically significantly from randomized evidence in two instances; however, observational studies reported more extreme beneficial treatment effects compared with RCTs in 13 of 17 instances (P = 0.049). Sensitivity analyses limited to large RCTs, and using alternative meta-analysis models yielded similar results. Conclusion For the treatment of ACS, observational studies using PS methods produce treatment effect estimates that are of more extreme magnitude compared with those from RCTs, although the differences are rarely statistically significant. PMID:22711757
Methodological integrative review of the work sampling technique used in nursing workload research.
Blay, Nicole; Duffield, Christine M; Gallagher, Robyn; Roche, Michael
2014-11-01
To critically review the work sampling technique used in nursing workload research. Work sampling is a technique frequently used by researchers and managers to explore and measure nursing activities. However, work sampling methods used are diverse making comparisons of results between studies difficult. Methodological integrative review. Four electronic databases were systematically searched for peer-reviewed articles published between 2002-2012. Manual scanning of reference lists and Rich Site Summary feeds from contemporary nursing journals were other sources of data. Articles published in the English language between 2002-2012 reporting on research which used work sampling to examine nursing workload. Eighteen articles were reviewed. The review identified that the work sampling technique lacks a standardized approach, which may have an impact on the sharing or comparison of results. Specific areas needing a shared understanding included the training of observers and subjects who self-report, standardization of the techniques used to assess observer inter-rater reliability, sampling methods and reporting of outcomes. Work sampling is a technique that can be used to explore the many facets of nursing work. Standardized reporting measures would enable greater comparison between studies and contribute to knowledge more effectively. Author suggestions for the reporting of results may act as guidelines for researchers considering work sampling as a research method. © 2014 John Wiley & Sons Ltd.
Lundh, Anna; Kowalski, Jan; Sundberg, Carl Johan; Landén, Mikael
2012-11-01
The aim of this study was to compare two methods to conduct CGAS rater training. A total of 648 raters were randomized to training (CD or seminar), and rated five cases before and 12 months after training. The ICC at baseline/end of study was 0.71/0.78 (seminar), 0.76/0.78 (CD), and 0.67/0.79 (comparison). There were no differences in training effect in terms of agreement with expert ratings, which speaks in favor of using the less resource-demanding CD. However, the effect was modest in both groups, and the untrained comparison group improved by the same order of magnitude, which suggests that more extensive training is needed.
Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.
Hao, Tianyong; Liu, Hongfang; Weng, Chunhua
2016-05-17
To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.
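A much-simplified sketch of the numeric comparison extraction step (step 2 above), assuming a small set of variable names, operators, and units; the full Valx pipeline adds synonym resolution, context-based filtering, and unit normalization:

```python
import re

# Illustrative pattern: variable, comparison operator, numeric value, optional unit
PATTERN = re.compile(
    r"(?P<var>HbA1c|glucose)\s*"
    r"(?P<op><=|>=|<|>|=)\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mmol/L|mg/dL)?",
    re.IGNORECASE,
)

def extract_comparisons(text):
    return [
        (m.group("var"), m.group("op"), float(m.group("value")), m.group("unit"))
        for m in PATTERN.finditer(text)
    ]

print(extract_comparisons("Inclusion: HbA1c >= 7.5% and fasting glucose < 7.0 mmol/L"))
# [('HbA1c', '>=', 7.5, '%'), ('glucose', '<', 7.0, 'mmol/L')]
```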
Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007
Bennion, David
2009-01-01
To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with various survey methods and interpolation processes limit the statistically certain results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.
Warren, R B; Brnabic, A; Saure, D; Langley, R G; See, K; Wu, J J; Schacht, A; Mallbris, L; Nast, A
2018-05-01
Head-to-head randomized studies comparing ixekizumab and secukinumab in the treatment of psoriasis are not available. To assess efficacy and quality of life using matching-adjusted indirect comparisons for treatment with ixekizumab vs. secukinumab. Psoriasis Area and Severity Index (PASI) improvement of at least 75%, 90% and 100% and Dermatology Life Quality Index (DLQI) 0/1 response rates for approved dosages of ixekizumab (160 mg at Week 0, then 80 mg every two weeks for the first 12 weeks) and secukinumab (300 mg at Weeks 0, 1, 2, 3 and 4, then 300 mg every 4 weeks) treatment were compared using data from active (etanercept and ustekinumab) and placebo-controlled studies. Comparisons were made using the Bucher (BU) method and two modified versions of the Signorovitch (SG) method (SG total and SG separate). Subsequently, results based on active treatment common comparators were combined using generic inverse-variance meta-analysis. In the meta-analysis of studies with active comparators, PASI 90 response rates were 12·7% [95% confidence interval (CI) 5·5-19·8, P = 0·0005], 10·0% (95% CI 2·1-18·0, P = 0·01) and 11·2% (95% CI 3·2-19·1, P = 0·006) higher and PASI 100 response rates were 11·7% (95% CI 5·9-17·5, P < 0·001), 12·7% (95% CI 6·0-19·4, P < 0·001) and 13·1% (95% CI 6·3-19·9, P < 0·001) higher for ixekizumab compared with secukinumab using BU, SG total and SG separate methods. PASI 75 results were comparable when SG methods were used and favoured ixekizumab when the BU method was used. Week 12 DLQI 0/1 response rates did not differ significantly. Ixekizumab had higher PASI 90 and PASI 100 responses at week 12 compared with secukinumab using adjusted indirect comparisons. © 2017 The Authors. British Journal of Dermatology published by John Wiley & Sons Ltd on behalf of British Association of Dermatologists.
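A minimal sketch of the Bucher adjusted indirect comparison used above: with trial effects of each drug versus a common comparator on the log scale, the indirect effect is their difference and the variances add. The numbers below are placeholders, not the study's data:

```python
import math

def bucher_indirect(log_effect_ac, se_ac, log_effect_bc, se_bc, z=1.96):
    d = log_effect_ac - log_effect_bc            # indirect A-vs-B effect
    se = math.sqrt(se_ac**2 + se_bc**2)          # variances of the two comparisons add
    return d, (d - z * se, d + z * se)

# Placeholder log effects of drug A vs comparator C and drug B vs comparator C
d, ci = bucher_indirect(log_effect_ac=0.9, se_ac=0.15,
                        log_effect_bc=0.6, se_bc=0.20)
print(math.exp(d), tuple(math.exp(x) for x in ci))  # back-transform to a ratio scale
```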
Flores, David I; Sotelo-Mundo, Rogerio R; Brizuela, Carlos A
2014-01-01
The automatic identification of catalytic residues remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genome initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are shown as proof-of-concept instances, as well as for a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side chain atom in the PDB file.
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNPs) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
A three-term conjugate gradient method under the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa
2017-08-01
Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method, named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL), is proposed for unconstrained optimization; it always satisfies the sufficient descent condition. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
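A generic sketch of a three-term conjugate gradient loop under a strong Wolfe line search, to illustrate the structure of methods such as TTRMIL; the beta and third-term coefficients below are illustrative placeholders, not the TTRMIL formulas:

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # scipy's line_search enforces the strong Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                      # not a descent direction: restart
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0] or 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = max(0.0, g_new @ y / (g @ g))   # placeholder beta (Polak-Ribiere-like)
        theta = g_new @ d / (g @ g)            # placeholder third-term weight
        d = -g_new + beta * d - theta * y      # three-term search direction
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function
print(three_term_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```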
Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit
2017-03-01
Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This is a prospective study on the radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single spine tertiary center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001 and p=.001, respectively). There was no significant difference between the two methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring the angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.
Stem Cell Niche is Partially Lost during Follicular Plucking: A Preliminary Pilot Study
Kumar, Anil; Gupta, Somesh; Mohanty, Sujata; Bhargava, Balram; Airan, Balram
2013-01-01
Background: Clinical hair transplant studies have revealed that follicular unit extraction (FUE) is superior in terms of stable hair growth in comparison to follicular plucking (FP). Various reasons have been cited for this clinical outcome. FUE and FP are employed to obtain hair follicle units for hair transplant and, recently, for cell-based therapies in vitiligo. However, no scientific data are available comparing the stem cell fraction in the cell suspensions obtained by FUE and FP. Therefore, we undertook this study to compare the percentage of stem cells in hair follicles obtained by FUE and FP. Objective: The purpose of the present study is to evaluate the quantitative stem cell pool in hair follicles obtained by FUE and FP. Materials and Methods: A total of 3 human subjects aged 17-25 years were enrolled. Both methods of tissue harvest, FUE and FP, were employed on each subject. There was no vitiligo lesion on the scalp in any of the patients. Hair follicles were incubated with trypsin-EDTA solution at 37°C for 90 min to separate outer root sheath cells. The cell suspension was passed through a 70 μm cell strainer, and the filtrate was centrifuged to obtain the cell pellet. Cells were labeled with a cluster of differentiation (CD200) antibody and acquired with flow cytometry. Results: The mean percentages of CD200-positive cells obtained with the FUE and FP methods were 8.43 and 1.63, respectively (P = 0.0152). Conclusion: FUE is a better method of hair follicle harvesting for cell-based applications, as the stem cell fraction is significantly higher in comparison to FP. PMID:24403776
NASA Astrophysics Data System (ADS)
Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah
2017-08-01
The Analytic Hierarchy Process (AHP) is a method used for structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure which utilizes a preference graph ς(V, A). Two nodes are adjacent if they are compared in a pairwise comparison, and the assigned arc is oriented towards the more preferred node. In this paper, the Potential Method is used to solve a catering service selection problem. The result obtained using the Potential Method is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
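For context, a sketch of the classical AHP principal-eigenvector ranking computed from a reciprocal pairwise comparison matrix, the kind of input both Extent Analysis and the Potential Method work from; it is not an implementation of the Potential Method itself, and the matrix below is an illustrative example:

```python
import numpy as np

def ahp_weights(A):
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()                       # normalized priority weights

# Pairwise comparisons of three hypothetical catering services (Saaty 1-9 scale)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
print(w, np.argsort(-w))   # priority weights and ranking, best alternative first
```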
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-07-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
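A sketch of the bulk Richardson number PBL-depth estimate from a single profile, assuming a critical value of 0.25 and neglecting a surface wind contribution; both are common but illustrative choices:

```python
import numpy as np

G = 9.81  # m s^-2

def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """z: heights above ground (m); theta_v: virtual potential temperature (K);
    u, v: wind components (m s^-1). Arrays ordered from the surface upward."""
    z, theta_v, u, v = map(np.asarray, (z, theta_v, u, v))
    wind_sq = np.maximum(u**2 + v**2, 1e-6)          # avoid division by zero
    ri = G * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * wind_sq)
    above = np.where(ri > ri_crit)[0]
    if len(above) == 0:
        return z[-1]                                 # no crossing found in profile
    k = above[0]
    if k == 0:
        return z[0]
    # Interpolate to the height where Ri crosses the critical value
    frac = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])
```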
2013-04-02
This study compared a direct-reading, non-specific, rapid photometric particle counting instrument (DustTrak, TSI) to the established OSHA-modified NIOSH P&CAM 304 method to determine the correlation between the two.
NASA Technical Reports Server (NTRS)
Schmid, Beat; Michalsky, J.; Slater, D.; Barnard, J.; Halthore, R.; Liljegren, J.; Holben, B.; Eck, T.; Livingston, J.; Russell, P.;
2000-01-01
In the fall of 1997 the Atmospheric Radiation Measurement (ARM) program conducted an Intensive Observation Period (IOP) to study water vapor at its Southern Great Plains (SGP) site. Among the large number of instruments, four sun-tracking radiometers were present to measure the columnar water vapor (CWV). All four solar radiometers retrieve CWV by measuring solar transmittance in the 0.94-micrometer water vapor absorption band. As one of the steps in the CWV retrievals, the aerosol component is subtracted from the total transmittance in the 0.94-micrometer band. The aerosol optical depth comparisons among the same four radiometers are presented elsewhere. We have used three different methods to retrieve CWV. Without attempting to standardize on the same radiative transfer model and its underlying water vapor spectroscopy, we found the CWV to agree within 0.13 cm (rms) for CWV values ranging from 1 to 5 cm. Preliminary results obtained when using the same updated radiative transfer model with updated spectroscopy for all instruments will also be shown. Comparisons to the microwave radiometer results will also be included.
Truancy Interventions: A Review of the Research Literature
ERIC Educational Resources Information Center
Sutphen, Richard D.; Ford, Janet P.; Flaherty, Chris
2010-01-01
Objectives: This article presents a systematic review of the literature on evaluative studies of truancy interventions. Method: Studies evaluating truancy interventions that appeared in peer-reviewed academic journals from 1990 to 2007 were included. Findings: In total, 16 studies were assessed. Eight studies used group comparison designs and eight studies…
NASA Astrophysics Data System (ADS)
Chitaru, George; Berville, Charles; Dogeanu, Angel
2018-02-01
This paper presents a comparison between a displacement ventilation method and a mixed flow ventilation method using computational fluid dynamics (CFD) approach. The paper analyses different aspects of the two systems, like the draft effect in certain areas, the air temperature and velocity distribution in the occupied zone. The results highlighted that the displacement ventilation system presents an advantage for the current scenario, due to the increased buoyancy driven flows caused by the interior heat sources. For the displacement ventilation case the draft effect was less prone to appear in the occupied zone but the high heat emissions from the interior sources have increased the temperature gradient in the occupied zone. Both systems have been studied in similar conditions, concentrating only on the flow patterns for each case.
Effect of SiC buffer layer on GaN growth on Si via PA-MBE
NASA Astrophysics Data System (ADS)
Kukushkin, S. A.; Mizerov, A. M.; Osipov, A. V.; Redkov, A. V.; Telyatnik, R. S.; Timoshnev, S. N.
2017-11-01
The study is devoted to a comparison of GaN thin films grown on SiC/Si substrates fabricated by the method of atom substitution with films grown directly on Si substrates. The growth was performed in a single process via plasma-assisted molecular beam epitaxy. The samples were studied via optical microscopy, Raman spectroscopy, and ellipsometry, and a comparison of their characteristics was made. Using chemical etching in KOH, the polarity of the GaN films grown on SiC/Si and Si substrates was determined.
A unified in vitro evaluation for apatite-forming ability of bioactive glasses and their variants.
Maçon, Anthony L B; Kim, Taek B; Valliant, Esther M; Goetschius, Kathryn; Brow, Richard K; Day, Delbert E; Hoppe, Alexander; Boccaccini, Aldo R; Kim, Ill Yong; Ohtsuki, Chikara; Kokubo, Tadashi; Osaka, Akiyoshi; Vallet-Regí, Maria; Arcos, Daniel; Fraile, Leandro; Salinas, Antonio J; Teixeira, Alexandra V; Vueva, Yuliya; Almeida, Rui M; Miola, Marta; Vitale-Brovarone, Chiara; Verné, Enrica; Höland, Wolfram; Jones, Julian R
2015-02-01
The aim of this study was to propose and validate a new unified method for testing dissolution rates of bioactive glasses and their variants, and the formation of a calcium phosphate layer on their surface, which is an indicator of bioactivity. At present, comparison in the literature is difficult as many groups use different testing protocols. An ISO standard covers the use of simulated body fluid on standard shape materials, but it does not take into account that bioactive glasses can have very different specific surface areas, as for glass powders. Validation of the proposed modified test was through round robin testing and comparison to the ISO standard where appropriate. The proposed test uses a fixed mass per solution volume ratio and an agitated solution. The round robin study showed differences in hydroxyapatite nucleation on glasses of different composition and between glasses of the same composition but different particle size. The results were reproducible between research facilities. Researchers should use this method when testing new glasses, or their variants, to enable comparison across the literature in the future.
Evaluation of variability in high-resolution protein structures by global distance scoring.
Anzai, Risa; Asami, Yoshiki; Inoue, Waka; Ueno, Hina; Yamada, Koya; Okada, Tetsuji
2018-01-01
Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most of the current structural comparisons are pairwise-based, which hampers the global analysis of the increasing content of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently accompanies other settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we have tested 300 human proteins and showed that the method can be used comprehensively to overview advances in each protein and protein family at the atomic level. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.
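A sketch of a superposition-free, global comparison via intramolecular distances: each structure is reduced to its C-alpha distance matrix and variability is scored from the spread of each pairwise distance. The specific score (mean standard deviation of distances) is an illustrative choice, not necessarily the paper's scoring function:

```python
import numpy as np

def distance_matrix(coords):
    """coords: (n_residues, 3) C-alpha coordinates of one structure."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff**2).sum(-1))

def variability_score(structures):
    """structures: list of (n_residues, 3) arrays for the same protein."""
    mats = np.stack([distance_matrix(np.asarray(c)) for c in structures])
    per_pair_sd = mats.std(axis=0)        # spread of each residue-residue distance
    iu = np.triu_indices(per_pair_sd.shape[0], k=1)
    return per_pair_sd[iu].mean()         # single global variability value
```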
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost benefit, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, the steady-state availability, and the cost-benefit analysis were derived on the basis of this comparison. A ranking method was used to determine the optimal configuration of the systems. Analytical and numerical solutions for system availability and mean time to system failure were obtained, and configuration I was found to be the optimal configuration.
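A minimal sketch of the Chapman-Kolmogorov (Markov) balance-equation approach for a two-unit parallel configuration: build the generator matrix over the number of failed units and solve for the steady-state probabilities. The failure rate lam and repair rate mu are placeholders, not values from the study:

```python
import numpy as np

def parallel_two_unit_availability(lam, mu):
    # Generator matrix Q for states {0 failed, 1 failed, 2 failed},
    # assuming one repair crew; each row sums to zero
    Q = np.array([
        [-2 * lam,      2 * lam,  0.0],
        [      mu, -(mu + lam),   lam],
        [     0.0,           mu,  -mu],
    ])
    # Solve pi @ Q = 0 with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[:2].sum()          # available whenever at least one unit works

print(parallel_two_unit_availability(lam=0.01, mu=0.5))
```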
NASA Astrophysics Data System (ADS)
Oliver, Karen D.; Cousett, Tamira A.; Whitaker, Donald A.; Smith, Luther A.; Mukerjee, Shaibal; Stallings, Casson; Thoma, Eben D.; Alston, Lillian; Colon, Maribel; Wu, Tai; Henkle, Stacy
2017-08-01
A sample integrity evaluation and an interlaboratory comparison were conducted in the application of U.S. Environmental Protection Agency (EPA) Methods 325A and 325B for diffusively monitoring benzene and other selected volatile organic compounds (VOCs) using Carbopack X sorbent tubes. To evaluate sample integrity, VOC samples were refrigerated for up to 240 days and analyzed using thermal desorption/gas chromatography-mass spectrometry at the EPA Office of Research and Development laboratory in Research Triangle Park, NC, USA. For the interlaboratory comparison, three commercial analytical laboratories were asked to follow Method 325B when analyzing samples of VOCs that were collected in field and laboratory settings for EPA studies. Overall results indicate that the selected VOCs collected diffusively on sorbent tubes generally were stable for 6 months or longer when samples were refrigerated. This suggests the specified maximum 30-day storage time of VOCs collected diffusively on Carbopack X passive samplers and analyzed using Method 325B could potentially be relaxed. Interlaboratory comparison results were in agreement for the challenge samples collected diffusively in an exposure chamber in the laboratory, with most measurements within ±25% of the theoretical concentration. Statistically significant differences among laboratories for ambient challenge samples were small, less than 1 part per billion by volume (ppbv). Results from all laboratories exhibited good precision and generally agreed well with each other.
Home Economics Education, Research Summary.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Research Coordinating Unit.
Annotations on selected home economics research studies published since 1963 are presented. Program development studies summarize data on such topics as teacher and supervisor involvement in curriculum planning, comparison of instructional methods, curriculum evaluation, preparation for gainful employment, family finance, and attitudes toward the…
Two laboratory methods for the calibration of GPS speed meters
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-01-01
The set-ups of two calibration systems are presented to investigate calibration methods of GPS speed meters. The GPS speed meter calibrated is a special type of high accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the measured speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km h-1 with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high accuracy speed meters as legal reference equipment in traffic speed metrology.
The truth about mouse, human, worms and yeast.
Nelson, David R; Nebert, Daniel W
2004-01-01
Genome comparisons are behind the powerful new annotation methods being developed to find all human genes, as well as genes from other genomes. Genomes are now frequently being studied in pairs to provide cross-comparison datasets. This 'Noah's Ark' approach often reveals unsuspected genes and may support the deletion of false-positive predictions. Joining mouse and human as the cross-comparison dataset for the first two mammals are: two Drosophila species, D. melanogaster and D. pseudoobscura; two sea squirts, Ciona intestinalis and Ciona savignyi; four yeast (Saccharomyces) species; two nematodes, Caenorhabditis elegans and Caenorhabditis briggsae; and two pufferfish (Takefugu rubripes and Tetraodon nigroviridis). Even genomes like yeast and C. elegans, which have been known for more than five years, are now being significantly improved. Methods developed for yeast or nematodes will now be applied to mouse and human, and soon to additional mammals such as rat and dog, to identify all the mammalian protein-coding genes. Current large disparities between human Unigene predictions (127,835 genes) and gene-scanning methods (45,000 genes) still need to be resolved. This will be the challenge during the next few years. PMID:15601543
A new method for the determination of vaporization enthalpies of ionic liquids at low temperatures.
Verevkin, Sergey P; Zaitsau, Dzmitry H; Emelyanenko, Vladimir N; Heintz, Andreas
2011-11-10
A new method for the determination of vaporization enthalpies of extremely low volatile ILs has been developed using a newly constructed quartz crystal microbalance (QCM) vacuum setup. Because of the very high sensitivity of the QCM, it has been possible to reduce the average temperature of the vaporization studies by approximately 100 K in comparison to other conventional techniques. The physical basis of the evaluation procedure has been developed and test measurements have been performed with the common ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C(2)mim][NTf(2)], extending the range of measuring vaporization enthalpies down to 363 K. The results obtained for [C(2)mim][NTf(2)] have been tested for thermodynamic consistency by comparison with data already available at higher temperatures. Comparison of the temperature-dependent vaporization enthalpy data taken from the literature shows only acceptable agreement with the heat capacity difference of -40 J K(-1) mol(-1). The method developed in this work also opens a new way to obtain reliable values of vaporization enthalpies of thermally unstable ionic liquids.
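For QCM measurements of free (Langmuir) evaporation into vacuum, the frequency-change rate is proportional to the mass flux, so ln(rate * sqrt(T)) is expected to be linear in 1/T with slope -Delta_H_vap/R. The sketch below fits that line to synthetic data as an assumption-laden illustration only; the actual evaluation procedure in the paper includes further corrections not shown here:

```python
import numpy as np

R = 8.314  # J K^-1 mol^-1

def vaporization_enthalpy(T, rate):
    """Estimate Delta_H_vap from QCM frequency-change rates at temperatures T.

    Assumes Langmuir evaporation, so ln(rate * sqrt(T)) = a - Delta_H/(R*T).
    """
    x = 1.0 / np.asarray(T, float)
    y = np.log(np.asarray(rate, float) * np.sqrt(T))
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R  # J mol^-1

# Synthetic rates generated with Delta_H = 120 kJ/mol, just to check the fit recovers it
T = np.array([363.0, 373.0, 383.0, 393.0])
true_dH = 120e3
rate = np.exp(30.0 - true_dH / (R * T)) / np.sqrt(T)
print(vaporization_enthalpy(T, rate) / 1000)  # ~120 kJ/mol
```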
Gupta, Ravindra Kumar; Achalkumar, Ammathnadu Sudhakar
2018-05-18
A high-yielding microwave-assisted synthetic method to obtain unsymmetrical perylene diester monoimides (PEIs) by treating a perylene tetraester (PTE) with the requisite amine is reported. Perylene-based molecules are widely used in the construction of self-assembled supramolecular structures because of their propensity to aggregate under various conditions. In comparison to perylene bisimides (PBIs), PEIs are less studied in organic electronics/self-assembly due to the synthetic difficulty and low yields in their preparation. PEIs are less electron deficient and have an unsymmetric structure in comparison to PBIs. Further, PEIs have higher solubility than PBIs. The present method is applicable to a wide range of substrates, including aliphatic, aromatic and benzyl amines, PTEs and bay-annulated PTEs. This method provides a tuning handle for the optical/electronic properties of perylene derivatives and also provides easy access to unsymmetrical PBIs from the PEIs.
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The Log-Gaussian Cox Process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and Variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, FDTD calculations are very time-consuming. We focus, therefore, on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time compared with a conventional CPU, even with a native GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
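A deliberately simplified, hedged stand-in for the method named above: a one-dimensional free-space FDTD (Yee leapfrog) update in NumPy, not the authors' three-dimensional CUDA code. It only shows the nearest-neighbour update structure that makes the algorithm map naturally onto GPU threads; all sizes and the Courant number are illustrative:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=500, source_cell=100):
    """Minimal 1-D free-space FDTD (Yee leapfrog) update.

    Each cell update depends only on its neighbours, which is why the same
    loop body parallelises naturally across GPU threads in 2-D/3-D codes.
    """
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                   # H-field update
        ez[1:]  += 0.5 * (hy[1:] - hy[:-1])                   # E-field update
        ez[source_cell] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source
    return ez

print(np.abs(fdtd_1d()).max())
```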
Comparison between Bactec Peds Plus F Broth and Conventional Medium for Vitreous Culture.
Tabatabaei, Seyed Ali; Tabatabaei, Seyed Mehdi; Soleimani, Mohammad; Hejrati, Bahman; Mirshahi, Ahmad; Khadabandeh, Alireza; Ahmadraji, Aliasghar; Valipour, Niloofar
2018-05-10
To evaluate the yield of Bactec Peds Plus F broth for vitreous sample culture in cases of infectious endophthalmitis in comparison to conventional media. Consecutive cases of clinically suspected endophthalmitis were prospectively enrolled in this study. Cultures of the vitreous samples were performed in both Bactec Peds Plus F broth and conventional media. Forty eyes of 40 patients clinically suspected of infectious endophthalmitis with different etiologies were enrolled. The positive culture yields were 50% and 35% in Bactec Peds Plus F broth and conventional media, respectively (p = 0.07). The results in the Bactec group were not significantly different for patients with a history of intravitreal antibiotic injection (p > 0.05), whereas results of the conventional method were significantly more often negative in the prior intravitreal antibiotic injection group (p = 0.02). There was no association between the method of vitreous sampling and the results of either culture method. Although the difference between the two culture methods was not statistically significant in this study, Bactec Peds Plus F broth showed a higher positive culture yield in patients with a history of intravitreal antibiotic injection.
Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2014-08-01
Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
NASA Technical Reports Server (NTRS)
Jones, Henry E.
1997-01-01
A study of the full-potential modeling of a blade-vortex interaction was made. A primary goal of this study was to investigate the effectiveness of the various methods of modeling the vortex. The model problem restricts the interaction to that of an infinite wing with an infinite line vortex moving parallel to its leading edge. This problem provides a convenient testing ground for the various methods of modeling the vortex while retaining the essential physics of the full three-dimensional interaction. A full-potential algorithm specifically tailored to solve the blade-vortex interaction (BVI) was developed to solve this problem. The basic algorithm was modified to include the effect of a vortex passing near the airfoil. Four different methods of modeling the vortex were used: (1) the angle-of-attack method, (2) the lifting-surface method, (3) the branch-cut method, and (4) the split-potential method. A side-by-side comparison of the four models was conducted. These comparisons included comparing generated velocity fields, a subcritical interaction, and a critical interaction. The subcritical and critical interactions are compared with experimentally generated results. The split-potential model was used to make a survey of some of the more critical parameters which affect the BVI.
Dong, Ming; Fisher, Carolyn; Añez, Germán; Rios, Maria; Nakhasi, Hira L.; Hobson, J. Peyton; Beanan, Maureen; Hockman, Donna; Grigorenko, Elena; Duncan, Robert
2016-01-01
Aims: To demonstrate standardized methods for spiking pathogens into human matrices for evaluation and comparison among diagnostic platforms. Methods and Results: This study presents detailed methods for spiking bacteria or protozoan parasites into whole blood and virus into plasma. Proper methods must start with a documented, reproducible pathogen source followed by steps that include standardized culture, preparation of cryopreserved aliquots, quantification of the aliquots by molecular methods, production of sufficient numbers of individual specimens and testing of the platform with multiple mock specimens. Results are presented following the described procedures that showed acceptable reproducibility comparing in-house real-time PCR assays to a commercially available multiplex molecular assay. Conclusions: A step-by-step procedure has been described that can be followed by assay developers who are targeting low prevalence pathogens. Significance and Impact of Study: The development of diagnostic platforms for detection of low prevalence pathogens such as biothreat or emerging agents is challenged by the lack of clinical specimens for performance evaluation. This deficit can be overcome using mock clinical specimens made by spiking cultured pathogens into human matrices. To facilitate evaluation and comparison among platforms, standardized methods must be followed in the preparation and application of spiked specimens. PMID:26835651
Haap, Michael; Roth, Heinz Jürgen; Huber, Thomas; Dittmann, Helmut; Wahl, Richard
2017-01-01
The aim of our study was to develop and validate an inexpensive, rapid, easy to use quantitative method to determine urinary iodine without major procurement costs for equipment. The rationale behind introducing this method is the increasing demand for urinary iodine assessments. Our study included 103 patients (76 female, 27 male), age (arithmetic mean) 52 ± 17.3 years. Urinary iodine was determined in microplates by a modification of the Sandell-Kolthoff reaction. The results were compared with inductively-coupled plasma mass spectrometry (ICP-MS) for iodine, considered as reference method. Geometric mean of urinary iodine determined by the Sandell-Kolthoff reaction method was 62.69 μg/l (95% confidence interval 53.16–73.92) whereas by the ICP-MS method it was 65.53 μg/l (95% confidence interval 54.77–78.41). Passing-Bablok regression equations for both methods gave y = 3.374 + 0.873x (y: Sandell-Kolthoff method, x: ICP-MS). Spearman's correlation coefficient was 0.981, indicating a very high degree of agreement between the two methods. Bland-Altman plots showed no significant systematic difference between the two methods. The modified Sandell-Kolthoff method using microtiter plate technique presented here is a simple, inexpensive semi-automated method to determine urinary iodine with very little toxic waste. Comparison with the ICP-MS-technique yielded a good agreement between the two methods. PMID:28045077
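The agreement statistics reported above (Spearman correlation, Bland-Altman limits) are straightforward to reproduce for any paired method-comparison data set. A minimal sketch on made-up paired values; Passing-Bablok regression is omitted because it needs a dedicated implementation:

```python
import numpy as np
from scipy.stats import spearmanr

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two methods measured on the same samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired urinary iodine results (ug/L) from two methods
method_a = np.array([55.0, 62.0, 70.0, 48.0, 90.0, 66.0])
method_b = np.array([57.0, 60.0, 73.0, 50.0, 88.0, 64.0])
rho, p = spearmanr(method_a, method_b)
print("Spearman rho:", rho, "p:", p)
print("bias, lower LoA, upper LoA:", bland_altman(method_a, method_b))
```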
Application of CFD to a generic hypersonic flight research study
NASA Technical Reports Server (NTRS)
Green, Michael J.; Lawrence, Scott L.; Dilley, Arthur D.; Hawkins, Richard W.; Walker, Mary M.; Oberkampf, William L.
1993-01-01
Computational analyses have been performed for the initial assessment of flight research vehicle concepts that satisfy requirements for potential hypersonic experiments. Results were obtained from independent analyses at NASA Ames, NASA Langley, and Sandia National Labs, using sophisticated time-dependent Navier-Stokes and parabolized Navier-Stokes methods. Careful study of a common problem consisting of hypersonic flow past a slightly blunted conical forebody was undertaken to estimate the level of uncertainty in the computed results, and to assess the capabilities of current computational methods for predicting boundary-layer transition onset. Results of this study in terms of surface pressure and heat transfer comparisons, as well as comparisons of boundary-layer edge quantities and flow-field profiles are presented here. Sensitivities to grid and gas model are discussed. Finally, representative results are presented relating to the use of Computational Fluid Dynamics in the vehicle design and the integration/support of potential experiments.
Comparison of attrition test methods: ASTM standard fluidized bed vs jet cup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, R.; Goodwin, J.G. Jr.; Jothimurugesan, K.
2000-05-01
Attrition resistance is one of the key design parameters for catalysts used in fluidized-bed and slurry-phase reactors. The ASTM fluidized-bed test has been one of the most commonly used attrition resistance evaluation methods; however, it requires the use of 50 g samples, a large amount for catalyst development studies. Recently a test using the jet cup, requiring only 5 g samples, has been proposed. In the present study, two series of spray-dried iron catalysts were evaluated using both the ASTM fluidized-bed test and a test based on the jet cup to determine their comparability. It is shown that the two tests give comparable results. This paper, by reporting a comparison of the jet-cup test with the ASTM standard, provides a basis for utilizing the more efficient jet cup with confidence in catalyst attrition studies.
Jackson, Jeffrey B
2017-11-01
The following short report outlines a proposed study designed to evaluate the Interprofessional Collaborative Competency Attainment Survey and its recommended method of administration. This exploratory study seeks to determine if there is a significant difference between two methods of administration, the recommended and validated retrospective pre-test and post-test, and a traditional pre-test and post-test. If a significant difference does exist, these data will provide a means to determine the effect size of that difference. The comparison will be done using repeated-measures ANOVA, and the subsequent effect size will be evaluated using Cohen's d. As the retrospective design is utilised to evaluate a change in perceived competency, comparison of data from a traditional pre-test with a retrospective pre-test may provide a means for evaluation of the participants' change in understanding of the construct, and thus a more thorough picture of the forces driving changes to scores.
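The effect-size step described above is simple to compute once paired scores are available. A minimal sketch using the paired-sample (d_z) variant of Cohen's d on hypothetical scores; the ANOVA step is not shown:

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d_z for paired scores: mean of the differences over their SD."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical traditional vs retrospective pre-test scores for the same participants
traditional = np.array([3.2, 4.1, 3.8, 2.9, 4.5, 3.6])
retrospective = np.array([2.8, 3.5, 3.6, 2.5, 4.0, 3.1])
print(cohens_d_paired(traditional, retrospective))
```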
Comparison Groups in Yoga Research: A Systematic Review and Critical Evaluation of the Literature
Groessl, Erik; Maiya, Meghan; Sarkin, Andrew; Eisen, Susan V.; Riley, Kristen; Elwy, A. Rani
2014-01-01
Objectives: Comparison groups are essential for accurate testing and interpretation of yoga intervention trials. However, selecting proper comparison groups is difficult because yoga comprises a very heterogeneous set of practices and its mechanisms of effect have not been conclusively established. Methods: We conducted a systematic review of the control and comparison groups used in published randomized controlled trials (RCTs) of yoga. Results: We located 128 RCTs that met our inclusion criteria; of these, 65 included only a passive control and 63 included at least one active comparison group. Primary comparison groups were physical exercise (43%), relaxation/meditation (20%), and education (16%). Studies rarely provided a strong rationale for choice of comparison. Considering year of publication, the use of active controls in yoga research appears to be slowly increasing over time. Conclusions: Given that yoga has been established as a potentially powerful intervention, future research should use active control groups. Further, care is needed to select comparison conditions that help to isolate the specific mechanisms of yoga’s effects. PMID:25440384
Menant, Ophélie; Andersson, Frédéric; Zelena, Dóra; Chaillou, Elodie
2016-11-01
The periaqueductal gray (PAG) is a mesencephalic brain structure involved in the expression of numerous behaviours, such as maternal, sexual and emotional behaviours. Histological approaches have shown that the PAG is composed of subdivisions with specific cell organisation, neurochemical composition and connections with the rest of the brain. Comparison of studies performed in rodents and cats, the most often examined species, suggests that PAG organisation differs between mammals. However, the plurality of methods used in these studies makes comparison of PAG organisation between species difficult. Therefore, to study the PAG in all mammals including humans, the most relevant in vivo imaging method appears to be magnetic resonance imaging (MRI). The purpose of this review was to summarize knowledge of the anatomical organisation of the PAG in mammals and to highlight the benefits of MRI methods for extending this knowledge. Results obtained by MRI so far support the conclusions of ex vivo studies, especially in describing the subdivisions and connections of the PAG; for the latter, diffusion-weighted MRI and functional connectivity seem the most appropriate methods. In conclusion, MRI appears to be the most judicious method for comparing species and improving comprehension of the role of the PAG. Moreover, as an in vivo method, MRI allows repeated measures in the same cohort of subjects, making it possible to study the impact of aging and development on the anatomical organisation of the PAG. Copyright © 2016 Elsevier B.V. All rights reserved.
A Comparison of the Efficacy of Survey Methods for Amphibians in Small Forest Ponds
Richard R. Buech; Leanna M. Egeland
2002-01-01
Although researchers have studied amphibians for many years, status assessments have been hampered by a lack of standards and protocols for inventory and monitoring. Heyer et al. (1994) and Olson et al. (1997) provide a foundation in their reviews of methods used for measuring and monitoring amphibian biodiversity. It is clear from these reviews that no single method...
ERIC Educational Resources Information Center
Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.
2006-01-01
Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…
Mercedes Berterretche; Andrew T. Hudak; Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; Jennifer Dungan
2005-01-01
This study compared aspatial and spatial methods of using remote sensing and field data to predict maximum growing season leaf area index (LAI) maps in a boreal forest in Manitoba, Canada. The methods tested were orthogonal regression analysis (reduced major axis, RMA) and two geostatistical techniques: kriging with an external drift (KED) and sequential Gaussian...
ERIC Educational Resources Information Center
Liu, YuFing
2013-01-01
This paper applies a quasi-experimental research method to compare the difference in students' approaches to learning and their learning achievements between the group that follows the problem based learning (PBL) teaching method with computer support and the group that follows the non-PBL teaching methods. The study sample consisted of 68 junior…
ERIC Educational Resources Information Center
Moses, Tim; Miao, Jing; Dorans, Neil
2010-01-01
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
2010-01-01
Background: The development of new wireless communication technologies that emit radio frequency electromagnetic fields (RF-EMF) is ongoing, but little is known about the RF-EMF exposure distribution in the general population. Previous attempts to measure personal exposure to RF-EMF have used different measurement protocols and analysis methods, making comparisons between exposure situations across different study populations very difficult. As a result, observed differences in exposure levels between study populations may not reflect real exposure differences but may be partly or wholly due to methodological differences. Methods: The aim of this paper is to develop a study protocol for future personal RF-EMF exposure studies based on experience drawn from previous research. Using the current knowledge base, we propose procedures for the measurement of personal exposure to RF-EMF, data collection, data management and analysis, and methods for the selection and instruction of study participants. Results: We have identified two basic types of personal RF-EMF measurement studies: population surveys and microenvironmental measurements. In the case of a population survey, the unit of observation is the individual and a randomly selected representative sample of the population is needed to obtain reliable results. For microenvironmental measurements, study participants are selected in order to represent typical behaviours in different microenvironments. These two study types require different methods and procedures. Conclusion: Applying our proposed common core procedures in future personal measurement studies will allow direct comparisons of personal RF-EMF exposures in different populations and study areas. PMID:20487532
Gültas, Mehmet; Düzgün, Güncel; Herzog, Sebastian; Jäger, Sven Joachim; Meckbach, Cornelia; Wingender, Edgar; Waack, Stephan
2014-04-03
The identification of functionally or structurally important non-conserved residue sites in protein MSAs is an important challenge for understanding the structural basis and molecular mechanism of protein functions. Despite the rich literature on compensatory mutations as well as sequence conservation analysis for the detection of those important residues, previous methods often rely on classical information-theoretic measures. However, these measures usually do not take into account dis/similarities of amino acids, which are likely to be crucial for those residues. In this study, we present a new method, the Quantum Coupled Mutation Finder (QCMF), that incorporates significant dis/similar amino acid pair signals in the prediction of functionally or structurally important sites. The results of this study are twofold. First, using the essential sites of two human proteins, namely epidermal growth factor receptor (EGFR) and glucokinase (GCK), we tested the QCMF method. The QCMF includes two metrics based on quantum Jensen-Shannon divergence to measure both sequence conservation and compensatory mutations. We found that the QCMF reaches an improved performance in identifying essential sites from MSAs of both proteins with a significantly higher Matthews correlation coefficient (MCC) value in comparison to previous methods. Second, using a data set of 153 proteins, we made a pairwise comparison between QCMF and three conventional methods. This comparison study strongly suggests that QCMF complements the conventional methods for the identification of correlated mutations in MSAs. QCMF utilizes the notion of entanglement, which is a major resource of quantum information, to model significant dissimilar and similar amino acid pair signals in the detection of functionally or structurally important sites. Our results suggest that on the one hand QCMF significantly outperforms the previous method, which mainly focuses on dissimilar amino acid signals, to detect essential sites in proteins. On the other hand, it is complementary to the existing methods for the identification of correlated mutations. The method of QCMF is computationally intensive. To ensure a feasible computation time for the QCMF algorithm, we leveraged Compute Unified Device Architecture (CUDA). The QCMF server is freely accessible at http://qcmf.informatik.uni-goettingen.de/.
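QCMF itself uses a quantum Jensen-Shannon divergence over density matrices; as a much-simplified, hedged stand-in, the classical Jensen-Shannon divergence between the amino acid frequency distributions of two MSA columns can be computed as below (this is not the authors' quantum formulation, and the toy 4-letter alphabet is an assumption for brevity):

```python
import numpy as np

def jensen_shannon(p, q, base=2):
    """Classical Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical residue frequencies over a toy 4-letter alphabet for two MSA columns
col_i = [0.70, 0.10, 0.10, 0.10]
col_j = [0.10, 0.10, 0.10, 0.70]
print(jensen_shannon(col_i, col_j))  # bounded above by 1 bit
```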
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing for four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting the DST time series indicates that it yields the lowest Akaike Information Criterion (AIC) value, a criterion that considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
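The two linear preprocessing steps compared above (seasonal standardization and seasonal differencing) and a least-squares AIC are easy to express directly. A hedged sketch with an assumed period of 365 days and a synthetic series, not the study's data or its full stochastic model:

```python
import numpy as np

def seasonal_difference(x, period=365):
    """Remove the periodic term by differencing against the same day one period earlier."""
    x = np.asarray(x, float)
    return x[period:] - x[:-period]

def seasonal_standardize(x, period=365):
    """Standardize each position in the cycle by its own long-run mean and std."""
    x = np.asarray(x, float)
    n = (len(x) // period) * period
    cycle = x[:n].reshape(-1, period)
    return ((cycle - cycle.mean(axis=0)) / cycle.std(axis=0)).ravel()

def aic_least_squares(residuals, n_params):
    """Least-squares AIC, trading model accuracy against complexity."""
    residuals = np.asarray(residuals, float)
    n = len(residuals)
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Tiny check on a synthetic daily series with an annual cycle plus noise
days = np.arange(4 * 365)
series = 15 + 10 * np.sin(2 * np.pi * days / 365) + np.random.default_rng(0).normal(0, 1, days.size)
print(aic_least_squares(seasonal_difference(series), n_params=1))
print(seasonal_standardize(series).std())  # close to 1 after standardization
```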
Han, Z Y; Weng, W G
2011-05-15
In this paper, qualitative and quantitative risk assessment methods for urban natural gas pipeline networks are proposed. The qualitative method comprises an index system, which includes a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value, and for the quantitative method the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different consequences of accidents into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. It is indicated that both methods can be applied in practice, and the choice of method depends on the actual basic data of the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
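The qualitative method combines sub-indices with corresponding weights; a hedged sketch of such a weighted index is below. The index names, scales and weights are placeholders for illustration, not the paper's values:

```python
def qualitative_risk_score(indices, weights):
    """Weighted sum of sub-indices; both arguments are dicts keyed by index name."""
    assert set(indices) == set(weights), "indices and weights must cover the same keys"
    return sum(indices[name] * weights[name] for name in indices)

# Placeholder scores on a 0-10 scale and illustrative weights that sum to 1
scores = {"causation": 6.0, "inherent_risk": 4.5, "consequence": 7.0}
weights = {"causation": 0.3, "inherent_risk": 0.3, "consequence": 0.4}
print(qualitative_risk_score(scores, weights))  # 5.95
```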
Methods for land use impact assessment: A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perminova, Tatiana, E-mail: tatiana.perminova@utt.fr; Department of Geoecology and Geochemistry, Institute of Natural Resources, National Research Tomsk Polytechnic University, 30 Lenin Avenue, 634050 Tomsk; Sirina, Natalia, E-mail: natalia.sirina@utt.fr
Many types of methods to assess land use impact have been developed. Nevertheless, a systematic synthesis of all these approaches is necessary to highlight the most commonly used and most effective methods. Given the growing interest in this area of research, a review of the different methods of assessing land use impact (LUI) was performed using bibliometric analysis. One hundred eighty-seven articles from the agricultural and biological sciences and the environmental sciences were examined. According to our results, the most frequently used land use assessment methods are Life-Cycle Assessment, Material Flow Analysis/Input–Output Analysis, Environmental Impact Assessment and Ecological Footprint. Comparison of the methods allowed their specific features to be identified and led to the conclusion that a combination of several methods is the best basis for a comprehensive analysis of land use impact assessment. - Highlights: • We identified the most frequently used methods in land use impact assessment. • A comparison of the methods based on several criteria was carried out. • Agricultural land use is by far the most common area of study within the methods. • Incentive-driven methods, like LCA, arouse the most interest in this field.
A Comparison of Two Methods of Needs Assessment: Implications for Continuing Professional Education.
ERIC Educational Resources Information Center
Igarashi, Michiko; Suveges, Linda; Moss, Gwenna
2002-01-01
A study of 113 Canadian pharmacists attending continuing education on asthma compared two methods of needs assessment: perceived needs and knowledge-based needs. Scores indicated no relationship between the two kinds of needs. Needs assessment in continuing professional education thus should recognize and address the limitations of self-perceived…
ERIC Educational Resources Information Center
Stone, Clement A.; Tang, Yun
2013-01-01
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Physical Activity in Preschool Children: Comparison between Montessori and Traditional Preschools
ERIC Educational Resources Information Center
Pate, Russell R.; O'Neill, Jennifer R.; Byun, Wonwoo; McIver, Kerry L.; Dowda, Marsha; Brown, William H.
2014-01-01
Background: Little is known about the influence of Montessori methods on children's physical activity (PA). This cross-sectional study compared PA of children attending Montessori and traditional preschools. Methods: We enrolled 301 children in 9 Montessori and 8 traditional preschools in Columbia, South Carolina. PA was measured by accelerometry…
A Comparison of Two Methods of Teaching Molecular Architecture to High School Chemistry Students.
ERIC Educational Resources Information Center
Halsted, Douglas Alan
This investigation explored the question of how high school chemistry students best learn three-dimensional molecular, ionic, and metallic structures in CHEM Study (Freeman, 1963). The experimenter compared the achievement, attitude, and instructional preferences of 110 randomly selected students taught by two different methods: (1) student…
There Is a World outside of Experimental Designs: Using Twins to Investigate Causation
ERIC Educational Resources Information Center
Hart, Sara A.; Taylor, Jeanette; Schatschneider, Christopher
2013-01-01
This study introduces a co-twin control method commonly used in the medical literature but not often within educational research. This method allows for a comparison of twins discordant for an "exposure," approximating alternative outcomes in the counterfactual model. Example analyses use data drawn from the Florida Twin Project on…
A Comparison of Exposure Control Procedures in CATS Using the GPC Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Dodd, Barbara G.
2016-01-01
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
The Use of Propensity Scores as a Matching Strategy
ERIC Educational Resources Information Center
John, Lindsay; Wright, Robin; Duku, Eric K.; Willms, J. Douglas
2008-01-01
Objectives: This study reports on the concept and method of linear propensity scores used to obtain a comparison group from the National Longitudinal Survey of Children and Youth to assess the effects of a longitudinal, structured arts program for Canadian youth (aged 9 to 15 years) from low-income, multicultural communities. Method: This study…
A cost comparison of five midstory removal methods
Brian G. Bailey; Michael R. Saunders; Zachary E. Lowe
2011-01-01
Within mature hardwood forests, midstory removal treatments have been shown to provide the adequate light and growing space needed for early establishment of intermediate-shade-tolerant species. As the method gains popularity, it is worthwhile to determine what manner of removal is most cost-efficient. This study compared five midstory removal treatments across 10...
ERIC Educational Resources Information Center
Lackman, Jeremy; Chepyator-Thomson, Jepkorir
2017-01-01
Purpose: The purpose of this study was to understand first-year college students' reflections on past physical education (PE) experiences in urban high school settings. Method: Data collection included semi-structured, open-ended, qualitative interviews. Constant comparison method was used for data analysis. Results: Several findings emerged: (a)…
ERIC Educational Resources Information Center
Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.
2014-01-01
In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional…
ERIC Educational Resources Information Center
Hedges, Sarai
2017-01-01
The statistics education community continues to explore the differences in performance outcomes and in student attitudes between online and face-to-face delivery methods of statistics courses. In this quasi-experimental study student persistence, exam, quiz, and homework scores were compared between delivery methods, class status, and programs of…
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
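Transfer errors like the "below 20%" figure quoted above are conventionally computed as the absolute percentage difference between the transferred and the study-site welfare estimate. A minimal sketch with hypothetical WTP values, not the survey's results:

```python
def transfer_error(wtp_transferred, wtp_observed):
    """Absolute percentage transfer error of a unit-value benefit transfer."""
    return abs(wtp_transferred - wtp_observed) / abs(wtp_observed) * 100.0

# Hypothetical willingness-to-pay values (currency units per household per year)
print(transfer_error(wtp_transferred=38.0, wtp_observed=45.0))  # ~15.6 %
```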
NASA Astrophysics Data System (ADS)
Farihah, Umi
2018-04-01
The purpose of this study was to analyze students' thinking preferences in solving mathematics problems using paper and pencil compared to GeoGebra, based on their learning styles. This research employed a qualitative descriptive design. The subjects were six eighth-grade students of Madrasah Tsanawiyah Negeri 2 Trenggalek, East Java, Indonesia, in the 2015-2016 academic year, with different learning styles: two visual, two auditory, and two kinesthetic students. During the interviews, the students completed Paper-and-Pencil-based Tasks (PBTs) and GeoGebra-based Tasks (GBTs). By investigating the students' solution methods and representations, the researcher compared their visual and non-visual thinking preferences in solving mathematics problems with and without GeoGebra. The analysis showed that, for visual, auditory, and kinesthetic students alike, the comparison between PBT and GBT solutions reflected how GeoGebra can influence the solution method: when completing GBTs with GeoGebra, students preferred visual methods over non-visual methods.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual data from instrumentation was used to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
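The empirical (Peck) estimate rests on a Gaussian transverse settlement trough, S(x) = S_max * exp(-x^2 / (2 i^2)), where i is the distance from the tunnel centreline to the trough's inflection point. A minimal sketch; the parameter values are illustrative, not those of the Qom tunnel:

```python
import numpy as np

def peck_settlement(x, s_max, i):
    """Gaussian transverse settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2))."""
    x = np.asarray(x, float)
    return s_max * np.exp(-x ** 2 / (2.0 * i ** 2))

# Illustrative values only: 1.9 cm maximum settlement, inflection point at 8 m
offsets = np.array([0.0, 4.0, 8.0, 16.0])  # m from the tunnel centreline
print(peck_settlement(offsets, s_max=1.9, i=8.0))  # settlement in cm
```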
NASA Astrophysics Data System (ADS)
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
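Model accuracy in the study above is summarised with the Nash-Sutcliffe efficiency, which is straightforward to compute for any observed/simulated discharge pair. A minimal sketch with hypothetical flows:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 indicates a perfect fit."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily discharges (m3/s)
obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])
sim = np.array([11.0, 16.0, 27.0, 24.0, 17.0])
print(nash_sutcliffe(obs, sim))
```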
Gu, Shuyan; Shi, Jihao; Tang, Zhiliu; Sawhney, Monika; Hu, Huimei; Shi, Lizheng; Fonseca, Vivian; Dong, Hengjin
2015-01-01
Background: Metformin is the first-line oral hypoglycemic agent for type 2 diabetes mellitus recommended by international guidelines. However, little information exists comparing it with acarbose, which is also commonly used in China. This study expanded knowledge by combining direct and indirect evidence to ascertain the glucose-lowering effects of both drugs. Methods: PubMed (1980-December 2013) and China National Knowledge Infrastructure databases (1994-January 2014) were systematically searched for eligible randomized controlled trials from the Chinese and English literature. Meta-analysis was conducted to estimate the glucose-lowering effects of metformin vs. acarbose, or either of them vs. common comparators (placebo or sulphonylureas), using random- and fixed-effect models. The Bucher method with an indirect treatment comparison calculator was applied to convert the summary estimates from the meta-analyses into weighted mean differences (WMD) and 95% confidence intervals (CIs) to represent the comparative efficacy between metformin and acarbose. Results: A total of 75 studies were included in the analysis. In direct comparison (8 trials), metformin reduced glycosylated hemoglobin (HbA1c) by 0.06% more than acarbose, with no significant difference (WMD, -0.06%; 95% CI, -0.32% to 0.20%). In indirect comparisons (67 trials), using placebo and sulphonylureas as common comparators, metformin achieved a significantly greater HbA1c reduction than acarbose, by -0.38% (WMD, -0.38%, 95% CI, -0.736% to -0.024%) and -0.34% (WMD, -0.34%, 95% CI, -0.651% to -0.029%), respectively. Conclusion: The glucose-lowering effects of metformin monotherapy and acarbose monotherapy are the same by direct comparison, while metformin is a little better by indirect comparison. This implies that the effect of metformin is at least as good as acarbose's. PMID:25961824
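The Bucher adjusted indirect comparison subtracts the two direct effects against the common comparator and adds their variances. A minimal sketch of that calculation; the input numbers below are placeholders chosen only to show the form of the WMD and 95% CI, not the study's actual pooled estimates:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Indirect A vs B effect via common comparator C.

    d_AB = d_AC - d_BC and SE_AB = sqrt(SE_AC^2 + SE_BC^2).
    Returns (estimate, lower 95% limit, upper 95% limit).
    """
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return d_ab, d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab

# Placeholder HbA1c reductions (%) vs placebo for drug A and drug B
print(bucher_indirect(d_ac=-1.0, se_ac=0.12, d_bc=-0.62, se_bc=0.13))
```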
Burke, Adam
2012-01-01
A significant number of studies have been published examining the mind-body effects of meditation and its clinical efficacy. There are very few studies, however, which directly compare different meditation methods with each other to explore potentially distinct mechanisms and effects, and no studies comparing individual preferences for different methods. As preference is seen as an important factor in consumer healthcare decision making, greater understanding of this aspect is needed as meditation becomes a more widely used therapeutic modality. For this reason a pilot study was conducted to compare four meditation techniques for personal preference. A within-subjects comparison design was employed. A convenience sample of 247 undergraduate university students participated in the study. Participants learned two open-observing meditation techniques, Vipassana (Mindfulness) and Zen, and two focused-attention techniques, Mantra and Qigong Visualization, practicing one method per week. At the end of a six-week training period participants ranked the four meditation methods in order of personal preference. Ranking of subjective preference of meditations practiced. A within-subjects comparison revealed that significantly more participants chose Vipassana or Mantra meditation as their preferred techniques compared with Qigong Visualization and Zen. This study provides information on differences in preference for type of meditation. As the benefits of meditation accrue over time, selecting a method that motivates sustained practice is a critical objective if therapeutic effects are to be achieved. Copyright © 2012 Elsevier Inc. All rights reserved.
Rosenberry, Donald O.; Stannard, David L.; Winter, Thomas C.; Martinez, Margo L.
2004-01-01
Evapotranspiration determined using the energy-budget method at a semi-permanent prairie-pothole wetland in east-central North Dakota, USA was compared with 12 other commonly used methods. The Priestley-Taylor and deBruin-Keijman methods compared best with the energy-budget values; mean differences were less than 0.1 mm d−1, and standard deviations were less than 0.3 mm d−1. Both methods require measurement of air temperature, net radiation, and heat storage in the wetland water. The Penman, Jensen-Haise, and Brutsaert-Stricker methods provided the next-best values for evapotranspiration relative to the energy-budget method. The mass-transfer, deBruin, and Stephens-Stewart methods provided the worst comparisons; the mass-transfer and deBruin comparisons with energy-budget values indicated a large standard deviation, and the deBruin and Stephens-Stewart comparisons indicated a large bias. The Jensen-Haise method proved to be cost effective, providing relatively accurate comparisons with the energy-budget method (mean difference=0.44 mm d−1, standard deviation=0.42 mm d−1) and requiring only measurements of air temperature and solar radiation. The Mather (Thornthwaite) method is the simplest, requiring only measurement of air temperature, and it provided values that compared relatively well with energy-budget values (mean difference=0.47 mm d−1, standard deviation=0.56 mm d−1). Modifications were made to several of the methods to make them more suitable for use in prairie wetlands. The modified Makkink, Jensen-Haise, and Stephens-Stewart methods all provided results that were nearly as close to energy-budget values as were the Priestley-Taylor and deBruin-Keijman methods, and all three of these modified methods only require measurements of air temperature and solar radiation. The modified Hamon method provided values that were within 20 percent of energy-budget values during 95 percent of the comparison periods, and it only requires measurement of air temperature. The mass-transfer coefficient, associated with the commonly used mass-transfer method, varied seasonally, with the largest values occurring during summer.
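The Priestley-Taylor estimate that compared best with the energy-budget values has a compact form, ET = alpha * (Delta / (Delta + gamma)) * (Rn - G) / lambda. A hedged sketch with typical constants and illustrative inputs, not the study's site data or its exact parameterisation:

```python
import math

def priestley_taylor_et(t_air_c, rn, g, alpha=1.26, gamma=0.066, lam=2.45):
    """Priestley-Taylor evapotranspiration (mm/day).

    t_air_c : air temperature (deg C)
    rn, g   : net radiation and heat storage change (MJ m-2 d-1)
    alpha   : Priestley-Taylor coefficient (1.26 is a common open-water value)
    gamma   : psychrometric constant (kPa/degC); lam: latent heat of vaporization (MJ/kg)
    """
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))  # sat. vapour pressure, kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                 # slope of es curve, kPa/degC
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# Illustrative mid-summer wetland day: 22 degC, Rn = 15 and G = 2 MJ m-2 d-1
print(priestley_taylor_et(22.0, 15.0, 2.0))  # a few mm per day
```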
Realization of the medium and high vacuum primary standard in CENAM, Mexico
NASA Astrophysics Data System (ADS)
Torres-Guzman, J. C.; Santander, L. A.; Jousten, K.
2005-12-01
A medium and high vacuum primary standard, based on the static expansion method, has been set up at Centro Nacional de Metrología (CENAM), Mexico. This system has four volumes and covers a measuring range of 1 × 10⁻⁵ Pa to 1 × 10³ Pa of absolute pressure. As part of its realization, a characterization was performed, which included volume calibrations, several tests and a bilateral key comparison. To determine the expansion ratios, two methods were applied: the gravimetric method and the method with a linearized spinning rotor gauge. The outgassing ratios for the whole system were also determined. A comparison was performed with Physikalisch-Technische Bundesanstalt (comparison SIM-Euromet.M.P-BK3). By means of this comparison, a link has been achieved with the Euromet comparison (Euromet.M.P-K1.b). As a result, it is concluded that the value obtained at CENAM is equivalent to the Euromet reference value, and therefore the design, construction and operation of CENAM's SEE-1 vacuum primary standard were successful.
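In an idealised static expansion, each lower pressure follows from a well-known starting pressure and the calibrated volume ratio: after k expansions from a small volume into the evacuated large volume, p = p0 * f^k with f = V_small / (V_small + V_large). The sketch below ignores outgassing and thermal corrections, and the volumes are illustrative, not CENAM's:

```python
def static_expansion_pressure(p0, v_small, v_large, n_expansions=1):
    """Ideal-gas pressure after repeated single expansions from v_small into v_small + v_large.

    Assumes the large volume is evacuated before each expansion step.
    """
    f = v_small / (v_small + v_large)
    return p0 * f ** n_expansions

# Illustrative volumes: 0.5 L expanded into 50 L, starting at 1000 Pa
for k in range(4):
    print(k, static_expansion_pressure(1000.0, 0.5, 50.0, k))
```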
NASA Astrophysics Data System (ADS)
Naguib, Ibrahim A.; Darwish, Hany W.
2012-02-01
A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm of each and contrasting their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (a genetic algorithm, GA). To ground the comparison, the methods are applied to the stability-indicating quantitative analysis of binary mixtures of mebeverine hydrochloride and sulpiride as a case study, in the presence of their reported impurities and degradation products (six components in total), in raw materials and in a pharmaceutical dosage form, using UV spectral data. A six-factor, five-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR without GA, and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride. The results illustrate the problem of nonlinearity and how models such as SVR and ANN can handle it, and they demonstrate the ability of these multivariate calibration models to deconvolute the highly overlapping UV spectra of the six-component mixtures using an inexpensive and easy-to-handle instrument, the UV spectrophotometer.
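The kind of comparison described above can be prototyped with off-the-shelf regressors: a linear SVR and a small multilayer perceptron fitted to a calibration set of spectra and scored on an independent test set. The data shapes and hyperparameters below are illustrative placeholders, not those of the published models.

```python
# Hedged sketch: linear SVR vs. a small ANN regressor on synthetic "spectra",
# scored by root-mean-square error of prediction (RMSEP) on a held-out test set.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train = rng.random((25, 101))   # 25 training spectra, 101 wavelengths
y_train = rng.random(25)          # concentration of one analyte
X_test, y_test = rng.random((5, 101)), rng.random(5)

models = {
    "linear SVR": SVR(kernel="linear", C=1.0, epsilon=0.01),
    "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                              random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmsep = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSEP = {rmsep:.3f}")
```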
Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images
Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.
2013-01-01
Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275
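One way to make such colour differences quantitative and repeatable is to convert image regions to CIE Lab and compute a Delta-E distance, as sketched below. The published method may differ in its colour-space handling and camera calibration; the patches here are synthetic stand-ins for flower and leaf regions.

```python
# Hedged sketch: mean CIE Lab colour of two RGB patches and their CIEDE2000 distance.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_lab(rgb_patch):
    """Mean CIE Lab colour of an RGB patch with values in [0, 1]."""
    return rgb2lab(rgb_patch).reshape(-1, 3).mean(axis=0)

flower = np.full((10, 10, 3), [0.9, 0.8, 0.1])   # yellowish patch
leaf = np.full((10, 10, 3), [0.2, 0.5, 0.2])     # greenish patch
dE = deltaE_ciede2000(mean_lab(flower), mean_lab(leaf))
print(f"CIEDE2000 colour difference: {float(dE):.1f}")
```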
Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción
2012-06-01
To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records from thirteen PHC teams in Catalonia (Spain), covering assigned patients requiring care in 2008. The variables studied were socio-demographic characteristics, co-morbidity, and costs. The methods compared were: a) the Combined Comorbidity Index (CCI), an index developed in this study from the scores of acute and chronic episodes; b) the Charlson Index (ChI); and c) the Adjusted Clinical Groups case-mix resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each co-morbidity measure, compared using the coefficient of determination (R²), p < .05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained R² = 50.4%, the ChI R² = 29.2%, and the RUB R² = 39.7% of the variability in cost. The behaviour of the CCI was acceptable, albeit with low scores (1 to 3 points) showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in case-mix comparison. Copyright © 2011 Elsevier España, S.L. All rights reserved.
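The comparison above amounts to fitting one cost model per morbidity measure and ranking them by explained variance. A minimal sketch of that idea follows; the synthetic data, coefficients, and the absence of socio-demographic adjustment are illustrative simplifications, not the study's specification.

```python
# Hedged sketch: compare R-squared of simple linear cost models, one per
# morbidity measure, on synthetic patient data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
cci = rng.poisson(3, n)          # hypothetical combined comorbidity index
chi = rng.poisson(1, n)          # hypothetical Charlson index
rub = rng.integers(1, 6, n)      # hypothetical resource use band (1-5)
cost = 200 + 150 * cci + 60 * chi + rng.normal(0, 300, n)   # euros/year

for name, x in [("CCI", cci), ("Charlson", chi), ("RUB", rub)]:
    x = x.reshape(-1, 1)
    model = LinearRegression().fit(x, cost)
    print(f"{name}: R^2 = {model.score(x, cost):.3f}")
```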
NASA Astrophysics Data System (ADS)
Asim, Sumreen
This mixed-methods study investigated K-6 teacher candidates' beliefs about informal science instruction prior to and after their experiences in a 15-week science methods course, in comparison to a non-intervention group. The study is predicated on the literature supporting the extent to which teachers' beliefs influence their instructional practices. The intervention integrated the six strands of learning science in informal science education (NRC, 2009) and exposed candidates to out-of-school-time environments (NRC, 2010). Participants included 17 candidates in the intervention group and 75 in the comparison group. All were undergraduate K-6 teacher candidates at one university enrolled in different sections of a required science methods course. All participants completed the Beliefs about Science Teaching (BAT) survey. Reflective journals, drawings, interviews, and microteaching protocols were collected from participants in the intervention. There was no statistically significant difference in pre or post BAT scores between the two groups; however, there was a statistically significant interaction effect for the intervention group over time. Analysis of the qualitative data revealed that the intervention candidates displayed awareness of each of the six strands of learning science in informal environments and commitment to out-of-school-time learning of science. This study supports current reform efforts favoring the integration of informal science instructional strategies in science methods courses of elementary teacher education programs.
NASA Technical Reports Server (NTRS)
Stoner, Mary Cecilia; Hehir, Austin R.; Ivanco, Marie L.; Domack, Marcia S.
2016-01-01
This cost-benefit analysis assesses the benefits of the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. These preliminary, rough order-of-magnitude results report a 46 to 58 percent reduction in production costs and a 7-percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Production cost savings of 35 to 58 percent were reported over the composite manufacturing technique used in this study for comparison; however, the ANNST concept was heavier. In this study, the predicted return on investment of equipment required for the ANNST method was ten cryogenic tank barrels when compared with conventional metallic manufacturing. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. A case study compared these three alternatives for manufacturing a cylinder of specified geometry, with particular focus placed on production costs and process complexity, with cost analyses performed by the analogy and parametric methods. Furthermore, a scalability study was conducted for three tank diameters to assess the highest potential payoff of the ANNST process for manufacture of large-diameter cryogenic tanks. The analytical hierarchy process (AHP) was subsequently used with a group of selected subject matter experts to assess the value of the various benefits achieved by the ANNST method for potential stakeholders. The AHP study results revealed that decreased final cylinder mass and quality assurance were the most valued benefits of cylinder manufacturing methods, therefore emphasizing the relevance of the benefits achieved with the ANNST process for future projects.
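The analytical hierarchy process step mentioned above derives priority weights for the benefit criteria from a pairwise comparison matrix, typically via its principal eigenvector. The matrix entries, criterion names, and the 3x3 size below are invented for illustration; they are not the experts' actual judgements from the study.

```python
# Hedged sketch of an AHP priority-vector calculation with a consistency check.
import numpy as np

criteria = ["final cylinder mass", "quality assurance", "production cost"]
# A[i, j] = how much more important criterion i is than criterion j (Saaty 1-9 scale)
A = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 2.0],
              [1 / 3, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # normalised priority weights
for c, wi in zip(criteria, w):
    print(f"{c}: {wi:.2f}")

# Consistency index CI = (lambda_max - n) / (n - 1); random index RI = 0.58 for n = 3
ci = (np.max(np.real(eigvals)) - len(A)) / (len(A) - 1)
print(f"consistency ratio: {ci / 0.58:.3f}")
```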
Terada, Masaki; Matsushita, Hiroki; Oosugi, Masanori; Inoue, Kazuyasu; Yaegashi, Taku; Anma, Takeshi
2009-03-20
The higher signal-to-noise ratio (SNR) of 3-Tesla magnetic resonance imaging (3-Tesla) can contribute to improved spatial resolution without image deterioration. In this study, we compared the SNR and the apparent diffusion coefficient (ADC) value of diffusion-weighted imaging (DWI) at 3-Tesla with those obtained at 1.5-Tesla magnetic resonance imaging (1.5-Tesla), and we examined high-spatial-resolution imaging of the upper abdominal region at 3-Tesla with respect to the imaging method [respiratory-triggering (RT) method versus free-breathing (BF) method] and artifacts (motion and zebra). Scan parameters were optimized based on phantom and in vivo studies. As a result, 3-Tesla achieved approximately 1.5 times the SNR of 1.5-Tesla, while ADC values showed little difference. Moreover, the RT method was more effective than the BF method in correcting the influence of respiratory movement, yielding image improvement through effective SNR acquisition and artifact reduction. Thus, DWI of the upper abdominal region is a useful sequence for high-spatial-resolution imaging at 3-Tesla.
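The ADC values compared above come from the standard mono-exponential signal model, ADC = ln(S_b0 / S_b) / (b − b0). The sketch below uses example b-values and signal intensities, not the protocol parameters of this study.

```python
# Hedged sketch: apparent diffusion coefficient from two diffusion weightings.
import numpy as np

def adc_map(s_b0, s_b, b0=0.0, b=1000.0):
    """ADC in mm^2/s from signal at two b-values (s/mm^2)."""
    s_b0 = np.clip(s_b0, 1e-6, None)   # avoid log of zero
    s_b = np.clip(s_b, 1e-6, None)
    return np.log(s_b0 / s_b) / (b - b0)

s0 = np.array([1000.0, 800.0])         # signal without diffusion weighting
s1000 = np.array([300.0, 350.0])       # signal at b = 1000 s/mm^2
print(adc_map(s0, s1000))              # roughly 1.2e-3 and 0.83e-3 mm^2/s
```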
NASA Astrophysics Data System (ADS)
Strunin, M. A.; Hiyama, T.
2004-11-01
The wavelet spectral method was applied to aircraft-based measurements of atmospheric turbulence obtained during joint Russian-Japanese research on the atmospheric boundary layer near Yakutsk (eastern Siberia) in April-June 2000. Practical ways to apply Fourier and wavelet methods for aircraft-based turbulence data are described. Comparisons between Fourier and wavelet transform results are shown and they demonstrate, in conjunction with theoretical and experimental restrictions, that the Fourier transform method is not useful for studying non-homogeneous turbulence. The wavelet method is free from many disadvantages of Fourier analysis and can yield more informative results. Comparison of Fourier and Morlet wavelet spectra showed good agreement at high frequencies (small scales). The quality of the wavelet transform and corresponding software was estimated by comparing the original data with restored data constructed with an inverse wavelet transform. A Haar wavelet basis was inappropriate for the turbulence data; the mother wavelet function recommended in this study is the Morlet wavelet. Good agreement was also shown between variances and covariances estimated with different mathematical techniques, i.e. through non-orthogonal wavelet spectra and through eddy correlation methods.
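A continuous Morlet wavelet spectrum of the kind contrasted with the Fourier spectrum above can be computed with PyWavelets, as sketched below. The synthetic sinusoid-plus-noise series stands in for aircraft vertical-wind data; scales, sampling interval, and the scale-averaged spectrum are illustrative choices, not the paper's processing chain.

```python
# Hedged sketch: Morlet continuous wavelet transform of a turbulence-like series.
import numpy as np
import pywt

dt = 0.1                                   # sampling interval, s
t = np.arange(0, 600, dt)
signal = (np.sin(2 * np.pi * 0.5 * t)
          + 0.5 * np.random.default_rng(0).normal(size=t.size))

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dt)

# Scale-averaged "global" wavelet spectrum, comparable to a smoothed Fourier spectrum
global_spectrum = (np.abs(coeffs) ** 2).mean(axis=1)
print(freqs[np.argmax(global_spectrum)])   # expected near the 0.5 Hz component
```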
Nokhbatolfoghahaie, Hanieh; Alikhasi, Marzieh; Chiniforush, Nasim; Khoei, Farzaneh; Safavi, Nassimeh; Yaghoub Zadeh, Behnoush
2013-01-01
Introduction: Today the prevalence of tooth decay has decreased considerably. Related organizations and institutions attribute this to several factors, such as improvements in caries diagnostic equipment and tools, which are capable of detecting caries even in its initial stages. This has reduced costs for patients and markedly increased tooth life span. There are many methods of caries diagnosis, including visual and radiographic methods, fluorescence-based devices such as quantitative light-induced fluorescence (QLF), VistaProof, laser fluorescence (LF or DIAGNOdent) and the fluorescence camera (FC), and digital radiography. Although DIAGNOdent is considered a valuable device for caries diagnosis, there are concerns regarding its efficacy and accuracy. Considering the sensitivity of caries diagnosis and the substantial annual expense borne by governments and patients for caries treatment, finding the best method for early caries detection is of the utmost importance. Numerous studies have compared different diagnostic methods, with conflicting results. The objective of this study is a comparative review of the efficiency of DIAGNOdent relative to visual and radiographic methods in the diagnosis of occlusal tooth surfaces. Methods: PubMed and Google Scholar were searched for clinical trials in English published between 1998 and 2013. Full texts of only 35 articles were available. Conclusion: Considering the sensitivity and specificity reported in the different studies, DIAGNOdent appears to be an appropriate modality for caries detection as a complementary method alongside other methods; its use alone is not sufficient to establish a treatment plan. PMID:25606325
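The sensitivity and specificity figures that such reviews compare are derived from a confusion matrix against a reference diagnosis (for example, histology). The counts below are invented for illustration only.

```python
# Hedged sketch: sensitivity and specificity from confusion-matrix counts.
def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) given true/false positive/negative counts."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical DIAGNOdent readings dichotomised against a gold standard
sens, spec = sensitivity_specificity(tp=42, fp=8, fn=10, tn=40)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```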
NASA Astrophysics Data System (ADS)
de Carvalho, Fábio Romeu; Abe, Jair Minoro
2010-11-01
Two recent non-classical logics have been used for decision making: fuzzy logic and paraconsistent annotated evidential logic Eτ. In this paper we present a simplified version of the fuzzy decision method and compare it with the paraconsistent one. Paraconsistent annotated evidential logic Eτ, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering, and decision making. Intuitively, a formula of logic Eτ has the form p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) found in p. The set of all pairs (a, b), called annotations, when plotted forms the Cartesian unit square (CUS). Under an order relation analogous to that of the real numbers, this set constitutes a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It seeks to systematize the study of knowledge, focusing mainly on fuzzy knowledge (where the meaning itself is not known) and distinguishing it from imprecise knowledge (where the meaning is known but the exact value is not). This logic is similar to the paraconsistent annotated one, in that it attributes a numeric value (only one, not two) to each proposition, so it can be called a one-valued logic. This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by a function f(x). For each element x∈X, we have y = f(x)∈[0, 1]; the number y is called the degree of membership of x in A. Decision-making theories based on these logics have shown themselves to be powerful in many respects relative to more traditional methods, such as those based on statistics. In this paper we present a first study of a simplified version of a decision-making theory based on fuzzy logic (SVMFD) and compare it with the Paraconsistent Decision Method (PDM), based on paraconsistent annotated evidential logic Eτ, which is also summarized in this paper. An example illustrating the two methods is presented, as well as a comparison between them.
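A rough sketch of how an annotation (a, b) can drive a decision is given below, following common presentations of paraconsistent annotated logic in which a certainty degree a − b and a contradiction degree a + b − 1 are compared against a requirement level. The 0.6 threshold, the verdict labels, and the exact rule are illustrative assumptions, not the PDM parameters of this paper.

```python
# Hedged sketch of a paraconsistent-style decision rule over an annotation (a, b).
def paraconsistent_decision(a, b, requirement_level=0.6):
    certainty = a - b              # in [-1, 1]; +1 means certainly true
    contradiction = a + b - 1.0    # in [-1, 1]; +1 means fully contradictory evidence
    if certainty >= requirement_level:
        verdict = "feasible (accept)"
    elif certainty <= -requirement_level:
        verdict = "unfeasible (reject)"
    else:
        verdict = "inconclusive (gather more evidence)"
    return certainty, contradiction, verdict

print(paraconsistent_decision(0.8, 0.1))   # strong favorable evidence -> accept
print(paraconsistent_decision(0.7, 0.6))   # conflicting evidence -> inconclusive
```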
Examining mixing methods in an evaluation of a smoking cessation program.
Betzner, Anne; Lawrenz, Frances P; Thao, Mao
2016-02-01
Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.
Comparison of Provider Types Who Performed Prehospital Lifesaving Interventions: A Prospective Study
2014-12-01
In less than 2 hours, 15 critically ill children were triaged and admitted to the PICU or surge spaces. Conclusions: Identified strengths included...details increasing telemedicine utilization during a 4-year period and outlines program structural changes that improved utilization. Methods: The study...population survival. CSC ICU resource-allocation algorithms (ALGs) exist for adults. Our goal was to evaluate a CSC pandemic ALG for children. Methods
Randomized controlled trials and meta-analysis in medical education: what role do they play?
Cook, David A
2012-01-01
Education researchers seek to understand what works, for whom, in what circumstances. Unfortunately, educational environments are complex and research itself is highly context dependent. Faced with these challenges, some have argued that qualitative methods should supplant quantitative methods such as randomized controlled trials (RCTs) and meta-analysis. I disagree. Good qualitative and mixed-methods research are complementary to, rather than exclusive of, quantitative methods. The complexity and challenges we face should not beguile us into ignoring methods that provide strong evidence. What, then, is the proper role for RCTs and meta-analysis in medical education? First, the choice of study design depends on the research question. RCTs and meta-analysis are appropriate for many, but not all, study goals. They have compelling strengths but also numerous limitations. Second, strong methods will not compensate for a pointless question. RCTs do not advance the science when they make confounded comparisons, or make comparison with no intervention. Third, clinical medicine now faces many of the same challenges we encounter in education. We can learn much from other fields about how to handle complexity in RCTs. Finally, no single study will definitively answer any research question. We need carefully planned, theory-building, programmatic research, reflecting a variety of paradigms and approaches, as we accumulate evidence to change the art and science of education.
Alternative methods to determine headwater benefits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Y.S.; Perlack, R.D.; Sale, M.J.
1997-11-10
In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method. Refinements and improvements to the FDA method were carried out using the James River Basin as a test case.
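The core of a flow duration analysis is the flow duration curve: daily flows ranked by the percentage of time they are equalled or exceeded, from which downstream generation can be roughly estimated. The sketch below is a generic illustration under invented flows, head, efficiency, and turbine capacity; it is not FERC's or ORNL's actual procedure.

```python
# Hedged sketch: flow duration curve and a crude annual-energy estimate.
import numpy as np

def flow_duration_curve(daily_flows_cms):
    """Return (exceedance %, flows sorted high to low) using the Weibull plotting position."""
    q = np.sort(np.asarray(daily_flows_cms))[::-1]
    exceedance = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)
    return exceedance, q

rng = np.random.default_rng(0)
flows = rng.lognormal(mean=3.0, sigma=0.6, size=365)     # synthetic daily flows, m^3/s
exc, q = flow_duration_curve(flows)

# crude energy estimate at a plant with 20 m head, 85% efficiency, capped turbine capacity
usable_q = np.minimum(q, 60.0)                           # turbine capacity, m^3/s
power_mw = 0.85 * 1000 * 9.81 * usable_q * 20.0 / 1e6    # rho * g * Q * H, in MW
annual_energy_gwh = power_mw.mean() * 8760 / 1000
print(f"flow exceeded 50% of the time: {np.interp(50, exc, q):.1f} m^3/s")
print(f"estimated annual generation: {annual_energy_gwh:.1f} GWh")
```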
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-03-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
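The recommended bulk Richardson number estimate amounts to scanning up a profile and taking the first level where the bulk Richardson number exceeds a critical value. The sketch below uses a 0.25 threshold and an invented profile; the GEOS-5 implementation details and threshold may differ.

```python
# Hedged sketch: PBL height from the bulk Richardson number of a profile.
import numpy as np

def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """Heights z (m), virtual potential temperature theta_v (K), winds u, v (m/s);
    index 0 is the lowest level. Returns an interpolated PBL height in metres."""
    wind2 = np.maximum(u ** 2 + v ** 2, 1e-6)           # avoid division by zero
    ri_b = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * wind2)
    above = np.where(ri_b > ri_crit)[0]
    if above.size == 0:
        return z[-1]                                    # PBL top above the profile
    k = above[0]                                        # first level exceeding Ri_crit
    if k == 0:
        return z[0]
    # linear interpolation between levels k-1 and k for a smoother estimate
    frac = (ri_crit - ri_b[k - 1]) / (ri_b[k] - ri_b[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])

z = np.array([10.0, 100.0, 300.0, 600.0, 1000.0, 1500.0, 2000.0])
theta_v = np.array([300.0, 300.1, 300.2, 300.3, 301.5, 304.0, 306.0])
u = np.array([2.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
v = np.zeros_like(u)
print(f"PBL height ~ {pbl_height_bulk_ri(z, theta_v, u, v):.0f} m")
```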