Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
Li, Zhiguang; Kwekel, Joshua C; Chen, Tao
2012-01-01
Functional comparison across microarray platforms is used to assess the comparability or similarity of the biological relevance associated with the gene expression data generated by multiple microarray platforms. Comparisons at the functional level are very important considering that the ultimate purpose of microarray technology is to determine the biological meaning behind the gene expression changes under a specific condition, not just to generate a list of genes. Herein, we present a method named percentage of overlapping functions (POF) and illustrate how it is used to perform the functional comparison of microarray data generated across multiple platforms. This method facilitates the determination of functional differences or similarities in microarray data generated from multiple array platforms across all the functions that are presented on these platforms. This method can also be used to compare the functional differences or similarities between experiments, projects, or laboratories.
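A minimal sketch of the overlap computation the POF idea suggests, assuming each platform's analysis yields a set of significant function identifiers (e.g., GO terms) and that the percentage is taken relative to the union of functions; the exact denominator in the published method may differ, and all names and values here are illustrative:

```python
def percentage_of_overlapping_functions(funcs_a, funcs_b):
    """Return the percentage of functions found by both platforms,
    relative to the union of functions found on either platform."""
    funcs_a, funcs_b = set(funcs_a), set(funcs_b)
    union = funcs_a | funcs_b
    if not union:
        return 0.0
    return 100.0 * len(funcs_a & funcs_b) / len(union)

platform_1 = {"GO:0006915", "GO:0008283", "GO:0006954"}
platform_2 = {"GO:0006915", "GO:0006954", "GO:0016049"}
print(percentage_of_overlapping_functions(platform_1, platform_2))  # 50.0
```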
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so that comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated, and the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
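The adjusted step-up method relies on estimated correlations and is not reproduced here; as a hedged sketch of the two reference procedures named above, the following applies Bonferroni and Hochberg's step-up rule to p-values from paired sensitivity comparisons at several operating points (the p-values are invented for illustration):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    return pvals < alpha / len(pvals)          # rejection decisions

def hochberg_step_up(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)[::-1]            # largest p-value first
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # compare the (rank+1)-th largest p-value against alpha/(rank+1)
        if pvals[idx] <= alpha / (rank + 1):
            reject[order[rank:]] = True        # reject it and all smaller p
            break
    return reject

pvals = [0.012, 0.031, 0.04, 0.2]
print(bonferroni(pvals))          # [ True False False False]
print(hochberg_step_up(pvals))    # [ True False False False]
```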
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F--the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means.
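A minimal sketch of the workflow this abstract describes, translated from a spreadsheet into SciPy; Tukey's HSD is assumed as the pairwise follow-up (scipy.stats.tukey_hsd requires SciPy 1.8 or later), and the data are illustrative:

```python
from scipy import stats

group_a = [23.1, 24.5, 22.8, 25.0]
group_b = [26.2, 27.1, 25.8, 26.9]
group_c = [22.0, 21.5, 23.3, 22.7]

# Step 1: one-way ANOVA returns F, the test statistic, and its p-value.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# Step 2: if H0 (all means equal) is rejected, locate the inequalities
# with pairwise comparisons and 95% confidence intervals.
if p_value < 0.05:
    res = stats.tukey_hsd(group_a, group_b, group_c)
    print(res)                          # pairwise differences and p-values
    print(res.confidence_interval())    # 95% CIs for differences in means
```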
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since uncertainty in articulating the opinions of DMs is unavoidable, interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. An exchange method is designed to generate all the permutations. A novel method of determining the vector of interval weights is proposed that accounts for the randomness in comparing alternatives. A new algorithm for solving decision-making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples illustrate the proposed approach and offer a comparison with methods available in the literature.
Evaluating Blended and Flipped Instruction in Numerical Methods at Multiple Engineering Schools
ERIC Educational Resources Information Center
Clark, Renee; Kaw, Autar; Lou, Yingyan; Scott, Andrew; Besterfield-Sacre, Mary
2018-01-01
With the literature calling for comparisons among technology-enhanced or active-learning pedagogies, a blended versus flipped instructional comparison was made for numerical methods coursework using three engineering schools with diverse student demographics. This study contributes to needed comparisons of enhanced instructional approaches in STEM…
NASA Astrophysics Data System (ADS)
Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.
2018-03-01
In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with variable coefficients in t, using the simplified bilinear method based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.
Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; de Brevern, Alexandre G
2012-09-01
Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used Structural Alphabet named Protein Blocks (PBs) was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs underline that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods.
Comparing Performances of Multiple Comparison Methods in Commonly Used 2 × C Contingency Tables.
Cangur, Sengul; Ankarali, Handan; Pasin, Ozge
2016-12-01
This study briefly reviews multiple comparison methods for contingency tables, including Bonferroni, Holm-Bonferroni, Hochberg, Hommel, Marascuilo, Tukey, Benjamini-Hochberg and Gavrilov-Benjamini-Sarkar, illustrates them with data obtained from a medical study, and examines their performances in a simulation study of 36 scenarios constructed for a 2 × 4 contingency table. The simulation showed that when the sample size exceeds 100, the methods that preserve the nominal alpha level are Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. The Marascuilo method was found to be more conservative than Bonferroni. The Type I error rate for the Hommel method was around 2% in all scenarios. Moreover, when the proportions of three of the populations are equal and the proportion of the fourth population lies ±3 standard deviations from the others, the power of the unadjusted all-pairwise comparison approach is at least slightly higher than that of Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. Consequently, Gavrilov-Benjamini-Sarkar and Holm-Bonferroni showed the best performance in the simulation. The Hommel and Marascuilo methods are not recommended because of their medium or lower performance. In addition, we have written a Minitab macro for multiple comparisons for use in scientific research.
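As a hedged illustration of one step-down procedure from this comparison, the sketch below runs unadjusted all-pairwise chi-square tests on the columns of a 2 × 4 table and then applies the Holm-Bonferroni adjustment; the table values are invented for illustration:

```python
from itertools import combinations
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 45, 28, 60],     # successes per group
                  [70, 55, 72, 40]])    # failures per group

pairs = list(combinations(range(table.shape[1]), 2))
pvals = [chi2_contingency(table[:, [i, j]])[1] for i, j in pairs]

def holm_bonferroni(pvals, alpha=0.05):
    m = len(pvals)
    order = np.argsort(pvals)            # smallest p-value first
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True           # keep rejecting while threshold holds
        else:
            break                        # step-down: stop at the first failure
    return reject

for (i, j), p, r in zip(pairs, pvals, holm_bonferroni(pvals)):
    print(f"groups {i} vs {j}: p={p:.4f} reject={r}")
```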
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I error, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
Understanding Foster Youth Outcomes: Is Propensity Scoring Better than Traditional Methods?
ERIC Educational Resources Information Center
Berzin, Stephanie Cosner
2010-01-01
Objectives: This study seeks to examine the relationship between foster care and outcomes using multiple comparison methods to account for factors that put foster youth at risk independent of care. Methods: Using the National Longitudinal Survey of Youth 1997, matching, propensity scoring, and comparisons to the general population are used to…
An Iterative Solver in the Presence and Absence of Multiplicity for Nonlinear Equations
Özkum, Gülcan
2013-01-01
We develop a high-order fixed-point-type method to approximate a multiple root. Using three functional evaluations per full cycle, a new class of fourth-order methods for this purpose is suggested and established. The methods in the class require knowledge of the multiplicity. We also present a method for nonlinear equations that does not require the multiplicity. To demonstrate the efficiency of the obtained methods, we employ numerical comparisons and basins of attraction, comparing the methods in the complex plane according to their convergence speed and chaotic behavior. PMID:24453914
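The fourth-order class itself is not reproduced here; as a minimal illustration of multiplicity-aware iteration, the sketch below implements the classical modified Newton method, which restores fast convergence at a root of known multiplicity m:

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=50):
    """Iterate x <- x - m*f(x)/f'(x) for a root of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        if abs(fx) < tol or dfx == 0:   # converged, or derivative vanished
            break
        x -= m * fx / dfx
    return x

# f(x) = (x - 1)^3 has a triple root at x = 1; plain Newton converges
# only linearly here, while the m-corrected step lands on it directly.
f = lambda x: (x - 1) ** 3
fp = lambda x: 3 * (x - 1) ** 2
print(modified_newton(f, fp, x0=2.0, m=3))   # 1.0
```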
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment needed to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
ERIC Educational Resources Information Center
Barrows, Russell D.
2007-01-01
A one-way ANOVA experiment is performed to determine whether or not the three standardization methods are statistically different in determining the concentration of the three paraffin analytes. The laboratory exercise asks students to combine the three methods in a single analytical procedure of their own design to determine the concentration of…
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
To compare the rapid enzyme substrate technique with the multiple-tube fermentation technique for the detection of coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. It is suggested that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.
Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Newman, J. C., Jr.
1992-01-01
A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
Fast alignment-free sequence comparison using spaced-word frequencies.
Leimeister, Chris-Andre; Boden, Marcus; Horwege, Sebastian; Lindner, Sebastian; Morgenstern, Burkhard
2014-07-15
Alignment-free methods for sequence comparison are increasingly used for genome analysis and phylogeny reconstruction; they circumvent various difficulties of traditional alignment-based approaches. In particular, alignment-free methods are much faster than pairwise or multiple alignments. They are, however, less accurate than methods based on sequence alignment. Most alignment-free approaches work by comparing the word composition of sequences. A well-known problem with these methods is that neighbouring word matches are far from independent. To reduce the statistical dependency between adjacent word matches, we propose to use 'spaced words', defined by patterns of 'match' and 'don't care' positions, for alignment-free sequence comparison. We describe a fast implementation of this approach using recursive hashing and bit operations, and we show that further improvements can be achieved by using multiple patterns instead of single patterns. To evaluate our approach, we use spaced-word frequencies as a basis for fast phylogeny reconstruction. Using real-world and simulated sequence data, we demonstrate that our multiple-pattern approach produces better phylogenies than approaches relying on contiguous words. Our program is freely available at http://spaced.gobics.de/.
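A minimal sketch of spaced-word counting with a single binary pattern, where '1' marks a match position and '0' a don't-care position; the published implementation adds recursive hashing, bit operations and multiple patterns, none of which are shown here:

```python
from collections import Counter

def spaced_word_counts(seq, pattern="1101"):
    """Count spaced words: characters at the '1' positions of each window."""
    match_pos = [i for i, c in enumerate(pattern) if c == "1"]
    counts = Counter()
    for start in range(len(seq) - len(pattern) + 1):
        word = "".join(seq[start + i] for i in match_pos)
        counts[word] += 1
    return counts

# Frequency vectors like this one, computed per sequence, can then be
# compared pairwise to build a distance matrix for phylogeny reconstruction.
print(spaced_word_counts("ACGTACGTAC"))
```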
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
A novel statistical method for quantitative comparison of multiple ChIP-seq datasets.
Chen, Li; Wang, Chi; Qin, Zhaohui S; Wu, Hao
2015-06-15
ChIP-seq is a powerful technology to measure protein binding or histone modification strength on a whole-genome scale. Although a number of methods are available for single ChIP-seq data analysis (e.g. 'peak detection'), rigorous statistical methods for quantitative comparison of multiple ChIP-seq datasets that account for control experiments, signal-to-noise ratios, biological variation, and multiple-factor experimental designs are under-developed. In this work, we develop a statistical method to perform quantitative comparisons of multiple ChIP-seq datasets and detect genomic regions showing differential protein binding or histone modification. We first detect peaks in all datasets and then take their union to form a single set of candidate regions. The read counts from the IP experiment at the candidate regions are assumed to follow a Poisson distribution. The underlying Poisson rates are modeled as an experiment-specific function of artifacts and biological signals. We then obtain the estimated biological signals and compare them through a hypothesis testing procedure in a linear model framework. Simulations and real data analyses demonstrate that the proposed method provides more accurate and robust results than existing ones. An R software package, ChIPComp, is freely available at http://web1.sph.emory.edu/users/hwu30/software/ChIPComp.html.
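ChIPComp itself is an R package; as a hedged sketch of the core statistical idea in Python, the following models IP read counts at one candidate region as Poisson with a log-linear condition effect and a sequencing-depth offset, then reads off the condition coefficient and its p-value (all numbers are illustrative):

```python
import numpy as np
import statsmodels.api as sm

counts    = np.array([55, 61, 48, 90, 102, 95])   # IP reads in one region
condition = np.array([0, 0, 0, 1, 1, 1])          # two conditions, 3 reps each
log_depth = np.log(np.array([1.0, 1.1, 0.9, 1.0, 1.05, 0.95]))  # size factors

X = sm.add_constant(condition)                    # intercept + condition
model = sm.GLM(counts, X, family=sm.families.Poisson(), offset=log_depth)
fit = model.fit()
print(fit.params[1], fit.pvalues[1])   # log fold-change and its p-value
```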
A method for determining the conversion efficiency of multiple-cell photovoltaic devices
NASA Astrophysics Data System (ADS)
Glatfelter, Troy; Burdick, Joseph
A method for accurately determining the conversion efficiency of any multiple-cell photovoltaic device under any arbitrary reference spectrum is presented. This method makes it possible to obtain not only the short-circuit current, but also the fill factor, the open-circuit voltage, and hence the conversion efficiency of a multiple-cell device under any reference spectrum. Results are presented which allow a comparison of the I-V parameters of two-terminal, two- and three-cell tandem devices measured under a multiple-source simulator with the same parameters measured under different reference spectra. It is determined that the uncertainty in the conversion efficiency of a multiple-cell photovoltaic device obtained with this method is less than +/-3 percent.
Multiple Testing with Modified Bonferroni Methods.
ERIC Educational Resources Information Center
Li, Jianmin; And Others
This paper discusses the issue of multiple testing and overall Type I error rates in contexts other than multiple comparisons of means. It demonstrates, using a 5 x 5 correlation matrix, the application of 5 recently developed modified Bonferroni procedures developed by the following authors: (1) Y. Hochberg (1988); (2) B. S. Holland and M. D.…
NASA Astrophysics Data System (ADS)
Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah
2017-08-01
Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular the ranking of multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure which utilizes a preference graph ς(V, A). Two nodes are adjacent if they are compared in a pairwise comparison, whereby the assigned arc is oriented towards the more preferred node. In this paper, the Potential Method is used to solve a catering service selection problem. The result obtained by the Potential Method is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
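Both methods start from reciprocal pairwise comparison judgments; as background, the sketch below derives a priority vector with the classical principal-eigenvector approach of AHP. This is not the Potential Method itself, which operates on the preference graph, and the judgment matrix is invented for illustration:

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])        # reciprocal pairwise judgments

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()  # normalized priority vector
print(weights)                          # ranking weights of the alternatives
```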
Hobart, J; Cano, S
2009-02-01
In this monograph we examine the added value of new psychometric methods (Rasch measurement and Item Response Theory) over traditional psychometric approaches by comparing and contrasting their psychometric evaluations of existing sets of rating scale data. We have concentrated on Rasch measurement rather than Item Response Theory because we believe that it is the more advantageous method for health measurement from a conceptual, theoretical and practical perspective. Our intention is to provide an authoritative document that describes the principles of Rasch measurement and the practice of Rasch analysis in a clear, detailed, non-technical form that is accurate and accessible to clinicians and researchers in health measurement. A comparison was undertaken of traditional and new psychometric methods in five large sets of rating scale data: (1) evaluation of the Rivermead Mobility Index (RMI) in data from 666 participants in the Cannabis in Multiple Sclerosis (CAMS) study; (2) evaluation of the Multiple Sclerosis Impact Scale (MSIS-29) in data from 1725 people with multiple sclerosis; (3) evaluation of test-retest reliability of MSIS-29 in data from 150 people with multiple sclerosis; (4) examination of the use of Rasch analysis to equate scales purporting to measure the same health construct in 585 people with multiple sclerosis; and (5) comparison of relative responsiveness of the Barthel Index and Functional Independence Measure in data from 1400 people undergoing neurorehabilitation. Both Rasch measurement and Item Response Theory are conceptually and theoretically superior to traditional psychometric methods. Findings from each of the five studies show that Rasch analysis is empirically superior to traditional psychometric methods for evaluating rating scales, developing rating scales, analysing rating scale data, understanding and measuring stability and change, and understanding the health constructs we seek to quantify. There is considerable added value in using Rasch analysis rather than traditional psychometric methods in health measurement. Future research directions include the need to reproduce our findings in a range of clinical populations, detailed head-to-head comparisons of Rasch analysis and Item Response Theory, and the application of Rasch analysis to clinical practice.
Two enzyme-linked immunosorbent assay (ELISA) methods were evaluated for the determination of 3,5,6-trichloro-2-pyridinol (3,5,6-TCP) in multiple sample media (dust, soil, food, and urine). The dust and soil samples were analyzed by a commercial RaPID immunoassay testing kit. ...
Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd
2005-09-01
We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis, and from the survival rate when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival by the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods had better Hosmer-Lemeshow goodness-of-fit statistics than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs as multiplication can. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
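A minimal sketch of the two prediction rules being compared, assuming a precomputed lookup of SRRs per diagnosis code; the codes and SRR values below are invented for illustration:

```python
import math

srr = {"807.03": 0.95, "861.21": 0.80, "864.04": 0.90}  # illustrative values

def p_survival_multiplicative(codes, srr):
    """ICISS-style: multiply the SRRs of all of a patient's injuries."""
    return math.prod(srr[c] for c in codes)

def p_survival_minimum(codes, srr):
    """Worst-injury rule: survival predicted from the single lowest SRR."""
    return min(srr[c] for c in codes)

codes = ["807.03", "861.21", "864.04"]
print(p_survival_multiplicative(codes, srr))  # 0.684
print(p_survival_minimum(codes, srr))         # 0.80
```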
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
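A hedged sketch of the bootstrap option: resample the per-cycle count records with replacement, re-estimate the mass from each resample, and take the spread of the estimates as the uncertainty. The mass_from_moments function below is a placeholder for the full point-model analysis, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def mass_from_moments(counts):
    # Placeholder: a real implementation maps the first three factorial
    # moments of the count distribution to an effective mass.
    return counts.mean()

def bootstrap_sigma(counts, n_boot=1000):
    estimates = [mass_from_moments(rng.choice(counts, size=len(counts)))
                 for _ in range(n_boot)]            # resample with replacement
    return np.std(estimates, ddof=1)

cycle_counts = rng.poisson(12.0, size=500)   # synthetic cycle data
print(bootstrap_sigma(cycle_counts))
```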
On the method of Ermakov and Zolotukhin for multiple integration
NASA Technical Reports Server (NTRS)
Cranley, R.; Patterson, T. N. L.
1971-01-01
The method of Ermakov and Zolotukhin is discussed along with its later developments. By introducing the idea of pseudo-implementation, a practical assessment of the method is made. The performance of the method is found to be unimpressive in comparison with a recent regression method.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize a sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one of multidimensional integration involving two independent variables, and the other of computing the second-order scattering contribution to the sky radiance.
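For contrast with the systematic Conroy point sets, a minimal Monte Carlo estimate of a two-variable integral over the unit square, averaging the integrand at uniformly random sample points:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_integrate_2d(f, n=100_000):
    """Monte Carlo estimate of the integral of f over [0,1]^2."""
    x, y = rng.random(n), rng.random(n)
    return f(x, y).mean()              # region volume is 1, so mean = integral

# The integral of x*y over the unit square is exactly 1/4.
print(mc_integrate_2d(lambda x, y: x * y))   # ~0.25
```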
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace the spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are among these impacts.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial and also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too few regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
Mean Comparison: Manifest Variable versus Latent Variable
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2006-01-01
An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction: Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care, where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses, in which outcomes are considered separately and their joint relationship under uncertainty is ignored, lead to incorrect inference regarding preferred strategies. Objective: To consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods: A multiple-outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost-disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing the probability of strategies minimising ENL. Results: Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home (λ1) exceeds $1,068, or is dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where cost and outcomes are jointly considered and the optimal strategy depends on threshold values. For example, PEACH minimises ENL when λ1 = $2,000 and λ2 = $2,000 (threshold value for dying at home), with a 51.6% chance of PEACH being cost-effective. Conclusion: Comparison in CDU space and associated summary measures have distinct advantages for multiple-domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effects, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
Double-multiple streamtube model for studying vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Paraschivoiu, Ion
1988-08-01
This work describes the present state of the art in the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.
Simultaneous Inference Procedures for Means.
ERIC Educational Resources Information Center
Krishnaiah, P. R.
Some aspects of simultaneous tests for means are reviewed. Specifically, the comparison of univariate or multivariate normal populations based on the values of the means or mean vectors when the variances or covariance matrices are equal is discussed. Tukey's and Dunnett's tests for multiple comparisons of means, Scheffe's method of examining…
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and to expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
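A sketch of the idea behind the G.C.D. method, using SymPy for illustration rather than the original FORTRAN: dividing a polynomial by gcd(p, p') strips the repeated factors, leaving a square-free polynomial whose now-simple zeros standard iterations such as Newton's handle well:

```python
import sympy as sp

x = sp.symbols("x")
p = (x - 1)**3 * (x + 2)**2 * (x - 5)        # multiple zeros at 1 and -2

g = sp.gcd(p, sp.diff(p, x))                 # gcd(p, p') carries repeated factors
square_free = sp.cancel(p / g)               # (x - 1)*(x + 2)*(x - 5)
print(sp.factor(square_free))
print(sp.solve(square_free, x))              # [-2, 1, 5], all simple zeros
```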
USDA-ARS's Scientific Manuscript database
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
Parent-Collected Behavioral Observations: An Empirical Comparison of Methods
ERIC Educational Resources Information Center
Nadler, Cy B.; Roberts, Mark W.
2013-01-01
Treatments for disruptive behaviors are often guided by parent reports on questionnaires, rather than by multiple methods of assessment. Professional observations and clinic analogs exist to complement questionnaires, but parents can also collect useful behavioral observations to inform and guide treatment. Two parent observation methods of child…
Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción
2012-06-01
To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records from thirteen PHC teams in Catalonia (Spain), covering assigned patients requiring care in 2008. The socio-demographic variables were co-morbidity and costs. The methods compared were: a) the Combined Comorbidity Index (CCI), an index developed from the scores of acute and chronic episodes; b) the Charlson Index (ChI); and c) the Adjusted Clinical Groups case-mix resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each measurement of multiple morbidity, compared by the coefficient of determination (R²), p < .05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained R² = 50.4%, the ChI R² = 29.2%, and the RUB R² = 39.7% of the variability in cost. The behaviour of the CCI is acceptable, albeit with low scores (1 to 3 points), showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in the comparison of the case-mix.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model combining a multiple linear regression model with the fuzzy c-means method. This research involved relationships between 20 variates of the topsoil, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower mean square error.
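A hedged sketch of the hybrid idea: soft-cluster the observations with fuzzy c-means (c = 2, as in the paper) and fit a separate membership-weighted linear regression within each cluster. The compact FCM loop and synthetic data below are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100):
    """Return soft cluster memberships U (rows sum to 1) via plain FCM."""
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

# Synthetic stand-in for soil covariates (2 of the 20 variates) and yield.
X = rng.normal(size=(60, 2))
y = 3.0 + X @ np.array([1.5, -0.8]) + rng.normal(scale=0.3, size=60)

U = fuzzy_c_means(X, c=2)
A = np.column_stack([np.ones(len(X)), X])
for k in range(U.shape[1]):
    w = np.sqrt(U[:, k])                 # weight each row by its membership
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    print(f"cluster {k} coefficients:", coef)
```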
A Comparison of Cut Scores Using Multiple Standard Setting Methods.
ERIC Educational Resources Information Center
Impara, James C.; Plake, Barbara S.
This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods for analysing multiple traits, most are suited to detecting common variants associated with multiple traits. However, because of the low minor allele frequencies of rare variants, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test association between multiple traits and the rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. We then take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to differing directions of effect among the causal variants.
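The ADA procedure itself adaptively truncates and weights the per-variant p-values; as a minimal illustration of the combining-p-values idea only, Fisher's method can be applied to the single-variant p-values from a region:

```python
from scipy.stats import combine_pvalues

# One p-value per rare variant in the region (illustrative values),
# e.g. from per-variant reverse regression tests against the traits.
pvals = [0.04, 0.20, 0.003, 0.65, 0.11]

stat, p_combined = combine_pvalues(pvals, method="fisher")
print(stat, p_combined)   # region-level test statistic and p-value
```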
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced: comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a strategy that decouples quantification from identification was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferring comparisons or label-free strategies. Under the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins according to retention time and accurate mass. This strategy makes protein identification possible for all samples using a single pooled sample, therefore giving good reproducibility in protein identification across multiple samples, and allows peptide identification to be optimized separately so as to identify more proteins.
Evaluation of MIMIC-Model Methods for DIF Testing with Comparison to Two-Group Analysis
ERIC Educational Resources Information Center
Woods, Carol M.
2009-01-01
Differential item functioning (DIF) occurs when an item on a test or questionnaire has different measurement properties for 1 group of people versus another, irrespective of mean differences on the construct. This study focuses on the use of multiple-indicator multiple-cause (MIMIC) structural equation models for DIF testing, parameterized as item…
A Comparison of Two Scoring Methods for an Automated Speech Scoring System
ERIC Educational Resources Information Center
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David
2012-01-01
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
ERIC Educational Resources Information Center
Roid, Gale; And Others
Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, features effective for extracting MS lesions are selected using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. The algorithm was evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and by radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and efficiency of the proposed approach.
A comparison of multiple imputation methods for incomplete longitudinal binary data.
Yamaguchi, Yusuke; Misumi, Toshihiro; Maruo, Kazushi
2018-01-01
Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining a valid estimation of treatment effects under a missing-at-random assumption. Although there are a variety of multiple imputation methods for longitudinal binary data, only a limited number of studies have reported on their relative performance. Moreover, when focusing on the treatment effect throughout a period, which has often been used in clinical evaluations of specific disease areas, no definitive investigations comparing the methods have been available. We conducted an extensive simulation study to examine the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints of responder rates, at a specified time point and throughout a period, were assessed. The simulation study suggested that results from the naive approaches of single imputation with non-responders and complete case analysis could be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model were recommended for obtaining unbiased and robust estimates of the treatment effect. The methods are illustrated with data from a mental health study.
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST.
NASA Astrophysics Data System (ADS)
Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.
2008-10-01
Multi-period differences of technical and financial performances are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the ARGOS software. These methods provide complementary detailed information: Malmquist discriminates between technological and management progress, while PROMETHEE captures two dimensions of performance, service to the community and enterprise performance, which are often in conflict.
Reaction schemes visualized in network form: the syntheses of strychnine as an example.
Proudfoot, John R
2013-05-24
Representation of synthesis sequences in a network form provides an effective method for the comparison of multiple reaction schemes and an opportunity to emphasize features, such as reaction scale, that are often relegated to experimental sections. An example of data formatting that allows construction of network maps in Cytoscape is presented, along with maps that illustrate the comparison of multiple reaction sequences, comparison of scaffold changes within sequences, and consolidation to highlight common key intermediates used across sequences. The 17 different synthetic routes reported for strychnine are used as an example basis set. The reaction maps presented required significant data extraction and curation; a standardized tabular format for reporting reaction information, if applied in a consistent way, could allow the automated combination of reaction information across different sources.
NASA Technical Reports Server (NTRS)
Westmeyer, Paul A. (Inventor); Wertenberg, Russell F. (Inventor); Krage, Frederick J. (Inventor); Riegel, Jack F. (Inventor)
2017-01-01
An authentication procedure utilizes multiple independent sources of data to determine whether usage of a device, such as a desktop computer, is authorized. When a comparison indicates an anomaly from the base-line usage data, the system provides a notice that access of the first device is not authorized.
High-energy multiple muons and heavy primary cosmic-rays
NASA Technical Reports Server (NTRS)
Mizutani, K.; Sato, T.; Takahashi, T.; Higashi, S.
1985-01-01
Three-dimensional simulations of high-energy multiple muons were carried out. Regarding the lateral spread, comparison with deep underground observations indicates that the primary cosmic rays include a high proportion of heavy nuclei. A method to determine the average mass number of primary particles at energies around 10^15 eV is suggested.
ERIC Educational Resources Information Center
Tjaden, Kris; Lam, Jennifer; Wilding, Greg
2013-01-01
Purpose: The impact of clear speech, increased vocal intensity, and rate reduction on acoustic characteristics of vowels was compared in speakers with Parkinson's disease (PD), speakers with multiple sclerosis (MS), and healthy controls. Method: Speakers read sentences in habitual, clear, loud, and slow conditions. Variations in clarity,…
eShadow: A tool for comparing closely related sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovcharenko, Ivan; Boffelli, Dario; Loots, Gabriela G.
2004-01-15
Primate sequence comparisons are difficult to interpret due to the high degree of sequence similarity shared between such closely related species. Recently, a novel method, phylogenetic shadowing, has been pioneered for predicting functional elements in the human genome through the analysis of multiple primate sequence alignments. We have expanded this theoretical approach to create a computational tool, eShadow, for the identification of elements under selective pressure in multiple sequence alignments of closely related genomes, such as in comparisons of human to primate or mouse to rat DNA. This tool integrates two different statistical methods and allows for the dynamic visualization of the resulting conservation profile. eShadow also includes a versatile optimization module capable of training the underlying Hidden Markov Model to differentially predict functional sequences. This module grants the tool high flexibility in the analysis of multiple sequence alignments and in comparing sequences with different divergence rates. Here, we describe the eShadow comparative tool and its potential uses for analyzing both multiple nucleotide and protein alignments to predict putative functional elements. The eShadow tool is publicly available at http://eshadow.dcode.org/
ERIC Educational Resources Information Center
Grow, Laura L.; Kodak, Tiffany; Carr, James E.
2014-01-01
Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore, 2011).…
Kim, Eun Sook; Cao, Chunhua
2015-01-01
Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how to model the intra-class group correlation (i.e., correlation between random effect factors for groups within cluster). The results of simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in the latent group mean testing across three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.
Comparison of multiple gene assembly methods for metabolic engineering
Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries
2007-01-01
A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...
ERIC Educational Resources Information Center
Everson, Howard T.; And Others
This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANNs and AIMS methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…
Agopian, A J; Evans, Jane A; Lupo, Philip J
2018-01-15
It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big data strategies for prioritizing MCA patterns may provide new avenues for better understanding co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
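The observed-to-expected ratio described above is simple arithmetic; the sketch below, with hypothetical counts, compares the observed co-occurrence count against the expectation under independence.

```python
# Sketch of the observed/expected (O/E) ratio for co-occurrence of two birth
# defects A and B under an independence assumption; all counts are hypothetical.
def observed_expected_ratio(n_ab, n_a, n_b, n_total):
    expected = n_a * n_b / n_total   # expected co-occurrences if A and B are independent
    return n_ab / expected

# Example: 12 infants with both defects, 300 with A, 200 with B, 100,000 births.
print(observed_expected_ratio(12, 300, 200, 100_000))  # O/E = 20.0
```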
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
Implementation of false discovery rate for exploring novel paradigms and trait dimensions with ERPs.
Crowley, Michael J; Wu, Jia; McCreary, Scott; Miller, Kelly; Mayes, Linda C
2012-01-01
False discovery rate (FDR) is a multiple comparison procedure that targets the expected proportion of false discoveries among the discoveries. Employing FDR methods in event-related potential (ERP) research provides an approach to explore new ERP paradigms and ERP-psychological trait/behavior relations. In Study 1, we examined neural responses to escape behavior from an aversive noise. In Study 2, we correlated a relatively unexplored trait dimension, ostracism, with neural response. In both situations we focused on the frontal cortical region, applying channel-by-time plots to display statistically significant uncorrected data and FDR-corrected data, controlling for multiple comparisons.
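For readers unfamiliar with FDR control, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure; it is one common FDR method, and the study does not specify which exact variant it used.

```python
# Minimal Benjamini-Hochberg FDR sketch for a vector of p-values, as might be
# applied channel-by-time in an ERP analysis; the example p-values are invented.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m           # BH thresholds i*q/m
    below = p[order] <= thresh
    # Largest i with p_(i) <= i*q/m; everything up to it is a discovery.
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.27, 0.76]))  # first two survive
```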
Jensen, Scott A; Blumberg, Sean; Browning, Megan
2017-09-01
Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.
On assessing bioequivalence and interchangeability between generics based on indirect comparisons.
Zheng, Jiayin; Chow, Shein-Chung; Yuan, Mengdie
2017-08-30
As more and more generics become available in the marketplace, safety and efficacy concerns may arise as a result of the interchangeable use of approved generics. However, bioequivalence assessment among generics of the innovative drug product is not required for regulatory approval. In practice, approved generics are often used interchangeably without any mechanism of safety monitoring. In this article, based on indirect comparisons, we propose several methods for assessing bioequivalence and interchangeability between generics. The applicability of the methods and the similarity assumptions are discussed, as well as the inappropriateness of directly adopting the adjusted indirect comparison for comparisons among generics. In addition, some extensions are given to take into consideration important topics in clinical trials for bioequivalence assessment, for example, multiple comparisons and simultaneously testing bioequivalence among three generics. Extensive simulation studies were conducted to investigate the performance of the proposed methods. Studies of malaria generics and HIV/AIDS generics prequalified by the WHO are used as real examples to demonstrate the use of the methods. Copyright © 2017 John Wiley & Sons, Ltd.
D. V. Shaw; R. W. Allard
1981-01-01
Two methods of estimating the proportion of self-fertilization as opposed to outcrossing in plant populations are described. The first method makes use of marker loci one at a time; the second method makes use of multiple marker loci simultaneously. Comparisons of the estimates of proportions of selfing and outcrossing obtained using the two methods are shown to yield...
Multiple Comparisons of Observation Means--Are the Means Significantly Different?
ERIC Educational Resources Information Center
Fahidy, T. Z.
2009-01-01
Several currently popular methods of ascertaining which treatment (population) means are different, via random samples obtained under each treatment, are briefly described and illustrated by evaluating catalyst performance in a chemical reactor.
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.
Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey
2011-07-14
DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for reconstruction and comparison of biological networks.
ERIC Educational Resources Information Center
Chin, Doris B.; Chi, Min; Schwartz, Daniel L.
2016-01-01
A common approach for introducing students to a new science concept is to present them with multiple cases of the phenomenon and ask them to explore. The expectation is that students will naturally take advantage of the multiple cases to support their learning and seek an underlying principle for the phenomenon. However, the success of such tasks…
Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.
2013-01-01
Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
Multiple-3D-object secure information system based on phase shifting method and single interference.
Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam
2016-05-20
We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, five phase functions (PFs) are decreased to three PFs, in comparison with our previous method, which implies that one cross beam splitter is utilized to implement the single decryption interference. Moreover, the advantages of the proposed scheme also include: each 3D object can be decrypted discretionarily without decrypting a series of other objects earlier; the quality of the decrypted slice image of each object is high according to the correlation coefficient values, none of which is lower than 0.95; no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
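As an illustration of the three-step phase-shifting arithmetic such a scheme relies on, here is a hedged numpy sketch assuming interferograms recorded at phase shifts of -2π/3, 0, and +2π/3; the paper's actual shift values and optical layout are not reproduced here.

```python
# Minimal sketch of three-step phase-shifting recovery, assuming intensity
# images at shifts of -2*pi/3, 0, +2*pi/3. Illustration only, not the paper's
# exact encryption/decryption pipeline.
import numpy as np

def three_step_phase(i1, i2, i3):
    """Recover the wrapped phase from three phase-shifted intensity images."""
    # With I_k = A + B*cos(phi + d_k): I1-I3 = sqrt(3)*B*sin(phi),
    # 2*I2-I1-I3 = 3*B*cos(phi), so arctan2 recovers phi.
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build fringes from a known phase and recover it.
phi = np.linspace(-np.pi, np.pi, 5)
a, b = 1.0, 0.5
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)
print(np.allclose(three_step_phase(i1, i2, i3), phi))  # True up to wrapping
```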
Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) User's Guide
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
This report is a User's Guide for the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES). ProDiMES is a standard benchmarking problem and a set of evaluation metrics to enable the comparison of candidate aircraft engine gas path diagnostic methods. This Matlab (The Mathworks, Inc.) based software tool enables users to independently develop and evaluate diagnostic methods. Additionally, a set of blind test case data is also distributed as part of the software. This will enable the side-by-side comparison of diagnostic approaches developed by multiple users. The User's Guide describes the various components of ProDiMES, and provides instructions for the installation and operation of the tool.
Stream macroinvertebrate collection methods described in the Rapid Bioassessment Protocols (RBPs) have been used widely throughout the U.S. The first edition of the RBP manual in 1989 described a single habitat approach that focused on riffles and runs, where macroinvertebrate d...
Methods for detecting total coliform bacteria in drinking water were compared using 1483 different drinking water samples from 15 small community water systems in Vermont and New Hampshire. The methods included the membrane filter (MF) technique, a ten tube fermentation tube tech...
ERIC Educational Resources Information Center
Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.
2014-01-01
In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional…
Krumm, Rainer; Dugas, Martin
2016-01-01
Introduction Medical documentation is applied in various settings including patient care and clinical research. Since procedures of medical documentation are heterogeneous and continually evolving, secondary use of medical data is complicated. Development of medical forms, merging of data from different sources, and meta-analyses of different data sets are currently a predominantly manual process and therefore difficult and cumbersome. Available applications to automate these processes are limited. In particular, tools to compare multiple documentation forms are missing. The objective of this work is to design, implement and evaluate the new system ODMSummary for comparison of multiple forms with a high number of semantically annotated data elements and a high level of usability. Methods System requirements are the capability to summarize and compare a set of forms, to estimate the documentation effort, to track changes in different versions of forms, and to find comparable items in different forms. Forms are provided in Operational Data Model format with semantic annotations from the Unified Medical Language System. 12 medical experts were invited to participate in a 3-phase evaluation of the tool regarding usability. Results ODMSummary (available at https://odmtoolbox.uni-muenster.de/summary/summary.html) provides a structured overview of multiple forms and their documentation fields. This comparison enables medical experts to assess multiple forms or whole datasets for secondary use. System usability was optimized based on expert feedback. Discussion The evaluation demonstrates that feedback from domain experts is needed to identify usability issues. In conclusion, this work shows that automatic comparison of multiple forms is feasible and the results are usable for medical experts. PMID:27736972
Li, Yin; Liao, Ming; He, Xiao; Zhou, Yi; Luo, Rong; Li, Hongtao; Wang, Yun; He, Min
2012-11-01
To compare the effects of acetonitrile precipitation, ethanol precipitation, and multiple affinity chromatography column Human 14 removal in eliminating high-abundance proteins from human serum, elimination of serum high-abundance proteins was performed with each of the three methods, and Bis-Tris mini gel electrophoresis and two-dimensional gel electrophoresis were used to assess the effect. Grey value analysis of the 1-DE gels showed that after serum was processed by the acetonitrile method, the multiple affinity chromatography column Human 14 removal method, and the ethanol method, the grey value of albumin changed from the original value of 19 to 157.2, 40.8, and 8.2, respectively. 2-DE analysis indicated that with the multiple affinity chromatography column Human 14 method, the number of detected protein spots increased noticeably by 137 compared to the original serum. After processing by the acetonitrile and ethanol methods, the number of protein spots decreased, but low-abundance protein spots emerged. Acetonitrile precipitation could eliminate the vast majority of high-abundance proteins in serum and yield more proteins of low molecular weight; ethanol precipitation could eliminate part of the high-abundance proteins in serum, but harvested fewer low-abundance proteins; and the multiple affinity chromatography column Human 14 method effectively removed the high-abundance proteins while retaining a large number of low-abundance proteins.
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further using multiple metrics with much larger scale comparisons, prospective testing, and assessment of different fingerprints and DNN architectures beyond those used.
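The metrics named above are all available in scikit-learn; the sketch below shows how such a multi-metric comparison might be computed for one model on one data set. The arrays are stand-ins, not the study's data.

```python
# Sketch of the multi-metric model comparison described above, using
# scikit-learn; y_true/y_score stand in for one classifier on one data set.
import numpy as np
from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)  # hard labels at a 0.5 threshold

metrics = {
    "AUC": roc_auc_score(y_true, y_score),
    "F1": f1_score(y_true, y_pred),
    "Cohen kappa": cohen_kappa_score(y_true, y_pred),
    "MCC": matthews_corrcoef(y_true, y_pred),
}
# Rank-normalizing each metric across models/data sets before averaging, as in
# the study, lets metrics on different scales be combined.
print(metrics)
```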
ERIC Educational Resources Information Center
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.
2009-01-01
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Bradley, Jennifer; Simpson, Emma; Poliakov, Ivan; Matthews, John N S; Olivier, Patrick; Adamson, Ashley J; Foster, Emma
2016-06-09
Online dietary assessment tools offer a convenient, low cost alternative to traditional dietary assessment methods such as weighed records and face-to-face interviewer-led 24-h recalls. INTAKE24 is an online multiple pass 24-h recall tool developed for use with 11-24 year-olds. The aim of the study was to undertake a comparison of INTAKE24 (the test method) with interviewer-led multiple pass 24-h recalls (the comparison method) in 180 people aged 11-24 years. Each participant completed both an INTAKE24 24-h recall and an interviewer-led 24-h recall on the same day on four occasions over a one-month period. The daily energy and nutrient intakes reported in INTAKE24 were compared to those reported in the interviewer-led recall. Mean intakes reported using INTAKE24 were similar to the intakes reported in the interviewer-led recall for energy and macronutrients. INTAKE24 was found to underestimate energy intake by 1% on average compared to the interviewer-led recall, with limits of agreement ranging from -49% to +93%. Mean intakes of all macronutrients and micronutrients (except non-milk extrinsic sugars) were within 4% of the interviewer-led recall. Dietary assessment that utilises technology may offer a viable alternative and be more engaging than paper based methods, particularly for children and young adults.
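Percentage limits of agreement of the kind reported here are commonly obtained from a Bland-Altman analysis of log-transformed intakes; the sketch below illustrates that calculation on synthetic data and is not the authors' exact procedure.

```python
# Sketch of a Bland-Altman analysis on log-transformed paired intakes, one
# common way to obtain percentage limits of agreement. Data are synthetic
# stand-ins, not the INTAKE24 study data.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.lognormal(mean=7.6, sigma=0.3, size=180)            # interviewer-led recall (kcal)
test = reference * rng.lognormal(mean=0.0, sigma=0.25, size=180)    # online recall

d = np.log(test / reference)            # paired log ratios
bias = d.mean()
half_width = 1.96 * d.std(ddof=1)
# Back-transform to ratios; a ratio of 0.51 corresponds to -49%, 1.93 to +93%.
print(f"mean ratio: {np.exp(bias):.3f}")
print(f"limits of agreement: {np.exp(bias - half_width):.2f} to {np.exp(bias + half_width):.2f}")
```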
Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact
Leach, Allison M.; Compton, Jana E.; Galloway, James N.; Andrews, Jennifer
2017-01-01
When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate nitrogen footprints using the Nitrogen Footprint Tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This article compares those seven institutions’ results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, amount of food served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The comparisons also pointed to differences in system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions’ ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. PMID:29350218
ERIC Educational Resources Information Center
Stevens, J. M.; And Others
1977-01-01
Five of the medical schools in the University of London collaborated in administering one multiple choice question paper in obstetrics and gynecology, and results showed differences in performance between the five schools on questions and alternatives within questions. The rank order of the schools may result from differences in teaching methods.…
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant analysis (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B
2013-04-23
Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
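A sketch in the spirit of a compression-based distance follows, using the widely known normalized compression distance (NCD) formula with zlib; the CBD metric defined in the paper is built on the same idea but may differ in detail.

```python
# Sketch of a compression-based distance between two sequence data sets,
# using the standard normalized compression distance (NCD) with zlib.
# The paper's CBD metric is related but not necessarily identical.
import zlib

def c(data: bytes) -> int:
    """Compressed size as a proxy for information content."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

seq1 = b"ACGTACGTACGT" * 50
seq2 = b"ACGTACGAACGT" * 50
# Near 0 for (nearly) identical inputs; larger when the sets share less.
print(ncd(seq1, seq1), ncd(seq1, seq2))
```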
Multiple infrared bands absorber based on multilayer gratings
NASA Astrophysics Data System (ADS)
Liu, Xiaoyi; Gao, Jinsong; Yang, Haigui; Wang, Xiaoyi; Guo, Chengli
2018-03-01
The present study offers an Ag/Si multilayer-grating microstructure based on an Si substrate. The microstructure exhibits designable narrowband absorption in multiple infrared wavebands, especially in mid- and long-wave infrared atmospheric windows. We investigate its resonance mode mechanism, and calculate the resonance wavelengths by the Fabry-Perot and metal-insulator-metal theories for comparison with the simulation results. Furthermore, we summarize the controlling rules of the absorption peak wavelength of the microstructure to provide a new method for generating a Si-based device with multiple working bands in infrared.
Benefits of Using Planned Comparisons Rather Than Post Hoc Tests: A Brief Review with Examples.
ERIC Educational Resources Information Center
DuRapau, Theresa M.
The rationale behind analysis of variance (including analysis of covariance and multiple analyses of variance and covariance) methods is reviewed, and unplanned and planned methods of evaluating differences between means are briefly described. Two advantages of using planned or a priori tests over unplanned or post hoc tests are presented. In…
Evaluation of an Efficient Method for Training Staff to Implement Stimulus Preference Assessments
ERIC Educational Resources Information Center
Roscoe, Eileen M.; Fisher, Wayne W.
2008-01-01
We used a brief training procedure that incorporated feedback and role-play practice to train staff members to conduct stimulus preference assessments, and we used group-comparison methods to evaluate the effects of training. Staff members were trained to implement the multiple-stimulus-without-replacement assessment in a single session and the…
Comparing Methods for Assessing Forest Soil Net Nitrogen Mineralization and Net Nitrification
S. S. Jefts; I. J. Fernandez; L.E. Rustad; D. B. Dail
2004-01-01
A variety of analytical techniques are used to evaluate rates of nitrogen (N) mineralization and nitrification in soils. The diversity of methods takes on added significance in forest ecosystem research where high soil heterogeneity and multiple soil horizons can make comparisons over time and space even more complex than in agricultural Ap horizons. This study...
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
Multiple comparisons permutation test for image based data mining in radiotherapy.
Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel
2013-12-23
Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
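A minimal sketch of a max-statistic permutation test of this kind follows. It uses a mean-difference maximum as the voxel summary, whereas the paper's Tmax may be defined differently, and the data shapes and group sizes are assumptions.

```python
# Minimal sketch of a max-statistic (Tmax) permutation test over voxel-wise
# dose differences between two outcome groups; data here are synthetic.
import numpy as np

def tmax_permutation_test(a, b, n_perm=2000, rng=None):
    """a: (n_a, n_vox), b: (n_b, n_vox). Returns observed Tmax and adjusted p."""
    rng = rng or np.random.default_rng(0)
    pooled = np.vstack([a, b])
    n_a = len(a)

    def tmax(x, y):
        # Max absolute group mean difference over all voxels: one number per
        # relabeling, which is what controls the familywise error.
        return np.max(np.abs(x.mean(0) - y.mean(0)))

    obs = tmax(a, b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if tmax(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)   # multiplicity-adjusted p-value

a = np.random.default_rng(1).normal(0.0, 1.0, size=(20, 500))
b = np.random.default_rng(2).normal(0.3, 1.0, size=(25, 500))
print(tmax_permutation_test(a, b, n_perm=500))
```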
Levecke, Bruno; Behnke, Jerzy M.; Ajjampur, Sitara S. R.; Albonico, Marco; Ame, Shaali M.; Charlier, Johannes; Geiger, Stefan M.; Hoa, Nguyen T. V.; Kamwa Ngassam, Romuald I.; Kotze, Andrew C.; McCarthy, James S.; Montresor, Antonio; Periago, Maria V.; Roy, Sheela; Tchuem Tchuenté, Louis-Albert; Thach, D. T. C.; Vercruysse, Jozef
2011-01-01
Background The Kato-Katz thick smear (Kato-Katz) is the diagnostic method recommended for monitoring large-scale treatment programs implemented for the control of soil-transmitted helminths (STH) in public health, yet it is difficult to standardize. A promising alternative is the McMaster egg counting method (McMaster), commonly used in veterinary parasitology, but rarely so for the detection of STH in human stool. Methodology/Principal Findings The Kato-Katz and McMaster methods were compared for the detection of STH in 1,543 subjects resident in five countries across Africa, Asia and South America. The consistency of the performance of both methods in different trials, the validity of the fixed multiplication factor employed in the Kato-Katz method and the accuracy of these methods for estimating ‘true’ drug efficacies were assessed. The Kato-Katz method detected significantly more Ascaris lumbricoides infections (88.1% vs. 75.6%, p<0.001), whereas the difference in sensitivity between the two methods was non-significant for hookworm (78.3% vs. 72.4%) and Trichuris trichiura (82.6% vs. 80.3%). The sensitivity of the methods varied significantly across trials and magnitude of fecal egg counts (FEC). Quantitative comparison revealed a significant correlation (Rs >0.32) in FEC between both methods, and indicated no significant difference in FEC, except for A. lumbricoides, where the Kato-Katz resulted in significantly higher FEC (14,197 eggs per gram of stool (EPG) vs. 5,982 EPG). For the Kato-Katz, the fixed multiplication factor resulted in significantly higher FEC than the multiplication factor adjusted for mass of feces examined for A. lumbricoides (16,538 EPG vs. 15,396 EPG) and T. trichiura (1,490 EPG vs. 1,363 EPG), but not for hookworm. The McMaster provided more accurate efficacy results (absolute difference to ‘true’ drug efficacy: 1.7% vs. 4.5%). Conclusions/Significance The McMaster is an alternative method for monitoring large-scale treatment programs. It is a robust (accurate multiplication factor) and accurate (reliable efficacy results) method, which can be easily standardized. PMID:21695104
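The fecal egg count arithmetic at issue is simple; the sketch below uses the commonly cited fixed multiplication factors (24 for Kato-Katz, 50 for McMaster) purely as an illustration, together with the usual percentage-reduction definition of drug efficacy.

```python
# Sketch of fecal egg count (FEC) arithmetic. The multiplication factors shown
# are commonly cited values used only for illustration; the paper's point is
# that the fixed Kato-Katz factor can bias EPG when fecal mass varies.
def epg(egg_count, multiplication_factor):
    """Eggs per gram of stool from a slide/chamber count."""
    return egg_count * multiplication_factor

def egg_reduction_rate(mean_epg_baseline, mean_epg_followup):
    """Drug efficacy as percentage reduction in mean FEC."""
    return 100.0 * (1.0 - mean_epg_followup / mean_epg_baseline)

print(epg(35, 24))                       # Kato-Katz: 35 eggs -> 840 EPG
print(epg(17, 50))                       # McMaster: 17 eggs -> 850 EPG
print(egg_reduction_rate(840.0, 42.0))   # 95.0 % efficacy
```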
Van Sanden, Suzy; Ito, Tetsuro; Diels, Joris; Vogel, Martin; Belch, Andrew; Oriol, Albert
2018-03-01
Daratumumab (a human CD38-directed monoclonal antibody) and pomalidomide (an immunomodulatory drug) plus dexamethasone are both relatively new treatment options for patients with heavily pretreated multiple myeloma. A matching adjusted indirect comparison (MAIC) was used to compare absolute treatment effects of daratumumab versus pomalidomide + low-dose dexamethasone (LoDex; 40 mg) on overall survival (OS), while adjusting for differences between the trial populations. The MAIC method reduces the risk of bias associated with naïve indirect comparisons. Data from 148 patients receiving daratumumab (16 mg/kg), pooled from the GEN501 and SIRIUS studies, were compared separately with data from patients receiving pomalidomide + LoDex in the MM-003 and STRATUS studies. The MAIC-adjusted hazard ratio (HR) for OS of daratumumab versus pomalidomide + LoDex was 0.56 (95% confidence interval [CI], 0.38-0.83; p = .0041) for MM-003 and 0.51 (95% CI, 0.37-0.69; p < .0001) for STRATUS. The treatment benefit was even more pronounced when the daratumumab population was restricted to pomalidomide-naïve patients (MM-003: HR, 0.33; 95% CI, 0.17-0.66; p = .0017; STRATUS: HR, 0.41; 95% CI, 0.21-0.79; p = .0082). An additional analysis indicated a consistent trend of the OS benefit across subgroups based on M-protein level reduction (≥50%, ≥25%, and <25%). The MAIC results suggest that daratumumab improves OS compared with pomalidomide + LoDex in patients with heavily pretreated multiple myeloma. This matching adjusted indirect comparison of clinical trial data from four studies analyzes the survival outcomes of patients with heavily pretreated, relapsed/refractory multiple myeloma who received either daratumumab monotherapy or pomalidomide plus low-dose dexamethasone. Using this method, daratumumab conferred a significant overall survival benefit compared with pomalidomide plus low-dose dexamethasone. In the absence of head-to-head trials, these indirect comparisons provide useful insights to clinicians and reimbursement authorities around the relative efficacy of treatments. © AlphaMed Press 2017.
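At its core, a MAIC is a method-of-moments weighting step: individual patient data are weighted so covariate means match the comparator trial's reported means. The sketch below, on synthetic data, follows the standard centering trick from the MAIC literature; the covariates and all numbers are illustrative, not the study's data.

```python
# Sketch of the method-of-moments weighting behind a MAIC: weight IPD so that
# weighted covariate means equal the comparator trial's aggregate means.
# Centering the covariates makes sum(exp(Xc @ alpha)) a convex objective whose
# gradient vanishes exactly when the weighted means match the target.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(148, 3))               # IPD covariates (illustrative)
target_means = np.array([0.2, -0.1, 0.3])   # means reported by the other trial

Xc = X - target_means                       # center at the target means

def q(alpha):
    return np.exp(Xc @ alpha).sum()

alpha = minimize(q, np.zeros(3), method="BFGS").x
w = np.exp(Xc @ alpha)                      # patient weights
weighted_means = (w[:, None] * X).sum(0) / w.sum()
print(np.allclose(weighted_means, target_means, atol=1e-3))  # True
# The weighted IPD arm is then compared against the aggregate outcomes of the
# comparator trial (e.g., via a weighted Cox model for OS).
```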
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovacik, Meric A.; Androulakis, Ioannis P.
2013-09-15
Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to the reaction alignment method. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy.
NASA Astrophysics Data System (ADS)
Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni
2006-10-01
In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function (PSF) are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.
Alvin H. Yu; Garry Chick
2010-01-01
This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...
Performance Comparison of Superresolution Array Processing Algorithms. Revised
1998-06-15
... plane waves is finite is the MUSIC algorithm [16]. MUSIC, which denotes Multiple Signal Classification, is an extension of the method of Pisarenko [18]. MUSIC is but one member of a class of methods based upon the decomposition of covariance data into eigenvectors and eigenvalues. Such techniques ... relative to the classical methods; however, results for MUSIC are included in this report. All of the techniques reviewed have application to ...
Mulsow, Jason; Finneran, James J; Houser, Dorian S
2011-04-01
Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
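The Bayesian information criterion mentioned here trades fit against parameter count via BIC = k ln(n) - 2 ln(L); a tiny sketch with made-up log-likelihoods:

```python
# Sketch of the Bayesian information criterion (BIC) used to compare models
# with different numbers of free parameters; the numbers are invented.
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# A more complex model must buy its extra parameters with a better fit:
print(bic(log_likelihood=-512.3, n_params=4, n_obs=240))  # simpler model
print(bic(log_likelihood=-510.9, n_params=7, n_obs=240))  # penalized more
# The model with the lower BIC is preferred.
```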
Alonso, Joan Francesc; Romero, Sergio; Mañanas, Miguel Ángel; Rojas, Mónica; Riba, Jordi; Barbanoj, Manel José
2015-10-01
The identification of the brain regions involved in neuropharmacological action is a potentially useful procedure for drug development. These regions are commonly determined by the voxels showing significant statistical differences when placebo-induced effects are compared with drug-elicited effects. LORETA is an electroencephalography (EEG) source imaging technique frequently used to identify brain structures affected by a drug. The aim of the present study was to evaluate different methods for the correction of multiple comparisons in LORETA maps. These methods, which have been commonly used in neuroimaging and in simulation studies, were applied to a real pharmaco-EEG study in which the effects of increasing benzodiazepine doses on the central nervous system, as measured by LORETA, were investigated. Data consisted of EEG recordings obtained from nine volunteers who received single oral doses of alprazolam 0.25, 0.5, and 1 mg, and placebo in a randomized crossover double-blind design. The identification of active regions was highly dependent on the selected multiple test correction procedure. The combined criteria approach known as cluster mass was useful to reveal that increasing drug doses led to higher intensity and spread of the pharmacologically induced changes in intracerebral current density.
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
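For contrast with the exact approach, here is a sketch of the usual large-sample normal approximation for a pairwise Friedman rank sum difference; the variance n*k*(k+1)/6 is the standard one for a difference of two rank sums over n blocks of k treatments.

```python
# Sketch of the large-sample (normal) approximation for pairwise comparison of
# Friedman rank sums, the kind of approximation the exact method improves on.
import numpy as np
from scipy.stats import norm

def friedman_pairwise_p(rank_sum_i, rank_sum_j, n, k):
    """Two-sided approximate p-value for a rank sum difference, before any
    multiplicity correction; n blocks (data sets), k treatments (classifiers)."""
    d = rank_sum_i - rank_sum_j
    se = np.sqrt(n * k * (k + 1) / 6.0)   # variance of a rank sum difference
    return 2.0 * norm.sf(abs(d) / se)

# Example: 4 classifiers compared over 20 data sets.
print(friedman_pairwise_p(62, 41, n=20, k=4))
```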
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
An Empirical Comparison of Five Linear Equating Methods for the NEAT Design
ERIC Educational Resources Information Center
Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.
2009-01-01
In this study, a database containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
ERIC Educational Resources Information Center
Novosel, Leslie C.
2012-01-01
Employing multiple methods, including a comparison group pre/posttest design and student interviews and self-reflections, this study represents an initial attempt to investigate the efficacy of a social and emotional learning self-regulation strategy relative to the general reading ability, reading self-concept, and social and emotional well-being…
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task involving multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices on a conflict detection task with multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) determining whether experts' scanpaths can be used to teach novices. To compare experts' and novices' scanpaths, two methods were developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets in the preliminary experiment, but proved inapplicable to the realistic case of tens of aircraft on screen; MTAHC, by contrast, was effective with a large number of aircraft. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths differ. The MTAHC results explicitly showed that experts visually grouped multiple aircraft by similar altitude, whereas novices tended to group them by convergence. The MTAHC results also showed that novices paid much attention to converging aircraft groups even when the aircraft were safely separated by altitude; consequently, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis revealed these scanpath differences, experts' scanpaths were shown to novices to assess their usefulness for training. The scanpath treatment group showed indications of shifting from trajectory-based to altitude-based visual movements. Between the treatment and non-treatment groups there were no significant differences in the number of correct detections; however, the treatment group made significantly fewer false alarms.
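A minimal sketch of the Mantel test used for the matrix comparisons, assuming two same-sized symmetric dissimilarity matrices (e.g., built from scanpath transitions; how those matrices are constructed is not shown here):

```python
import numpy as np

def mantel(d1, d2, n_perm=9999, seed=0):
    # Correlate the upper triangles of two distance matrices, then build a
    # null distribution by jointly permuting rows and columns of one matrix.
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d1))
        perm = d1[np.ix_(p, p)]
        if abs(np.corrcoef(perm[iu], d2[iu])[0, 1]) >= abs(obs):
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)
```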
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability theory, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals involved. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand, and network B is used to simulate the original (antiderivative) function. Through the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
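A one-dimensional sketch of the dual-network idea using automatic differentiation (PyTorch assumed; the paper addresses multiple integrals, but the derivative relationship between the two networks is the same). The antiderivative network B is trained so that its derivative, playing the role of network A, matches samples of the integrand:

```python
import torch

B = torch.nn.Sequential(                      # "network B": antiderivative
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
f = lambda x: torch.exp(-x ** 2)              # integrand to be learned
opt = torch.optim.Adam(B.parameters(), lr=1e-2)

for _ in range(2000):
    x = torch.rand(256, 1) * 2.0              # sample the domain [0, 2]
    x.requires_grad_(True)
    # dB/dx acts as "network A" and is fit to the integrand samples.
    A = torch.autograd.grad(B(x).sum(), x, create_graph=True)[0]
    loss = ((A - f(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

a, b = torch.zeros(1, 1), 2.0 * torch.ones(1, 1)
print((B(b) - B(a)).item())                   # definite-integral estimate
```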
Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong
2018-04-30
Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new Gaussian graphical model for joint reconstruction of multiple gene regulatory networks (JRmGRN) that highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated this as a convex optimization problem and solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of the theorem, derivation of the algorithm, and supplementary data are available at Bioinformatics online.
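The paper's joint multi-condition penalty is beyond a short sketch, but the single-condition building block, a sparse Gaussian graphical model, can be illustrated with scikit-learn's graphical lasso; the data and threshold below are hypothetical:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 15))            # stand-in: samples x genes

# Nonzero off-diagonal entries of the sparse precision matrix are candidate
# regulatory edges; genes with many edges are candidate hubs.
model = GraphicalLasso(alpha=0.2).fit(X)
edges = np.abs(model.precision_) > 1e-6
np.fill_diagonal(edges, False)
print(edges.sum(axis=0))                 # per-gene edge counts (hub-ness)
```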
The computational complexity of elliptic curve integer sub-decomposition (ISD) method
NASA Astrophysics Data System (ADS)
Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza
2014-07-01
The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) serves as the foundation for a new procedure to compute elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n, using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of the values k1 and k2 that are not bounded by ±C√n yields new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful scalar-multiplication computations increases with the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of the operations involved. These operations include elliptic curve operations and finite field operations.
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro
2015-04-01
The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measurements of source terms, and the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One way to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple data sets: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h over the whole domain, so the first step in their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) over the whole aquifer. DSM is intrinsically based on the use of multiple data sets, which permits writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended in this work to handle multiple data sets. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient and that obtained with a comparison model, which shares the same boundary conditions and source terms as the model to be calibrated but uses a tentative T field. The DCM algorithm, on the other hand, applies the ratio of the hydraulic gradients obtained from two different forward models: one with the same boundary conditions and source terms as the model to be calibrated, and the other with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested, and their characteristics and results compared, on a field data set provided by Prof. Fritz Stauffer (ETH), corresponding to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available, corresponding to the undisturbed state, to the flow field created by a single pumping well, and to the situation created by a 'hydraulic dipole', i.e., an extraction well and an injection well. These data sets make it possible to test the three inverse methods and the different options available for their use.
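A minimal one-dimensional sketch of a CMM-style multiplicative update, under simplifying assumptions (steady 1D flow, gradients evaluated on cell interfaces); the orientation of the ratio shown is the one that keeps the Darcy flux consistent, and the names are illustrative rather than the authors' code:

```python
import numpy as np

def cmm_update(T_tentative, h_comparison, h_observed, x):
    # Darcy's law with a common flux implies
    # T_true * grad(h_obs) = T_tentative * grad(h_cm),
    # so the tentative T is scaled by the gradient ratio.
    grad_cm = np.diff(h_comparison) / np.diff(x)
    grad_obs = np.diff(h_observed) / np.diff(x)
    return T_tentative * grad_cm / grad_obs
```

In the full method this update is applied separately per data set, and the resulting T fields are then merged with one of the statistics listed above (mean, median, etc.).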
Taubitz, Jörg; Lüning, Ulrich; Grotemeyer, Jürgen
2004-11-07
Resonance enhanced multi-photon ionization-reflectron time of flight mass spectrometry is the analytical method of choice to observe hydrogen bonded supramolecules in the gas phase when protonation of basic centers competes with cluster formation.
Why We (Usually) Don't Have to Worry about Multiple Comparisons
ERIC Educational Resources Information Center
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao
2012-01-01
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
NASA Astrophysics Data System (ADS)
Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei
2015-12-01
Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
Fitting Multimeric Protein Complexes into Electron Microscopy Maps Using 3D Zernike Descriptors
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2012-01-01
A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root mean square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases. PMID:22417139
NASA Technical Reports Server (NTRS)
Stone, J. R.
1976-01-01
It was demonstrated that static and in-flight jet engine exhaust noise can be predicted with reasonable accuracy when the multiple-source nature of the problem is taken into account. Jet mixing noise was predicted from the interim prediction method. Provisional methods of estimating internally generated noise and shock noise flight effects were used, based partly on existing prediction methods and partly on recently reported engine data.
An Efficient Method for Verifying Gyrokinetic Microstability Codes
NASA Astrophysics Data System (ADS)
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
Weinberg, W A; McLean, A; Snider, R L; Rintelmann, J W; Brumback, R A
1989-12-01
Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then for each question selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then for each question reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a "bypass approach" to education of poor readers in which testing is accomplished using an orally read multiple-choice format can enhance the child's school performance on reading-related tasks. Using a listening while reading input method may further enhance performance.
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-05-15
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks, as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant analysis (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest.
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
NASA Astrophysics Data System (ADS)
Brankov, Elvira
This thesis presents a methodology for examining the relationship between synoptic-scale atmospheric transport patterns and observed pollutant concentration levels. It involves calculating a large number of back-trajectories from the observational site and subjecting them to cluster analysis. The pollutant concentration data observed at that site are then segregated according to the back-trajectory clusters. If the pollutant observations extend over several seasons, it is important to filter out seasonal and long-term components from the time series before cluster segregation, because only the short-term component of the time series is related to synoptic-scale transport. Multiple comparison procedures are used to test for significant differences in the chemical composition of the pollutant data associated with each cluster. This procedure is useful for indicating potential pollutant source regions and isolating meteorological regimes associated with pollutant transport from those regions. If many observational sites are available, the spatial and temporal scales of pollution transport from a given direction can be extracted through time-lagged inter-site correlation analysis of pollutant concentrations. The proposed methodology is applicable to any pollutant at any site if a sufficiently abundant data set is available. This is illustrated through examination of five-year-long time series of ozone concentrations at several sites in the Northeast. The results provide evidence of ozone transport to these sites, revealing the characteristic spatial and temporal scales involved in the transport and identifying source regions for this pollutant. Problems related to statistical analyses of censored data are addressed in the second half of this thesis. Although censoring (reporting concentrations in a non-quantitative way) is typical for trace-level measurements, methods for statistical analysis, inference and interpretation of such data are complex and still under development. In this study, multiple comparison of censored data sets was required in order to examine the influence of synoptic-scale circulations on concentration levels of several trace-level toxic pollutants observed in the Northeast (e.g., As, Se, Mn, V, etc.). Since the traditional multiple comparison procedures are not readily applicable to such data sets, a Monte Carlo simulation study was performed to assess several nonparametric methods for multiple comparison of censored data sets. Application of an appropriate comparison procedure to clusters of toxic trace elements observed in the Northeast led to the identification of potential source regions and atmospheric patterns associated with the long-range transport of these pollutants. A method for comparison of proportions, together with elemental ratio calculations, was used to confirm or clarify these inferences with a greater degree of confidence.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing the computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed and successful method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure, in combination with an optimization method, to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
Skeleton-based human action recognition using multiple sequence alignment
NASA Astrophysics Data System (ADS)
Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong
2015-05-01
Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets) and then labels the actionlets with letters of the English alphabet according to their Davies-Bouldin index values. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order in which the actionlets occur. Finally, we employ sequence comparison to classify multiple actions using the Needleman-Wunsch string-matching algorithm. The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments of the proposed method on three challenging 3D action datasets show promising results.
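A minimal sketch of Needleman-Wunsch global alignment on symbolic actionlet sequences; the scoring values and example strings are hypothetical:

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment score between two actionlet strings;
    # higher scores mean the action sequences are more similar.
    m, n = len(a), len(b)
    F = np.zeros((m + 1, n + 1))
    F[:, 0] = gap * np.arange(m + 1)
    F[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,   # (mis)match
                          F[i - 1, j] + gap,     # gap in b
                          F[i, j - 1] + gap)     # gap in a
    return F[m, n]

print(needleman_wunsch("ABCD", "ABD"))   # hypothetical actionlet sequences
```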
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Multiple comparisons permutation test for image based data mining in radiotherapy
2013-01-01
Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy. PMID:24365155
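A minimal sketch of the maximum-statistic permutation idea described above, using a plain mean difference per voxel as the stand-in statistic; the array names and shapes are hypothetical:

```python
import numpy as np

def tmax_permutation(imgs_a, imgs_b, n_perm=2000, seed=0):
    # imgs_a, imgs_b: patients x voxels arrays for the two outcome groups.
    rng = np.random.default_rng(seed)
    data = np.vstack([imgs_a, imgs_b])
    labels = np.r_[np.ones(len(imgs_a)), np.zeros(len(imgs_b))].astype(bool)
    stat = lambda lab: data[lab].mean(0) - data[~lab].mean(0)
    obs = stat(labels)
    # The null distribution of the maximum over all voxels controls FWER.
    tmax = np.array([np.abs(stat(rng.permutation(labels))).max()
                     for _ in range(n_perm)])
    p_adj = (1 + (tmax[:, None] >= np.abs(obs)[None, :]).sum(0)) / (n_perm + 1)
    return obs, p_adj
```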
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R
2016-12-01
The huge number of voxels in fMRI over time poses a major challenge for effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address this, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification, applicable across multiple subjects. The fuzzy integral (FI) approach is employed as an efficient tool for combining the different classifiers. The FI approach led to the development of a classifier-ensemble technique that performs better than any single classifier by reducing misclassification, bias, and variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. Comparison of the ensemble's performance with that of the individual neural networks strongly points toward the usefulness of the proposed method.
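A minimal stand-in for the ensemble idea, combining differently configured networks by soft voting (scikit-learn assumed; the paper's combiner is the fuzzy integral, which is not reproduced here, and the data are simulated placeholders for voxel features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in data for voxel features and cognitive-state labels.
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Three differently sized networks, combined by averaging probabilities.
ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(h,),
                                          max_iter=2000, random_state=i))
                for i, h in enumerate((32, 64, 128))],
    voting="soft")
print(cross_val_score(ensemble, X, y, cv=5).mean())
```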
Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error
ERIC Educational Resources Information Center
González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén
2015-01-01
An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…
The Challenges of Measuring Glycemic Variability
Rodbard, David
2012-01-01
This commentary reviews several of the challenges encountered when attempting to quantify glycemic variability and correlate it with risk of diabetes complications. These challenges include (1) immaturity of the field, including problems of data accuracy, precision, reliability, cost, and availability; (2) larger relative error in the estimates of glycemic variability than in the estimates of the mean glucose; (3) high correlation between glycemic variability and mean glucose level; (4) multiplicity of measures; (5) correlation of the multiple measures; (6) duplication or reinvention of methods; (7) confusion of measures of glycemic variability with measures of quality of glycemic control; (8) the problem of multiple comparisons when assessing relationships among multiple measures of variability and multiple clinical end points; and (9) differing needs for routine clinical practice and clinical research applications. PMID:22768904
Czerwiński, M; Mroczka, J; Girasole, T; Gouesbet, G; Gréhan, G
2001-03-20
Our aim is to present a method of predicting light transmittances through dense three-dimensional layered media. A hybrid method is introduced as a combination of the four-flux method with coefficients predicted from a Monte Carlo statistical model to take into account the actual three-dimensional geometry of the problem under study. We present the principles of the hybrid method, some exemplifying results of numerical simulations, and their comparison with results obtained from Bouguer-Lambert-Beer law and from Monte Carlo simulations.
ERIC Educational Resources Information Center
Kapes, Jerome T.; And Others
Three multiple regression analysis (MRA) models (single equation, commonality analysis, and path analysis) were applied to longitudinal data from the Pennsylvania Vocational Development Study. Variables influencing the weekly income of vocational education students one year after high school graduation were examined: grade point averages (grades…
A Comparison of Three Tests of Mediation
ERIC Educational Resources Information Center
Warbasse, Rosalia E.
2009-01-01
A simulation study was conducted to evaluate the performance of three tests of mediation: the bias-corrected and accelerated bootstrap (Efron & Tibshirani, 1993), the asymmetric confidence limits test (MacKinnon, 2008), and a multiple regression approach described by Kenny, Kashy, and Bolger (1998). The evolution of these methods is reviewed and…
ERIC Educational Resources Information Center
Markle, Ross Edward
2010-01-01
The impact of socioeconomic status (SES) on educational outcomes has been widely demonstrated in the fields of sociology, psychology, and educational research. Across these fields however, measurement models of SES vary, including single indicators (parental income, education, and occupation), multiple indicators, hierarchical models, and most…
Interteaching and Lecture: A Comparison of Long-Term Recognition Memory
ERIC Educational Resources Information Center
Saville, Bryan K.; Bureau, Alex; Eckenrode, Claire; Fullerton, Alison; Herbert, Reanna; Maley, Michelle; Porter, Allen; Zombakis, Julie
2014-01-01
Although a number of studies suggest that interteaching is an effective alternative to traditional teaching methods, no studies have systematically examined whether interteaching improves long-term memory. In this study, we assigned students to different teaching conditions--interteaching, lecture, or control--and then gave them a multiple-choice…
Long-Term Effect of Prefrontal Lobotomy on Verbal Fluency in Patients with Schizophrenia
ERIC Educational Resources Information Center
Stip, Emmanuel; Bigras, Marie-Josee.; Mancini-Marie, Adham; Cosset, Marie-Eve.; Black, Deborah; Lecours, Andre-Roch
2004-01-01
Objective: This study investigated the long-term effects of bilateral prefrontal leukotomy on lexical abilities in schizophrenia subjects. Method: We compared performances of leukotomized (LSP), non-leukotomized schizophrenia patients (NLSP) and normal controls, using a test of verbal fluency. Multiple case and triple comparison design were…
A Comparison of Three Methods to Measure Percent Body Fat on Mentally Retarded Adults.
ERIC Educational Resources Information Center
Burkett, Lee N.; And Others
1994-01-01
Reports a study that compared three measures for determining percent body fat in mentally retarded adults (multiple skinfolds and circumference measurements, Infrared Interactance, and Bioelectrical Impedance). Results indicated the Bioelectrical Impedance Analyzer and Infrared Interactance Analyzer produced values for percent body fat that were…
Andrew D. Bower; Bryce A. Richardson; Valerie Hipkins; Regina Rochefort; Carol Aubry
2011-01-01
Analysis of "neutral" molecular markers and "adaptive" quantitative traits are common methods of assessing genetic diversity and population structure. Molecular markers typically reflect the effects of demographic and stochastic processes but are generally assumed to not reflect natural selection. Conversely, quantitative (or "adaptive")...
ERIC Educational Resources Information Center
Zhou, P.; Ang, B. W.; Zhou, D. Q.
2010-01-01
Composite indicators (CIs) have increasingly been accepted as a useful tool for benchmarking, performance comparisons, policy analysis and public communication in many different fields. Several recent studies show that as a data aggregation technique in CI construction the weighted product (WP) method has some desirable properties. However, a…
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online.
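The standard trick for multiplicative bias is to work on the log scale, where it becomes additive; below is a minimal sketch using Tukey's median polish on a hypothetical plate matrix (this illustrates the general idea, not the authors' AssayCorrector algorithm):

```python
import numpy as np

def remove_mult_bias(plate, n_iter=10):
    # Median polish on log-intensities: alternately remove row and column
    # medians, then map back; the result approximates bias-corrected data.
    log_p = np.log(plate)
    for _ in range(n_iter):
        log_p -= np.median(log_p, axis=1, keepdims=True)  # row effects
        log_p -= np.median(log_p, axis=0, keepdims=True)  # column effects
    return np.exp(log_p)

plate = np.random.default_rng(1).lognormal(size=(16, 24))  # 384-well plate
corrected = remove_mult_bias(plate)
```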
Comparing Institution Nitrogen Footprints: Metrics for ...
When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate their nitrogen footprints using the nitrogen footprint tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This paper compares those seven institutions' results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, number of meals served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The results also found differences in the system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions' ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. This paper is being submitt
1972-01-01
The membrane methods described in Report 71 on the bacteriological examination of water supplies (Report, 1969) for the enumeration of coliform organisms and Escherichia coli in waters, together with a glutamate membrane method, were compared with the glutamate multiple tube method recommended in Report 71 and an incubation procedure similar to that used for membranes with the first 4 hr. at 30° C., and with MacConkey broth in multiple tubes. Although there were some differences between individual laboratories, the combined results from all participating laboratories showed that standard and extended membrane methods gave significantly higher results than the glutamate tube method for coliform organisms in both chlorinated and unchlorinated waters, but significantly lower results for Esch. coli with chlorinated waters and equivocal results with unchlorinated waters. Extended membranes gave higher results than glutamate tubes in larger proportions of samples than did standard membranes. Although transport membranes did not do so well as standard membrane methods, the results were usually in agreement with glutamate tubes except for Esch. coli in chlorinated waters. The glutamate membranes were unsatisfactory. Preliminary incubation of glutamate at 30° C. made little difference to the results. PMID:4567313
Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks
2014-01-01
Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226
ERIC Educational Resources Information Center
Noell, George H.; Gresham, Frank M.
2001-01-01
Describes design logic and potential uses of a variant of the multiple-baseline design. The multiple-baseline multiple-sequence (MBL-MS) consists of multiple-baseline designs that are interlaced with one another and include all possible sequences of treatments. The MBL-MS design appears to be primarily useful for comparison of treatments taking…
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
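For context, a sketch of the conservative Bonferroni adjustment the paper improves upon, alongside a Benjamini-Hochberg FDR adjustment (statsmodels assumed; the p-values are simulated, and neither adjustment is the paper's Poisson de-clumping method):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-SNP p-values: mostly null, plus a few strong signals.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=10_000), [1e-9, 5e-8, 3e-7]])

# Bonferroni ignores LD-induced correlation and is conservative;
# Benjamini-Hochberg targets the false discovery rate instead.
for method in ("bonferroni", "fdr_bh"):
    rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, rejected.sum())
```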
Chen, Xiwei; Yu, Jihnhee
2014-01-01
Many clinical and biomedical studies evaluate treatment effects based on multiple biomarkers that commonly consist of pre- and post-treatment measurements. Some biomarkers can show significant positive treatment effects, while other biomarkers can reflect no effects or even negative effects of the treatments, giving rise to a need for methodologies that correctly and efficiently evaluate treatment effects based on multiple biomarkers as a whole. In the setting of pre- and post-treatment measurements of multiple biomarkers, we propose to apply a receiver operating characteristic (ROC) curve methodology based on the combination of biomarkers that maximizes an area under the ROC curve (AUC)-type criterion among all possible linear combinations. In the particular case of independent pre- and post-treatment measurements, we show that the proposed method recovers the well-known result of Su and Liu (1993). Further, proceeding from the derived best combinations of biomarker measurements, we propose an efficient technique via likelihood ratio tests to compare treatment effects. An extensive Monte Carlo study confirms the superiority of the proposed test for comparing treatment effects based on multiple biomarkers in a paired data setting. For practical applications, the proposed method is illustrated with a randomized trial of chlorhexidine gluconate on oral bacterial pathogens in mechanically ventilated patients as well as a treatment study for children with attention deficit-hyperactivity disorder and severe mood dysregulation. PMID:25019920
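A minimal sketch of the Su and Liu (1993) best linear combination referenced above, assuming multivariate-normal biomarker vectors; the data are simulated placeholders:

```python
import numpy as np

def best_linear_combination(X0, X1):
    # Su and Liu (1993): under normality, the combination maximizing AUC
    # uses weights proportional to (S0 + S1)^{-1} (m1 - m0).
    m0, m1 = X0.mean(0), X1.mean(0)
    S0 = np.cov(X0, rowvar=False)
    S1 = np.cov(X1, rowvar=False)
    return np.linalg.solve(S0 + S1, m1 - m0)

def empirical_auc(s0, s1):
    # Mann-Whitney estimate of AUC from the combined scores.
    return (s0[:, None] < s1[None, :]).mean()

rng = np.random.default_rng(0)            # hypothetical biomarker data
X0 = rng.normal(0.0, 1.0, size=(60, 3))   # e.g., non-responders
X1 = rng.normal(0.6, 1.0, size=(60, 3))   # e.g., responders
w = best_linear_combination(X0, X1)
print(empirical_auc(X0 @ w, X1 @ w))
```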
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
MUSIC imaging method for electromagnetic inspection of composite multi-layers
NASA Astrophysics Data System (ADS)
Rodeghiero, Giacomo; Ding, Ping-Ping; Zhong, Yu; Lambert, Marc; Lesselier, Dominique
2015-03-01
A first-order asymptotic formulation of the electric field scattered by a small inclusion (with respect to the wavelength in dielectric regime or to the skin depth in conductive regime) embedded in composite material is given. It is validated by comparison with results obtained using a Method of Moments (MoM). A non-iterative MUltiple SIgnal Classification (MUSIC) imaging method is utilized in the same configuration to locate the position of small defects. The effectiveness of the imaging algorithm is illustrated through some numerical examples.
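A sketch of the generic MUSIC pseudospectrum computation; the paper pairs this with the asymptotic field model and Green's functions for the layered composite, which are not reproduced here, so the names and inputs below are illustrative:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    # R: sensors x sensors data covariance; steering: sensors x candidates.
    # Project candidate steering vectors onto the noise subspace of R;
    # peaks of the returned pseudospectrum indicate defect locations.
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]           # noise-subspace eigenvectors
    proj = np.abs(En.conj().T @ steering) ** 2
    return 1.0 / proj.sum(axis=0)
```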
McKay, Fiona; Schibeci, Stephen; Heard, Robert; Stewart, Graeme; Booth, David
2006-03-20
Persistent high-titre neutralizing antibodies (NAB) to therapeutic interferon-beta (IFNbeta) in multiple sclerosis patients reduce therapeutic efficacy. Difficulties in standardization of cell-based bioactivity assays have hindered interlaboratory comparison of NAB titres and the determination of a clinically relevant definition of seropositivity. We determined NAB status in Australasian multiple sclerosis patients receiving IFNbeta using both the antiviral cytopathic effect (CPE) assay (n = 227) and the more specific ELISA for the type I interferon-inducible MxA protein (n = 350). While the log(10) titres determined in the two assays were highly correlated (p < 0.0001; r = 0.967) with similar distributions, the MxA assay was more sensitive, detecting lower concentrations of NAB than the CPE assay. The range of titres determined in the CPE assay was 10 to >7290; and 9 to 53,700 in the MxA assay, with ranked titre distribution highlighting the arbitrary nature of currently accepted definitions of NAB seropositivity. Bioactivity of injected IFNbeta was significantly reduced in NAB-positive patients (p = 0.006; NAB MxA titres = 184 to 5340) compared to NAB-negative patients as assessed ex vivo using real-time RT-PCR analysis of MxA gene induction. The range of MxA mRNA levels in healthy controls was remarkably consistent with previously published results, regardless of the assay standardization method [Gilli, F., Sala, A., Marnetto, F., Lindberg, R.L., Leppert, D. and Bertolotto, A. (2003) Comparison of IFNbeta bioavailability evaluations by MxA mRNA using two independent quantification methods. Abstract, ECTRIMS Meeting, Milan, Italy; Pachner, A., Narayan, K., Price, N., Hurd, M. and Dail, D. (2003a) MxA Gene Expression Analysis as an Interferon-beta Bioactivity Measurement in Patients with Multiple Sclerosis and the Identification of Antibody-Mediated Decreased Bioactivity. Mol. Diagn. 7, 17-25]. Assessment of IFNbeta response ex vivo accounts for both circulating factors and the cellular response to IFNbeta, and the data support the development of the MxA gene induction assay for the routine screening of patients receiving IFNbeta.
Double-multiple streamtube model for Darrieus wind turbines
NASA Technical Reports Server (NTRS)
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
Building Diversified Multiple Trees for classification in high dimensional noisy biomedical data.
Li, Jiuyong; Liu, Lin; Liu, Jixue; Green, Ryan
2017-12-01
It is common that a trained classification model is applied to operating data that deviates from the training data because of noise. This paper tests an ensemble method, Diversified Multiple Tree (DMT), on its capability for classifying instances in a new laboratory using a classifier built on the instances of another laboratory. DMT is tested on three real-world biomedical data sets from different laboratories in comparison with four benchmark ensemble methods: AdaBoost, Bagging, Random Forests, and Random Trees. Experiments have also been conducted to study the limitations of DMT and its possible variations. Experimental results show that DMT is significantly more accurate than the other benchmark ensemble classifiers at classifying new instances from a laboratory different from the one whose instances were used to build the classifier. This paper demonstrates that an ensemble classifier, DMT, is more robust in classifying noisy data than other widely used ensemble methods. DMT works on data sets that support multiple simple trees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, P. A.; Van Wingerden, J. W.; Wright, A. D.
2012-01-01
In this paper we present results from an ongoing controller comparison study at the National Renewable Energy Laboratory's (NREL's) National Wind Technology Center (NWTC). The intention of the study is to demonstrate the advantage of using modern multivariable methods for designing control systems for wind turbines versus conventional approaches. We will demonstrate the advantages through field-test results from experimental turbines located at the NWTC. At least two controllers are being developed side-by-side to meet an incrementally increasing number of turbine load-reduction objectives. The first, a multiple single-input, single-output (m-SISO) approach, uses separately developed, decoupled, and classically tuned controllers, which is, to the best of our knowledge, common practice in the wind industry. The remaining controllers are developed using state-space multiple-input and multiple-output (MIMO) techniques to explicitly account for coupling between loops and to optimize given known frequency structures of the turbine and disturbance. In this first publication from the study, we present the structure of the ongoing controller comparison experiment, the design process for the two controllers compared in this phase, and initial comparison results obtained in field-testing.
Incorporation of multiple cloud layers for ultraviolet radiation modeling studies
NASA Technical Reports Server (NTRS)
Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.
1994-01-01
Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure by approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and the multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of the biological effects of UV exposure increases, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
An evaluation of exact methods for the multiple subset maximum cardinality selection problem.
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2016-05-01
The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
ERIC Educational Resources Information Center
Zhou, P.; Ang, B. W.
2009-01-01
Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…
Semantic Variability and Word Comprehension. Educational Reports Umea, No. 17.
ERIC Educational Resources Information Center
Backman, Jarl
Swedes in four different age groups (9, 12, 15 and 18 years) judged written words which varied in three dimensions: syntactic category, objective frequency, and polysemy (multiple meaning). The subjects judged ease of comprehension of 24 words in a factorial arrangement. The method used was Thurstone's paired comparisons. A predicted complex…
ERIC Educational Resources Information Center
Mistler, Stephen A.; Enders, Craig K.
2017-01-01
Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…
A comparison of response confirmation techniques for an adjunctive self-study program.
ERIC Educational Resources Information Center
Meyer, Donald E.
An experiment compared the effectiveness of four methods of confirming responses to an adjunctive self-study program. The program was designed for Air Force aircrews undertaking a refresher course in engineering. A series of sequenced multiple choice questions each referred to a page and paragraph of a publication containing detailed information…
Few studies have addressed the efficacy of composite sampling for measurement of indicator bacteria by QPCR. In this study, composite results were compared to single sample results for culture- and QPCR-based water quality monitoring. Composite results for both methods ...
Measuring Social Support and School Belonging in Black/African American and White Children
ERIC Educational Resources Information Center
Wegmann, Kate M.
2017-01-01
Objective: To determine the suitability of the Elementary School Success Profile for Children (ESSP-C) for assessment and comparison of social support and school belonging between Black/African American and White students. Methods: Multiple-group confirmatory factor analysis and invariance testing were conducted to determine the ESSP-C's validity…
ERIC Educational Resources Information Center
Badeau, Ryan; White, Daniel R.; Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.
2017-01-01
The ability to solve physics problems that require multiple concepts from across the physics curriculum--"synthesis" problems--is often a goal of physics instruction. Three experiments were designed to evaluate the effectiveness of two instructional methods employing worked examples on student performance with synthesis problems; these…
Predicting End-of-Year Achievement Test Performance: A Comparison of Assessment Methods
ERIC Educational Resources Information Center
Kettler, Ryan J.; Elliott, Stephen N.; Kurz, Alexander; Zigmond, Naomi; Lemons, Christopher J.; Kloo, Amanda; Shrago, Jacqueline; Beddow, Peter A.; Williams, Leila; Bruen, Charles; Lupp, Lynda; Farmer, Jeanie; Mosiman, Melanie
2014-01-01
Motivated by the multiple-measures clause of recent federal policy regarding student eligibility for alternate assessments based on modified academic achievement standards (AA-MASs), this study examined how scores or combinations of scores from a diverse set of assessments predicted students' end-of-year proficiency status on statewide achievement…
A Comparison of Four Approaches to Account for Method Effects in Latent State-Trait Analyses
ERIC Educational Resources Information Center
Geiser, Christian; Lockhart, Ginger
2012-01-01
Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…
Ortiz, Genaro Gabriel; Macías-Islas, Miguel Ángel; Pacheco-Moisés, Fermín P.; Cruz-Ramos, José A.; Sustersik, Silvia; Barba, Elías Alejandro; Aguayo, Adriana
2009-01-01
Objective: To determine oxidative stress markers in serum from patients with relapsing-remitting multiple sclerosis. Methods: Blood samples were collected for quantification of oxidative stress markers from healthy controls and from 22 patients fulfilling the McDonald criteria and classified as having relapsing-remitting multiple sclerosis according to Lublin: 15 women (7 aged 20 to 30 years and 8 aged over 40 years) and 7 men (5 aged 20 to 30 years and 2 aged over 40 years). Results: Nitric oxide metabolites (nitrates/nitrites), lipid peroxidation products (malondialdehyde plus 4-hydroxyalkenals), and glutathione peroxidase activity were significantly increased in serum of subjects with relapsing-remitting multiple sclerosis in comparison with that of healthy controls. These data support the hypothesis that multiple sclerosis has a component closely linked to oxidative stress. PMID:19242067
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates
Hines, J.E.; Sauer, J.R.
1989-01-01
This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. This method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST will run on IBM-compatible computers and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
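The underlying statistic is a Wald-type chi-square on linear contrasts of the rates, W = (Cθ)ᵀ(CΣCᵀ)⁻¹(Cθ), referred to a chi-square distribution with rank(C) degrees of freedom. A minimal modern re-implementation of that test (a sketch of the procedure the manual describes, not the original program):

```python
import numpy as np
from scipy.stats import chi2

def contrast_test(theta, cov, C):
    """Chi-square test of H0: C @ theta = 0 for several rate estimates
    theta with variance-covariance matrix cov."""
    d = C @ theta
    W = d @ np.linalg.solve(C @ cov @ C.T, d)
    df = np.linalg.matrix_rank(C)
    return W, df, chi2.sf(W, df)

# e.g. test equality of three survival rate estimates:
theta = np.array([0.55, 0.62, 0.48])
cov = np.diag([0.002, 0.003, 0.0025])     # illustrative variances
C = np.array([[1, -1, 0], [0, 1, -1]])    # pairwise contrasts
print(contrast_test(theta, cov, C))
```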
Eckermann, Simon; Willan, Andrew R
2011-07-01
Multiple strategy comparisons in health technology assessment (HTA) are becoming increasingly important, with multiple alternative therapeutic actions, combinations of therapies and diagnostic and genetic testing alternatives. Comparison under uncertainty of incremental cost, effects and cost effectiveness across more than two strategies is conceptually and practically very different from that for two strategies, where all evidence can be summarized in a single bivariate distribution on the incremental cost-effectiveness plane. Alternative methods for comparing multiple strategies in HTA have been developed in (i) presenting cost and effects on the cost-disutility plane and (ii) summarizing evidence with multiple strategy cost-effectiveness acceptability (CEA) and expected net loss (ENL) curves and frontiers. However, critical questions remain for the analyst and decision maker of how these techniques can be best employed across multiple strategies to (i) inform clinical and cost inference in presenting evidence, and (ii) summarize evidence of cost effectiveness to inform societal reimbursement decisions where preferences may be risk neutral or somewhat risk averse under the Arrow-Lind theorem. We critically consider how evidence across multiple strategies can be best presented and summarized to inform inference and societal reimbursement decisions, given currently available methods. In the process, we make a number of important original findings. First, in presenting evidence for multiple strategies, the joint distribution of costs and effects on the cost-disutility plane with associated flexible comparators varying across replicates for cost and effect axes ensure full cost and effect inference. Such inference is usually confounded on the cost-effectiveness plane with comparison relative to a fixed origin and axes. Second, in summarizing evidence for risk-neutral societal decision making, ENL curves and frontiers are shown to have advantages over the CEA frontier in directly presenting differences in expected net benefit (ENB). The CEA frontier, while identifying strategies that maximize ENB, only presents their probability of maximizing net benefit (NB) and, hence, fails to explain why strategies maximize ENB at any given threshold value. Third, in summarizing evidence for somewhat risk-averse societal decision making, trade-offs between the strategy maximizing ENB and other potentially optimal strategies with higher probability of maximizing NB should be presented over discrete threshold values where they arise. However, the probabilities informing these trade-offs and associated discrete threshold value regions should be derived from bilateral CEA curves to prevent confounding by other strategies inherent in multiple strategy CEA curves. Based on these findings, a series of recommendations are made for best presenting and summarizing cost-effectiveness evidence for reimbursement decisions when comparing multiple strategies, which are contrasted with advice for comparing two strategies. Implications for joint research and reimbursement decisions are also discussed.
Solving the problem of comparing whole bacterial genomes across different sequencing platforms.
Kaas, Rolf S; Leekitcharoenphon, Pimlapas; Aarestrup, Frank M; Lund, Ole
2014-01-01
Whole genome sequencing (WGS) shows great potential for real-time monitoring and identification of infectious disease outbreaks. However, rapid and reliable comparison of data generated in multiple laboratories and using multiple technologies is essential. So far, studies have focused on using one technology, because each technology has a systematic bias that makes integration of data generated from different platforms difficult. We developed two different procedures for identifying variable sites and inferring phylogenies in WGS data across multiple platforms. The methods were evaluated on three bacterial data sets sequenced on three different platforms (Illumina, 454, Ion Torrent). We show that the methods are able to overcome the systematic biases caused by the sequencers and infer the expected phylogenies. It is concluded that the success of these new procedures is due to the validation of all informative sites included in the analysis. The procedures are available as web tools.
Quadrature rules with multiple nodes for evaluating integrals with strong singularities
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.; Spalevic, Miodrag M.
2006-05-01
We present a method based on the Chakalov-Popoviciu quadrature formula of Lobatto type, a rather general case of quadrature with multiple nodes, for approximating integrals defined by Cauchy principal values or by Hadamard finite parts. As a starting point we use the results obtained by L. Gori and E. Santi (cf. On the evaluation of Hilbert transforms by means of a particular class of Turan quadrature rules, Numer. Algorithms 10 (1995), 27-39; Quadrature rules based on s-orthogonal polynomials for evaluating integrals with strong singularities, Oberwolfach Proceedings: Applications and Computation of Orthogonal Polynomials, ISNM 131, Birkhauser, Basel, 1999, pp. 109-119). We generalize their results by using some of our numerical procedures for stable calculation of the quadrature formula with multiple nodes of Gaussian type and proposed methods for estimating the remainder term in such type of quadrature formulae. Numerical examples, illustrations and comparisons are also shown.
Comparing physiographic maps with different categorisations
NASA Astrophysics Data System (ADS)
Zawadzka, J.; Mayr, T.; Bellamy, P.; Corstanje, R.
2015-02-01
This paper addresses the need for a robust map comparison method suitable for finding similarities between thematic maps with different forms of categorisation. In our case, the requirement was to establish the information content of newly derived physiographic maps with regard to a set of reference maps for a study area in England and Wales. Physiographic maps were derived from the 90 m resolution SRTM DEM, using a suite of existing and new digital landform mapping methods, with the overarching purpose of enhancing the physiographic unit component of the Soil and Terrain database (SOTER). Reference maps were seven soil and landscape datasets mapped at scales ranging from 1:200,000 to 1:5,000,000. A review of commonly used statistical methods for categorical comparisons was performed and, of these, the Cramer's V statistic was identified as the most appropriate for comparison of maps with different legends. Interpretation of the multiple Cramer's V values resulting from one-by-one comparisons of the physiographic and baseline maps was facilitated by multi-dimensional scaling and calculation of average distances between the maps. The method allowed for finding similarities and dissimilarities amongst physiographic maps and baseline maps and informed the recommendation of the most suitable methodology for terrain analysis in the context of soil mapping.
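Cramer's V requires only the cross-tabulation of two co-registered categorical maps, which makes the one-by-one comparisons easy to reproduce; a minimal sketch, assuming the maps are supplied as flattened arrays of class labels:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(map_a, map_b):
    """Cramer's V between two co-registered categorical maps,
    given as flattened arrays of class labels."""
    cats_a, cats_b = np.unique(map_a), np.unique(map_b)
    # contingency table of co-occurring categories
    table = np.array([[np.sum((map_a == a) & (map_b == b)) for b in cats_b]
                      for a in cats_a])
    chi2_stat = chi2_contingency(table, correction=False)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2_stat / (n * k))   # 0 = independent, 1 = perfect association
```

Because V is symmetric and legend-independent, the pairwise values can be fed directly into multi-dimensional scaling, as the paper does.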
Relativistic scattered wave calculations on UF6
NASA Technical Reports Server (NTRS)
Case, D. A.; Yang, C. Y.
1980-01-01
Self-consistent Dirac-Slater multiple scattering calculations are presented for UF6. The results are compared critically to other relativistic calculations, showing that the results of all molecular orbital calculations are in qualitative agreement, as measured by energy levels, population analyses, and spin-orbit splittings. A detailed comparison is made to the relativistic X alpha (RX alpha) method of Wood and Boring, which also uses multiple scattering theory but incorporates relativistic effects in a more approximate fashion. For the most part, the RX alpha results are in agreement with the present results.
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N
2018-03-01
Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, where only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced by complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset), and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of which 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study, in comparison with multiple imputation, which included all 6117 patients. The two methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
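The two strategies are easy to contrast in code. Complete case analysis simply drops rows with any missing value, whereas multiple imputation fits the model on several stochastically completed datasets and pools the estimates with Rubin's rules. The sketch below uses scikit-learn's IterativeImputer and statsmodels as stand-ins for the authors' actual pipeline; `X` and `y` are a generic predictor matrix (containing NaNs) and a binary adverse-outcome vector:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

def pooled_logit(X, y, n_imputations=10):
    """Logistic regression on multiply imputed data, pooled with
    Rubin's rules (illustrative sketch, not the study's pipeline)."""
    coefs, variances = [], []
    for seed in range(n_imputations):
        imp = IterativeImputer(sample_posterior=True, random_state=seed)
        Xi = sm.add_constant(imp.fit_transform(X))
        fit = sm.Logit(y, Xi).fit(disp=0)
        coefs.append(fit.params)
        variances.append(fit.bse ** 2)
    q = np.mean(coefs, axis=0)             # pooled coefficient estimates
    u = np.mean(variances, axis=0)         # within-imputation variance
    b = np.var(coefs, axis=0, ddof=1)      # between-imputation variance
    t = u + (1 + 1 / n_imputations) * b    # Rubin's total variance
    return q, np.sqrt(t)                   # estimates and pooled SEs
```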
Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.
Kim, Eunwoo; Park, HyunWook
2017-02-01
The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
Comparison of traditional and interactive teaching methods in a UK emergency department.
Armstrong, Peter; Elliott, Tim; Ronald, Julie; Paterson, Brodie
2009-12-01
Didactic teaching remains a core component of undergraduate education, but developing computer assisted learning (CAL) packages may provide useful alternatives. We compared the effectiveness of interactive multimedia-based tutorials with traditional, lecture-based models for teaching arterial blood gas interpretation to fourth year medical students. Participants were randomized to complete a tutorial in either lecture or multimedia format containing identical content. Upon completion, students answered five multiple choice questions assessing post-tutorial knowledge and provided feedback on their allocated learning method. Marks revealed no significant difference between the two groups. All lecture candidates rated their teaching as good, compared with 89% of the CAL group. All CAL users found the multiple choice question assessment useful, compared with 83% of lecture participants. Both groups highlighted the importance of interaction. CAL complements other teaching methods, but should be seen as an adjunct to, rather than a replacement for, traditional methods, thus offering students a blended learning environment.
Advanced Image Processing for Defect Visualization in Infrared Thermography
NASA Technical Reports Server (NTRS)
Plotnikov, Yuri A.; Winfree, William P.
1997-01-01
Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization is presented. These comparisons are based on three-dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
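Among the similarity measures under evaluation, mutual information is the most compact to illustrate: it is estimated from the joint intensity histogram of the two co-registered images and is largest when one image's intensities predict the other's. A minimal sketch:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two co-registered images, estimated
    from their joint intensity histogram (a standard registration
    similarity measure)."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()                                # joint distribution
    px = p.sum(axis=1, keepdims=True)                    # marginal of img_a
    py = p.sum(axis=0, keepdims=True)                    # marginal of img_b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A registration loop would maximize this value over candidate transforms of one image.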
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method
Niks, Irene; Gevers, Josette
2018-01-01
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.
Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene
2018-02-13
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
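Of the compared strategies, neighbor replacement is the simplest to sketch: each missing voxel is filled from its observed spatial neighbors, whereas multiple imputation draws repeatedly from an informed sampling distribution and pools the analyses across imputations. An illustrative local-mean version using SciPy (not the authors' exact implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighbor_replace(vol, missing_mask, size=3):
    """Fill missing voxels with the mean of observed voxels inside a
    size**3 neighborhood; voxels with no observed neighbors stay NaN."""
    filled = np.where(missing_mask, 0.0, vol)
    # local fraction of observed voxels and local mean of filled values
    counts = uniform_filter((~missing_mask).astype(float), size=size)
    sums = uniform_filter(filled, size=size)
    # ratio = sum of observed neighbors / number of observed neighbors
    local_mean = np.where(counts > 0, sums / np.maximum(counts, 1e-12), np.nan)
    return np.where(missing_mask, local_mean, vol)
```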
Hadoop-MCC: Efficient Multiple Compound Comparison Algorithm Using Hadoop.
Hua, Guan-Jie; Hung, Che-Lun; Tang, Chuan Yi
2018-01-01
In the past decade, drug design technologies have improved enormously. Computer-aided drug design (CADD) has played an important role in analysis and prediction in drug development, making the procedure more economical and efficient. However, computation with big data, such as ZINC containing more than 60 million compounds and GDB-13 with more than 930 million small molecules, poses a noticeable time-consumption problem. Therefore, we propose a novel heterogeneous high performance computing method, named Hadoop-MCC, integrating Hadoop and GPU, to cope with big chemical structure data efficiently. Hadoop-MCC gains high availability and fault tolerance from Hadoop, as Hadoop is used to scatter input data to GPU devices and gather the results from them. The Hadoop framework adopts a mapper/reducer computation model. In the proposed method, mappers are responsible for fetching SMILES data segments and performing the LINGO method on GPU, and reducers then collect all comparison results produced by the mappers. Due to the high availability of Hadoop, all LINGO computational jobs on the mappers can be completed even if some mappers encounter problems. A LINGO comparison is performed on each GPU device in parallel. According to the experimental results, the proposed method on multiple GPU devices achieves better computational performance than CUDA-MCC on a single GPU device. Hadoop-MCC is able to achieve the scalability, high availability, and fault tolerance granted by Hadoop, as well as high performance, by integrating the computational power of both Hadoop and GPU. It has been shown that a heterogeneous architecture such as Hadoop-MCC can deliver better computational performance than a single GPU device. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
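The per-pair work inside each mapper is a LINGO similarity between two SMILES strings, which is easy to sketch in plain Python; the Hadoop streaming plumbing and the CUDA kernel are omitted here, and the multiset-Tanimoto form below is one common LINGO variant rather than necessarily the exact formula used by Hadoop-MCC:

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of overlapping q-character substrings (LINGOs) of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(smiles_a, smiles_b, q=4):
    """Multiset Tanimoto similarity over LINGO counts."""
    a, b = lingos(smiles_a, q), lingos(smiles_b, q)
    inter = sum(min(a[k], b[k]) for k in a)
    union = sum(a.values()) + sum(b.values()) - inter
    return inter / union if union else 0.0

def mapper(query, db_segment):
    """Mapper-style all-pairs comparison of one query against a data
    segment (the GPU kernel replaced by plain Python for illustration)."""
    return [(query, tgt, lingo_tanimoto(query, tgt)) for tgt in db_segment]
```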
Kling, Teresia; Johansson, Patrik; Sanchez, José; Marinescu, Voichita D.; Jörnsten, Rebecka; Nelander, Sven
2015-01-01
Statistical network modeling techniques are increasingly important tools to analyze cancer genomics data. However, current tools and resources are not designed to work across multiple diagnoses and technical platforms, thus limiting their applicability to comprehensive pan-cancer datasets such as The Cancer Genome Atlas (TCGA). To address this, we describe a new data driven modeling method, based on generalized Sparse Inverse Covariance Selection (SICS). The method integrates genetic, epigenetic and transcriptional data from multiple cancers, to define links that are present in multiple cancers, a subset of cancers, or a single cancer. It is shown to be statistically robust and effective at detecting direct pathway links in data from TCGA. To facilitate interpretation of the results, we introduce a publicly accessible tool (cancerlandscapes.org), in which the derived networks are explored as interactive web content, linked to several pathway and pharmacological databases. To evaluate the performance of the method, we constructed a model for eight TCGA cancers, using data from 3900 patients. The model rediscovered known mechanisms and contained interesting predictions. Possible applications include prediction of regulatory relationships, comparison of network modules across multiple forms of cancer and identification of drug targets. PMID:25953855
Methods for constraining fine structure constant evolution with OH microwave transitions.
Darling, Jeremy
2003-07-04
We investigate the constraints that OH microwave transitions in megamasers and molecular absorbers at cosmological distances may place on the evolution of the fine structure constant α = e²/ħc. The centimeter OH transitions are a combination of hyperfine splitting and lambda doubling that can constrain the cosmic evolution of α from a single species, avoiding systematic errors in α measurements from multiple species, which may have relative velocity offsets. The most promising method compares the 18 and 6 cm OH lines, includes a calibration of systematic errors, and offers multiple determinations of α in a single object. Comparisons of OH lines to the HI 21 cm line and CO rotational transitions also show promise.
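The constant in question is α = e²/ħc in Gaussian units; as a quick numeric check in SI units, where the same quantity reads α = e²/(4πε₀ħc):

```python
import math
from scipy.constants import e, hbar, c, epsilon_0

# SI form of the fine structure constant
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072974 and ~137.036
```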
Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny
2017-01-01
The two-dimensional difference gel electrophoresis method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. 2D-DIGE, compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), provides faster and more reliable gel matching, limiting the impact of gel to gel variation, and allows also a good dynamic range for quantitative comparisons. By the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure that we then apply to multiple biological questions.
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
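The BASIC listing is not reproduced here, but the same analysis takes a few lines today; a sketch with SciPy, using Bonferroni-corrected pairwise Mann-Whitney tests as the multiple-comparison step (the original program's post-hoc procedure may differ in detail):

```python
from scipy.stats import kruskal, mannwhitneyu

def kruskal_with_posthoc(groups, alpha=0.05):
    """Kruskal-Wallis test across several independent samples, followed
    by Bonferroni-corrected pairwise comparisons if the omnibus test
    is significant."""
    h, p = kruskal(*groups)
    results = {"H": h, "p": p, "pairwise": {}}
    if p < alpha:
        pairs = [(i, j) for i in range(len(groups))
                 for j in range(i + 1, len(groups))]
        for i, j in pairs:
            _, p_ij = mannwhitneyu(groups[i], groups[j],
                                   alternative="two-sided")
            # Bonferroni adjustment across all pairwise tests
            results["pairwise"][(i, j)] = min(1.0, p_ij * len(pairs))
    return results
```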
Quantifying properties of hot and dense QCD matter through systematic model-to-data comparison
Bernhard, Jonah E.; Marcy, Peter W.; Coleman-Smith, Christopher E.; ...
2015-05-22
We systematically compare an event-by-event heavy-ion collision model to data from the CERN Large Hadron Collider. Using a general Bayesian method, we probe multiple model parameters including fundamental quark-gluon plasma properties such as the specific shear viscosity η/s, calibrate the model to optimally reproduce experimental data, and extract quantitative constraints for all parameters simultaneously. Furthermore, the method is universal and easily extensible to other data and collision models.
Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications
2013-01-01
Background: Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods: To accurately model the EEG, a band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with a Kalman filter/smoother provides an accurate time-frequency decomposition of the band-limited signal. Results: The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions: Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
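The BMFLC-KF estimator is compact: the state vector holds the sine and cosine coefficients of a fixed grid of frequencies spanning the band, the observation row contains those sinusoids evaluated at the current sample, and a Kalman filter tracks the slowly varying coefficients. A minimal sketch assuming a random-walk state model and illustrative noise levels `q` and `r`:

```python
import numpy as np

def bmflc_kalman(y, freqs, fs, q=1e-4, r=1e-2):
    """Track Fourier coefficients of a band-limited signal y (sampled
    at fs Hz) over a fixed frequency grid with a scalar-measurement
    Kalman filter; returns the coefficient trajectories."""
    n = len(freqs)
    w = np.zeros(2 * n)                      # state: [a_1..a_n, b_1..b_n]
    P = np.eye(2 * n)
    Q = q * np.eye(2 * n)
    coeffs = np.zeros((len(y), 2 * n))
    for k, yk in enumerate(y):
        t = k / fs
        h = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])
        P = P + Q                            # predict (random-walk state)
        s = h @ P @ h + r                    # innovation variance
        K = P @ h / s                        # Kalman gain
        w = w + K * (yk - h @ w)             # measurement update
        P = P - np.outer(K, h @ P)
        coeffs[k] = w
    return coeffs   # amplitude sqrt(a_i^2 + b_i^2) gives the time-frequency map
```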
Elias, Andrew; Crayton, Samuel H; Warden-Rothman, Robert; Tsourkas, Andrew
2014-07-28
Given the rapidly expanding library of disease biomarkers and targeting agents, the number of unique targeted nanoparticles is growing exponentially. The high variability and expense of animal testing often makes it unfeasible to examine this large number of nanoparticles in vivo. This often leads to the investigation of a single formulation that performed best in vitro. However, nanoparticle performance in vivo depends on many variables, many of which cannot be adequately assessed with cell-based assays. To address this issue, we developed a lanthanide-doped nanoparticle method that allows quantitative comparison of multiple targeted nanoparticles simultaneously. Specifically, superparamagnetic iron oxide (SPIO) nanoparticles with different targeting ligands were created, each with a unique lanthanide dopant. Following the simultaneous injection of the various SPIO compositions into tumor-bearing mice, inductively coupled plasma mass spectroscopy was used to quantitatively and orthogonally assess the concentration of each SPIO composition in serial blood and resected tumor samples.
Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.
Groppe, David M; Urbach, Thomas P; Kutas, Marta
2011-12-01
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
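The first of the four corrections, strong FWER control by permutation, can be sketched concisely: the null distribution of the maximum |t| across all tests is built by permuting group labels, and each observed statistic is referred to that distribution. An illustrative two-sample version for subjects-by-tests data matrices (the free MATLAB tools mentioned implement this and much more):

```python
import numpy as np

def max_t_permutation(data_a, data_b, n_perm=5000, seed=0):
    """Adjusted p-values with strong FWER control via the permutation
    distribution of the maximum two-sample t-statistic.
    data_a, data_b: (n_subjects, n_tests) arrays."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([data_a, data_b])
    n_a = len(data_a)

    def tstat(idx):
        a, b = x[idx[:n_a]], x[idx[n_a:]]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return (a.mean(0) - b.mean(0)) / se

    idx0 = np.arange(len(x))
    t_obs = tstat(idx0)
    max_null = np.array([np.abs(tstat(rng.permutation(idx0))).max()
                         for _ in range(n_perm)])
    # adjusted p: how often the max null statistic beats each observed |t|
    p_adj = (1 + (max_null[:, None] >= np.abs(t_obs)).sum(0)) / (n_perm + 1)
    return t_obs, p_adj
```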
Kneebone, Ian I.; Dewar, Sophie J.
2016-01-01
Background: The current study aimed to examine the psychometric properties of an attributional style measure that can be administered remotely to people who have multiple sclerosis (MS). Methods: A total of 495 participants with MS were recruited. Participants completed the Attributional Style Questionnaire-Survey (ASQ-S) and two comparison measures of cognitive variables via postal survey on three occasions, each 12 months apart. Internal reliability, test-retest reliability and congruent validity were considered. Results: The internal reliability of the ASQ-S was good (α > 0.7). The test-retest correlations were significant, but failed to reach the 0.7 criterion set. The congruent validity of the ASQ-S was established relative to the comparison measures. Conclusions: The psychometric properties of the ASQ-S indicate that it shows promise as a tool for researchers investigating depression in people with MS and is likely sound for clinical use in this population. PMID:28450893
Adhikari, Badri; Hou, Jie; Cheng, Jianlin
2018-03-01
In this study, we report the evaluation of the residue-residue contacts predicted by our three different methods in the CASP12 experiment, focusing on studying the impact of multiple sequence alignment, residue coevolution, and machine learning on contact prediction. The first method (MULTICOM-NOVEL) uses only traditional features (sequence profile, secondary structure, and solvent accessibility) with deep learning to predict contacts and serves as a baseline. The second method (MULTICOM-CONSTRUCT) uses our new alignment algorithm to generate deep multiple sequence alignments to derive coevolution-based features, which are integrated by a neural network method to predict contacts. The third method (MULTICOM-CLUSTER) is a consensus combination of the predictions of the first two methods. We evaluated our methods on 94 CASP12 domains. On a subset of 38 free-modeling domains, our methods achieved an average precision of up to 41.7% for top L/5 long-range contact predictions. The comparison of the three methods shows that the quality and effective depth of multiple sequence alignments, coevolution-based features, and the machine learning integration of coevolution-based and traditional features drive the quality of predicted protein contacts. On the full CASP12 dataset, the coevolution-based features alone can improve the average precision from 28.4% to 41.6%, and the machine learning integration of all the features further raises the precision to 56.3%, when top L/5 predicted long-range contacts are evaluated. The correlation between the precision of contact prediction and the logarithm of the number of effective sequences in alignments is 0.66. © 2017 Wiley Periodicals, Inc.
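The headline metric is mechanical to compute: rank residue pairs with sequence separation of at least 24 (the usual long-range definition) by predicted score, keep the top L/5, and report the fraction that are true contacts. A sketch:

```python
import numpy as np

def top_l5_precision(scores, contacts, seq_len, min_sep=24):
    """Precision of the top-L/5 predicted long-range contacts.
    scores, contacts: symmetric (L, L) prediction and ground-truth maps."""
    i, j = np.triu_indices(seq_len, k=min_sep)          # pairs with |i-j| >= 24
    order = np.argsort(scores[i, j])[::-1][:max(1, seq_len // 5)]
    return contacts[i, j][order].mean()
```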
Multi-criteria comparative evaluation of spallation reaction models
NASA Astrophysics Data System (ADS)
Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya
2017-09-01
This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.
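Of the methods listed, TOPSIS is the most compact to illustrate: alternatives (here, spallation models scored on several criteria) are ranked by their relative closeness to an ideal point in the weighted, normalized criterion space. A generic sketch:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    weights: criterion weights; benefit[j] is True if criterion j
    should be maximized, False if minimized."""
    R = X / np.linalg.norm(X, axis=0)            # vector-normalized matrix
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal point
    d_neg = np.linalg.norm(V - worst, axis=1)    # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)               # higher = better-ranked model
```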
Assessment of Neutrophil Function in Patients with Septic Shock: Comparison of Methods
Wenisch, C.; Fladerer, P.; Patruta, S.; Krause, R.; Hörl, W.
2001-01-01
Patients with septic shock are shown to have decreased neutrophil phagocytic function by multiple assays, and their assessment by whole-blood assays (fluorescence-activated cell sorter analysis) correlates with assays requiring isolated neutrophils (microscopic and spectrophotometric assays). For patients with similar underlying conditions but without septic shock, this correlation does not occur. PMID:11139215
ERIC Educational Resources Information Center
Longabach, Tanya; Peyton, Vicki
2018-01-01
K-12 English language proficiency tests that assess multiple content domains (e.g., listening, speaking, reading, writing) often have subsections based on these content domains; scores assigned to these subsections are commonly known as subscores. Testing programs face increasing customer demands for the reporting of subscores in addition to the…
A Comparison of Domain-Referenced and Classic Psychometric Test Construction Methods.
ERIC Educational Resources Information Center
Willoughby, Lee; And Others
This study compared a domain referenced approach with a traditional psychometric approach in the construction of a test. Results of the December, 1975 Quarterly Profile Exam (QPE) administered to 400 examinees at a university were the source of data. The 400 item QPE is a five alternative multiple choice test of information a "safe"…
Sample size and power considerations in network meta-analysis
2012-01-01
Background: Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings: In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion: The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
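For the simplest network, an indirect comparison of A versus B through a common comparator C, the idea can be sketched under the assumption that a comparison's variance scales inversely with its sample size, so precisions combine harmonically; this is a hedged reading of the approach, and the paper's methods cover full treatment networks:

```python
def indirect_ess(n_ac, n_bc):
    """Heuristic effective sample size of the indirect comparison A vs B
    via common comparator C, assuming comparison variance ~ 1/n so that
    1/n_eff = 1/n_AC + 1/n_BC (illustrative, not the paper's full method)."""
    return n_ac * n_bc / (n_ac + n_bc)

# e.g. 1000 patients in A-vs-C trials and 500 in B-vs-C trials:
print(indirect_ess(1000, 500))  # ~333 'effective' patients for A vs B
```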
Assessment of bifacial photovoltaic module power rating methodologies–inside and out
Deline, Chris; MacAlpine, Sara; Marion, Bill; ...
2017-01-26
One-sun power ratings for bifacial modules are currently undefined. This is partly because there is no standard definition of rear irradiance given 1000 W·m-2 on the front. Using field measurements and simulations, we evaluate multiple deployment scenarios for bifacial modules and provide details on the amount of irradiance that could be expected. A simplified case that represents a single module deployed under conditions consistent with existing one-sun irradiance standards leads to a bifacial reference condition of 1000 W·m-2 Gfront and 130-140 W·m-2 Grear. For fielded systems of bifacial modules, Grear magnitude and spatial uniformity will be affected by self-shade from adjacent modules, varied ground cover, and ground-clearance height. A standard measurement procedure for bifacial modules is also currently undefined. A proposed international standard is under development, which provides the motivation for this paper. Here, we compare field measurements of bifacial modules under natural illumination with proposed indoor test methods, where irradiance is only applied to one side at a time. The indoor method has multiple advantages, including a controlled and repeatable irradiance and thermal environment, along with allowing the use of conventional single-sided flash test equipment. The comparison results are promising, showing that indoor and outdoor methods agree within 1%-2% for multiple rear-irradiance conditions and bifacial module constructions. Furthermore, a comparison with single-diode theory also shows good agreement with indoor measurements, within 1%-2% for power and other current-voltage curve parameters.
MetaPhinder—Identifying Bacteriophage Sequences in Metagenomic Data Sets
Villarroel, Julia; Lund, Ole; Voldby Larsen, Mette; Nielsen, Morten
2016-01-01
Bacteriophages are the most abundant biological entity on the planet, but at the same time do not account for much of the genetic material isolated from most environments due to their small genome sizes. They also show great genetic diversity and mosaic genomes, making it challenging to analyze and understand them. Here we present MetaPhinder, a method to identify assembled genomic fragments (i.e. contigs) of phage origin in metagenomic data sets. The method is based on a comparison to a database of whole genome bacteriophage sequences, integrating hits to multiple genomes to accommodate the mosaic genome structure of many bacteriophages. The method is demonstrated to outperform both BLAST methods based on single hits and methods based on k-mer comparisons. MetaPhinder is available as a web service at the Center for Genomic Epidemiology https://cge.cbs.dtu.dk/services/MetaPhinder/, while the source code can be downloaded from https://bitbucket.org/genomicepidemiology/metaphinder or https://github.com/vanessajurtz/MetaPhinder. PMID:27684958
MetaPhinder-Identifying Bacteriophage Sequences in Metagenomic Data Sets.
Jurtz, Vanessa Isabell; Villarroel, Julia; Lund, Ole; Voldby Larsen, Mette; Nielsen, Morten
Bacteriophages are the most abundant biological entity on the planet, but at the same time do not account for much of the genetic material isolated from most environments due to their small genome sizes. They also show great genetic diversity and mosaic genomes, making it challenging to analyze and understand them. Here we present MetaPhinder, a method to identify assembled genomic fragments (i.e. contigs) of phage origin in metagenomic data sets. The method is based on a comparison to a database of whole genome bacteriophage sequences, integrating hits to multiple genomes to accommodate the mosaic genome structure of many bacteriophages. The method is demonstrated to outperform both BLAST methods based on single hits and methods based on k-mer comparisons. MetaPhinder is available as a web service at the Center for Genomic Epidemiology https://cge.cbs.dtu.dk/services/MetaPhinder/, while the source code can be downloaded from https://bitbucket.org/genomicepidemiology/metaphinder or https://github.com/vanessajurtz/MetaPhinder.
Non-parametric combination and related permutation tests for neuroimaging.
Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E
2016-04-01
In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
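The single-phase combination logic can be sketched directly: partial-test p-values are computed once per synchronized permutation, combined with a function such as Fisher's or Tippett's, and the combined statistic is referred to its own permutation distribution. Illustrative code, assuming the matrix of partial permutation p-values is already available:

```python
import numpy as np

def npc(p_obs, p_perm, method="tippett"):
    """Non-parametric combination of K partial tests.

    p_obs  : (K,) observed partial p-values
    p_perm : (n_perm, K) partial p-values from the same synchronized
             permutations (permutation p-values are never exactly 0)
    """
    if method == "fisher":
        combine = lambda p: -2.0 * np.log(p).sum(axis=-1)  # large = significant
    else:  # Tippett: reject if any partial p-value is small
        combine = lambda p: 1.0 - np.min(p, axis=-1)
    t_obs = combine(np.asarray(p_obs)[None, :])[0]
    t_null = combine(np.asarray(p_perm))
    return (1 + np.sum(t_null >= t_obs)) / (len(p_perm) + 1)
```

Because the same permutations drive every partial test, the dependence among modalities is preserved without having to model it.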
Atypical nucleus accumbens morphology in psychopathy: another limbic piece in the puzzle.
Boccardi, Marina; Bocchetta, Martina; Aronen, Hannu J; Repo-Tiihonen, Eila; Vaurio, Olli; Thompson, Paul M; Tiihonen, Jari; Frisoni, Giovanni B
2013-01-01
Psychopathy has been associated with increased putamen and striatum volumes. The nucleus accumbens - a key structure in reversal learning, less effective in psychopathy - has not yet received specific attention. Moreover, basal ganglia morphology has never been explored. We examined the morphology of the caudate, putamen and accumbens, manually segmented from magnetic resonance images of 26 offenders (age: 32.5 ± 8.4) with medium-high psychopathy (mean PCL-R=30 ± 5) and 25 healthy controls (age: 34.6 ± 10.8). Local differences were statistically modeled using a surface-based radial distance mapping method (p<0.05; multiple comparisons correction through permutation tests). In psychopathy, the caudate and putamen had normal global volume, but different morphology, significant after correction for multiple comparisons, for the right dorsal putamen (permutation test: p=0.02). The volume of the nucleus accumbens was 13% smaller in psychopathy (p corrected for multiple comparisons <0.006). The atypical morphology consisted of predominant anterior hypotrophy bilaterally (10-30%). Caudate and putamen local morphology displayed negative correlation with the lifestyle factor of the PCL-R (permutation test: p=0.05 and 0.03). From these data, psychopathy appears to be associated with an atypical striatal morphology, with highly significant global and local differences of the accumbens. This is consistent with the clinical syndrome and with theories of limbic involvement. Copyright © 2013 Elsevier Ltd. All rights reserved.
Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.
2014-01-01
Purpose Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa(κ) and lambda(λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and Clinical Relevance LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328
Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.
Sharma, Anuj; Manolakos, Elias S
2015-01-01
Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332
Using Comparison of Multiple Strategies in the Mathematics Classroom: Lessons Learned and Next Steps
ERIC Educational Resources Information Center
Durkin, Kelley; Star, Jon R.; Rittle-Johnson, Bethany
2017-01-01
Comparison is a fundamental cognitive process that can support learning in a variety of domains, including mathematics. The current paper aims to summarize empirical findings that support recommendations on using comparison of multiple strategies in mathematics classrooms. We report the results of our classroom-based research on using comparison…
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
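The gain from adding a backward pass can be illustrated on a single motion model: a standard Kalman filter followed by a Rauch-Tung-Striebel (RTS) backward recursion. The paper's method runs a mode-conditioned version of this per IMM model and mixes the results; that mixing logic is omitted in the minimal sketch below, and all noise levels are invented.

```python
# A minimal sketch on ONE motion model: a constant-velocity Kalman filter with
# an RTS backward pass. The paper runs a mode-conditioned smoother per IMM
# model and mixes them; that mixing is omitted, and all values are invented.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # position-only measurements
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])

rng = np.random.default_rng(1)
T = 50
x_true = np.zeros((T, 2)); x_true[0] = [0.0, 1.0]
for k in range(1, T):
    x_true[k] = F @ x_true[k - 1] + rng.multivariate_normal([0, 0], Q)
z = x_true[:, 0] + rng.normal(0, 1, T)

xf = np.zeros((T, 2)); Pf = np.zeros((T, 2, 2))   # filtered estimates
xp = np.zeros((T, 2)); Pp = np.zeros((T, 2, 2))   # predicted estimates
x, P = np.array([z[0], 0.0]), np.eye(2)
for k in range(T):                       # forward filtering pass
    if k > 0:
        x, P = F @ x, F @ P @ F.T + Q
    xp[k], Pp[k] = x, P
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    xf[k], Pf[k] = x, P

xs = xf.copy()                           # backward (RTS) smoothing pass
for k in range(T - 2, -1, -1):
    G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])

print("filter   RMSE:", np.sqrt(np.mean((xf[:, 0] - x_true[:, 0]) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((xs[:, 0] - x_true[:, 0]) ** 2)))
```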
Multivariate meta-analysis using individual participant data.
Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R
2015-06-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
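As a concrete illustration of the bootstrap route to a within-study correlation described above, the sketch below resamples one trial's participants, re-estimates the treatment effect on each of two outcomes per resample, and correlates the resampled estimates. The data are synthetic stand-ins for real IPD.

```python
# A minimal sketch of the bootstrap route to a within-study correlation:
# resample one trial's participants, re-estimate the treatment effect on two
# outcomes per resample, and correlate the estimates. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 200
treat = rng.integers(0, 2, n)
# Two correlated continuous outcomes (e.g. systolic/diastolic BP change).
noise = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n)
y1 = -5.0 * treat + noise[:, 0]
y2 = -3.0 * treat + noise[:, 1]

def effects(idx):
    t = treat[idx]
    return (y1[idx][t == 1].mean() - y1[idx][t == 0].mean(),
            y2[idx][t == 1].mean() - y2[idx][t == 0].mean())

boots = np.array([effects(rng.integers(0, n, n)) for _ in range(2000)])
print("estimated within-study correlation:",
      np.corrcoef(boots.T)[0, 1].round(3))
```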
Hu, Xiaofeng; Hu, Rui; Zhang, Zhaowei; Li, Peiwu; Zhang, Qi; Wang, Min
2016-09-01
A sensitive and specific immunoaffinity column to clean up and isolate multiple mycotoxins was developed along with a rapid one-step sample preparation procedure for ultra-performance liquid chromatography-tandem mass spectrometry analysis. Monoclonal antibodies against aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, zearalenone, ochratoxin A, sterigmatocystin, and T-2 toxin were coupled to microbeads for mycotoxin purification. We optimized a homogenization and extraction procedure as well as column loading and elution conditions to maximize recoveries from complex feed matrices. This method allowed rapid, simple, and simultaneous determination of mycotoxins in feeds with a single chromatographic run. Detection limits for these toxins ranged from 0.006 to 0.12 ng mL⁻¹, and quantitation limits ranged from 0.06 to 0.75 ng mL⁻¹. Concentration curves were linear from 0.12 to 40 μg kg⁻¹ with correlation coefficients of R² > 0.99. Intra-assay and inter-assay comparisons indicated excellent repeatability and reproducibility of the multiple immunoaffinity columns. As a proof of principle, 80 feed samples were tested and several contained multiple mycotoxins. This method is sensitive, rapid, and durable enough for multiple mycotoxin determinations that fulfill European Union and Chinese testing criteria.
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data
An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.
2005-01-01
Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165
Endo, Yuka; Maddukuri, Prasad V; Vieira, Marcelo L C; Pandian, Natesa G; Patel, Ayan R
2006-11-01
Measurement of right ventricular (RV) volumes and right ventricular ejection fraction (RVEF) by three-dimensional echocardiographic (3DE) short-axis disc summation method has been validated in multiple studies. However, in some patients, short-axis images are of insufficient quality for accurate tracing of the RV endocardial border. This study examined the accuracy of long-axis analysis in multiple planes (longitudinal axial plane method) for assessment of RV volumes and RVEF. 3DE images were analyzed in 40 subjects with a broad range of RV function. RV end-diastolic (RVEDV) and end-systolic volumes (RVESV) and RVEF were calculated by both short-axis disc summation method and longitudinal axial plane method. Excellent correlation was obtained between the two methods for RVEDV, RVESV, and RVEF (r = 0.99, 0.99, 0.94, respectively; P < 0.0001 for all comparisons). 3DE longitudinal-axis analysis is a promising technique for the evaluation of RV function, and may provide an alternative method of assessment in patients with suboptimal short-axis images.
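The disc summation computation that both methods rely on is straightforward once the endocardial borders are traced: each slice contributes its area times the slice thickness, and the end-diastolic and end-systolic volumes yield the ejection fraction. A minimal Python sketch with invented traced areas:

```python
# A minimal sketch of disc summation: each traced short-axis slice contributes
# area x slice thickness, and the two volumes give the ejection fraction.
# The traced areas and thickness below are invented for illustration.
import numpy as np

def disc_summation_volume(areas_cm2, thickness_cm):
    """V = sum(A_i * h); cm^3 is equivalent to mL."""
    return float(np.sum(areas_cm2) * thickness_cm)

areas_ed = np.array([4.1, 6.8, 8.2, 8.9, 8.4, 6.9, 4.6, 2.1])  # end-diastole
areas_es = np.array([2.2, 3.9, 5.1, 5.6, 5.0, 3.8, 2.3, 0.9])  # end-systole
h = 1.0                                     # cm between short-axis slices
rvedv = disc_summation_volume(areas_ed, h)
rvesv = disc_summation_volume(areas_es, h)
print(f"RVEDV={rvedv:.1f} mL, RVESV={rvesv:.1f} mL, "
      f"RVEF={100 * (rvedv - rvesv) / rvedv:.0f}%")
```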
A method based on multi-sensor data fusion for fault detection of planetary gearboxes.
Lei, Yaguo; Lin, Jing; He, Zhengjia; Kong, Detong
2012-01-01
Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults.
Analysis of Classes of Singular Steady State Reaction Diffusion Equations
NASA Astrophysics Data System (ADS)
Son, Byungjae
We study positive radial solutions to classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We study both Laplacian as well as p-Laplacian problems with reaction terms that are p-sublinear at infinity. We consider both positone and semipositone reaction terms and establish existence, multiplicity and uniqueness results. Our existence and multiplicity results are achieved by a method of sub-supersolutions and uniqueness results via a combination of maximum principles, comparison principles, energy arguments and a-priori estimates. Our results significantly enhance the literature on p-sublinear positone and semipositone problems. Finally, we provide exact bifurcation curves for several one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and in the nonautonomous case, we employ shooting methods. We use numerical solvers in Mathematica to generate the bifurcation curves.
High Dynamic Range Imaging Using Multiple Exposures
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei
2017-06-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method. By comparison, the method outperforms other methods in terms of imaging quality.
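The reconstruction step, a weighted summation of per-exposure radiance maps, can be sketched compactly. The snippet below uses a synthetic scene, an identity camera response, and a simple hat-shaped weight, all of which are illustrative assumptions rather than the paper's exact choices:

```python
# A minimal sketch of the merge step: per-pixel radiance is a weighted average
# of the per-exposure estimates. The identity camera response and hat-shaped
# weight are illustrative assumptions, not the paper's exact choices.
import numpy as np

def merge_hdr(images, exposure_times, inv_crf=lambda z: z):
    """images: float arrays in [0, 1]; returns a relative radiance map."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # downweight under/over-exposure
        num += w * inv_crf(img) / t         # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

rng = np.random.default_rng(3)
scene = rng.uniform(0.01, 10.0, (4, 4))     # "true" relative radiance
times = [1 / 30, 1 / 125, 1 / 500]
shots = [np.clip(scene * t * 30, 0, 1) for t in times]  # simulated LDR shots
print(merge_hdr(shots, times).round(2))     # recovers radiance up to scale
```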
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
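Simple allometry and the MLP correction are short to express in code: clearance is fit as CL = a·BW^b across species in log-log space, and the MLP method fits CL·MLP instead before dividing out the human maximum life span. A minimal Python sketch; all preclinical values below are invented:

```python
# A minimal sketch of simple allometry (CL = a * BW^b, fit in log-log space)
# and the MLP correction (fit CL*MLP, then divide out the human MLP). All
# preclinical values below are invented for illustration.
import numpy as np

bw  = np.array([0.25, 2.5, 10.0])   # rat, rabbit, dog body weights (kg)
cl  = np.array([1.2, 8.0, 25.0])    # observed clearances (mL/min)
mlp = np.array([4.7, 8.0, 20.0])    # species maximum life spans (years)

def fit_allometry(x, y):
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

a, b = fit_allometry(bw, cl)        # simple allometry, human BW ~ 70 kg
print(f"simple allometry: CL_human ~ {a * 70.0 ** b:.0f} mL/min (b={b:.2f})")

a2, b2 = fit_allometry(bw, cl * mlp)  # MLP correction, human MLP ~ 93 years
print(f"MLP-corrected   : CL_human ~ {a2 * 70.0 ** b2 / 93.0:.0f} mL/min")
```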
Thom, Howard H Z; Capkun, Gorana; Cerulli, Annamaria; Nixon, Richard M; Howard, Luke S
2015-04-12
Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of, two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons, as evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows the inclusion of single-arm, before-and-after, observational studies to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH). Our method uses a random effects model for placebo improvements to include single-arm observational studies in a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH, combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH and aggregate data from RCTs and single-arm observational studies, both identified by a systematic review. Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not been previously possible due to the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide, so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice. Our method for the inclusion of single-arm observational studies allows the performance of indirect comparisons that had previously not been possible due to incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restrictions to RCT evidence and where a mixture of aggregate data and IPD is available.
ERIC Educational Resources Information Center
Ludwig, Timothy D.; Goomas, David T.
2007-01-01
Field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…
ERIC Educational Resources Information Center
Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei
2013-01-01
This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…
ERIC Educational Resources Information Center
Bridgeman, Brent; Pollack, Judith; Burton, Nancy
2008-01-01
Two methods of showing the ability of high school grades (high school grade point averages) and SAT scores to predict cumulative grades in different types of college courses were evaluated in a sample of 26 colleges. Each college contributed data from three cohorts of entering freshmen, and each cohort was followed for at least four years.…
ERIC Educational Resources Information Center
Hagiliassis, Nick; Gulbenkoglu, Hrepsime; Di Marco, Mark; Young, Suzanne; Hudson, Alan
2005-01-01
Background: This paper describes the evaluation of a group program designed specifically to meet the anger management needs of a group of individuals with various levels of intellectual disability and/or complex communication needs. Method: Twenty-nine individuals were randomly assigned to an intervention group or a waiting-list comparison group.…
ERIC Educational Resources Information Center
Meyer, J. Patrick; Setzer, J. Carl
2009-01-01
Recent changes to federal guidelines for the collection of data on race and ethnicity allow respondents to select multiple race categories. Redefining race subgroups in this manner poses problems for research spanning both sets of definitions. NAEP long-term trends have used the single-race subgroup definitions for over thirty years. Little is…
ERIC Educational Resources Information Center
Seok, Soonhwa; DaCosta, Boaventura; Yu, Byeong Min
2015-01-01
The present study compared a spelling practice intervention using a tablet personal computer (PC) and picture cards with three students diagnosed with developmental disabilities. An alternating-treatments design with a non-concurrent multiple-baseline across participants was used. The aims of the present study were: (a) to determine if…
ERIC Educational Resources Information Center
Tyner, Bryan C.; Fienup, Daniel M.
2015-01-01
Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…
USDA-ARS?s Scientific Manuscript database
Traditional microbiological techniques for estimating populations of viable bacteria can be laborious and time consuming. The Most Probable Number (MPN) technique is especially tedious as multiple series of tubes must be inoculated at several different dilutions. Recently, an instrument (TEMPO™) ...
ERIC Educational Resources Information Center
Chen, Xinguang; Stanton, Bonita; Li, Xiaoming; Fang, Xiaoyi; Lin, Danhua; Xiong, Qing
2009-01-01
Objective: To determine whether rural-to-urban migrants in China are more likely than rural and urban residents to engage in risk behaviors. Methods: Comparative analysis of survey data between migrants and rural and urban residents using age standardized rate and multiple logistic regression. Results: The prevalence and frequency of tobacco…
ERIC Educational Resources Information Center
Kover, Sara T.; McDuffie, Andrea; Abbeduto, Leonard; Brown, W. Ted
2012-01-01
Purpose: In this study, the authors examined the impact of sampling context on multiple aspects of expressive language in male participants with fragile X syndrome in comparison to male participants with Down syndrome or typical development. Method: Participants with fragile X syndrome (n = 27), ages 10-17 years, were matched groupwise on…
Evaluating ICT Integration in Turkish K-12 Schools through Teachers' Views
ERIC Educational Resources Information Center
Aydin, Mehmet Kemal; Gürol, Mehmet; Vanderlinde, Ruben
2016-01-01
The current study aims to explore ICT integration in Turkish K-12 schools purposively selected as a representation of F@tih and non-F@tih public schools together with a private school. A convergent mixed methods design was employed with a multiple case strategy, such that it enables casewise comparisons. The quantitative data was…
A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.
2016-01-01
The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…
ERIC Educational Resources Information Center
Early, Jessica Singer; Saidy, Christina
2014-01-01
This mixed method investigation included a quasi-experiment examining if revision instruction enhanced the substantive revising behavior of 15 English language learner (ELL) and multilingual 10th grade students enrolled in an English class for underperforming students in comparison to 14 non-ELL and multilingual students from the same class who…
2011-01-01
Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
Real-time inextensible surgical thread simulation.
Xu, Lang; Liu, Qian
2018-03-27
This paper discusses a real-time simulation method of inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method. The twining and knotting of multiple threads correspond to stable solutions to contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
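The distance constraint at the core of PBD is easy to sketch. The snippet below shows the conventional Gauss-Seidel projection for a particle chain; the paper replaces this iteration with a direct tridiagonal solve for stronger inextensibility, which is not reproduced here. Masses, lengths, and iteration counts are illustrative:

```python
# A minimal sketch of the conventional PBD distance-constraint projection for
# a particle chain. The paper replaces this Gauss-Seidel loop with a direct
# tridiagonal solve for stronger inextensibility (not reproduced here).
import numpy as np

def project_distance_constraints(p, inv_mass, rest_len, iters=20):
    """p: (n, 3) predicted positions; enforces |p[i+1] - p[i]| = rest_len."""
    for _ in range(iters):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            w = inv_mass[i] + inv_mass[i + 1]
            corr = (dist - rest_len) * d / (dist * w)
            p[i] += inv_mass[i] * corr        # pull the segment ends together
            p[i + 1] -= inv_mass[i + 1] * corr
    return p

n = 10
pts = np.zeros((n, 3)); pts[:, 0] = np.linspace(0.0, 1.3, n)  # overstretched
w = np.ones(n); w[0] = 0.0                    # first particle is pinned
pts = project_distance_constraints(pts, w, rest_len=1.0 / (n - 1))
print("segment lengths:", np.linalg.norm(np.diff(pts, axis=0), axis=1).round(4))
```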
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
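Both standard-setting rules are simple to compute, which makes their divergent outcomes easy to demonstrate. A minimal Python sketch with invented scores and judge ratings (the study's data are not reproduced) contrasts the floating norm-referenced cut (cohort mean minus 1 SD) with a modified Angoff cut (averaged judge estimates of a borderline candidate's per-item success probability):

```python
# A minimal sketch contrasting the two standard-setting rules with invented
# scores and judge ratings (the study's data are not reproduced).
import numpy as np

rng = np.random.default_rng(4)
scores = rng.normal(62, 10, 78).clip(0, 100)   # 78 students' MCQ % scores

# Norm-reference: the pass mark floats with the cohort (mean minus 1 SD).
norm_cut = scores.mean() - scores.std(ddof=1)

# Modified Angoff: judges estimate, per item, the probability that a
# borderline candidate answers correctly; the cut is the averaged total.
judge_item_probs = rng.uniform(0.3, 0.6, size=(8, 50))  # 8 judges, 50 items
angoff_cut = judge_item_probs.mean() * 100              # as a % score

for name, cut in (("norm-reference ", norm_cut), ("modified Angoff", angoff_cut)):
    print(f"{name}: cut={cut:.1f}, pass rate={(scores >= cut).mean():.0%}")
```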
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.
1989-01-01
Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
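The Lagrange formulation behind such a linear optimum trim solution can be illustrated as an equality-constrained quadratic program: minimize a quadratic trim-drag model over the surface lifts subject to linear lift and pitching-moment trim constraints, solved in one shot via the KKT system. The drag weights and moment arms below are invented for illustration and are not taken from the paper:

```python
# A minimal sketch of the Lagrange idea: minimize a quadratic trim-drag model
# over three surface lifts subject to linear lift and pitching-moment trim
# constraints, solved via the KKT system. Weights and arms are invented.
import numpy as np

D = np.diag([1.0, 2.5, 3.0])      # quadratic induced-drag weights per surface
A = np.array([[1.0, 1.0, 1.0],    # total lift must equal weight
              [2.0, -0.5, -4.0]]) # pitching moments (arms) must cancel
b = np.array([1.0, 0.0])          # normalized weight; zero net moment

# Minimize L^T D L s.t. A L = b  =>  [[2D, A^T], [A, 0]] [L; mu] = [0; b]
kkt = np.block([[2 * D, A.T], [A, np.zeros((2, 2))]])
sol = np.linalg.solve(kkt, np.concatenate([np.zeros(3), b]))
lifts = sol[:3]
print("optimal lift split:", lifts.round(3),
      "| trim drag:", (lifts @ D @ lifts).round(4))
```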
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternately executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gering, Kevin L.
A method, system, and computer-readable medium are described for characterizing performance loss of an object undergoing an arbitrary aging condition. Baseline aging data may be collected from the object for at least one known baseline aging condition over time, baseline multiple sigmoid model parameters may be determined from the baseline data, and performance loss of the object may be determined over time through multiple sigmoid model parameters associated with the object undergoing the arbitrary aging condition, using a differential deviation-from-baseline approach from the baseline multiple sigmoid model parameters. The system may include an object, monitoring hardware configured to sample performance characteristics of the object, and a processor coupled to the monitoring hardware. The processor is configured to determine performance loss for the arbitrary aging condition from a comparison of the performance characteristics of the object deviating from baseline performance characteristics associated with a baseline aging condition.
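As a rough illustration of fitting "multiple sigmoid model" parameters to baseline aging data, the sketch below fits a sum of two sigmoidal loss terms to synthetic capacity-fade data; the functional form is an assumption, and the patent's exact MSM expression and its differential deviation-from-baseline step are not reproduced:

```python
# A rough sketch (assumed functional form, synthetic data) of fitting a sum of
# two sigmoidal loss mechanisms to baseline aging data; the patent's exact MSM
# expression and its deviation-from-baseline step are not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def msm(t, m1, k1, m2, k2):
    """Performance loss over time t as a sum of two sigmoids (zero at t=0)."""
    sig = lambda m, k: 2.0 * m * (0.5 - 1.0 / (1.0 + np.exp(k * t)))
    return sig(m1, k1) + sig(m2, k2)

t = np.linspace(0, 100, 40)                  # e.g. weeks of baseline aging
rng = np.random.default_rng(5)
loss = msm(t, 0.08, 0.05, 0.15, 0.012) + rng.normal(0, 0.003, t.size)

params, _ = curve_fit(msm, t, loss, p0=[0.1, 0.1, 0.1, 0.01], bounds=(0, 1.0))
print("fitted baseline MSM parameters:", params.round(4))
```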
The Development of the CMS Zero Degree Calorimeters to Derive the Centrality of AA Collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jeffrey Scott
The centrality of PbPb collisions is derived using correlations from the zero degree calorimeter (ZDC) signal and pixel multiplicity at the Compact Muon Solenoid (CMS) Experiment using data from the heavy ion run in 2010. The method to derive the centrality takes the two-dimensional correlation between the ZDC and pixels and linearizes it for sorting events. The initial method for deriving the centrality at CMS uses the energy deposit in the HF detector, and it is compared to the centrality derived by the correlations in ZDC and pixel multiplicity. This comparison highlights the similarities between the results of both methods in central collisions, as expected, and deviations in the results in peripheral collisions. The ZDC signals in peripheral collisions are selected by low pixel multiplicity to obtain a ZDC neutron spectrum, which is used to effectively gain match both sides of the ZDC.
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Tobin, R S; Lomax, P; Kushner, D J
1980-01-01
Nine different brands of membrane filter were compared in the membrane filtration (MF) method, and those with the highest yields were compared against the most-probable-number (MPN) multiple-tube method for total coliform enumeration in simulated sewage-contaminated tap water. The water was chlorinated for 30 min to subject the organisms to stresses similar to those encountered during treatment and distribution of drinking water. Significant differences were observed among membranes in four of the six experiments, with two- to four-times-higher recoveries between the membranes at each extreme of recovery. When results from the membranes with the highest total coliform recovery rate were compared with the MPN results, the MF results were found to be significantly higher in one experiment and equivalent to the MPN results in the other five experiments. A comparison was made of the species enumerated by these methods; in general, the two methods enumerated a similar spectrum of organisms, with some indication that the MF method was subject to greater interference by Aeromonas. PMID:7469407
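The MPN multiple-tube method rests on a maximum-likelihood calculation: assuming Poisson-distributed organisms, the density estimate is the value that best explains the observed pattern of positive tubes across dilutions. A minimal Python sketch with an invented five-tube, three-dilution pattern:

```python
# A minimal sketch of the maximum-likelihood MPN estimate: assuming Poisson-
# distributed organisms, solve the score equation for the density that best
# explains the positive-tube pattern. Tube counts below are invented.
import numpy as np
from scipy.optimize import brentq

volumes  = np.array([10.0, 1.0, 0.1])  # mL of sample per tube, per dilution
n_tubes  = np.array([5, 5, 5])
positive = np.array([5, 3, 1])

def score(mu):
    """Derivative of the log-likelihood w.r.t. density mu (per mL)."""
    return float(np.sum(positive * volumes / (1.0 - np.exp(-mu * volumes)))
                 - np.sum(n_tubes * volumes))

mpn = brentq(score, 1e-6, 100.0)       # root of the score equation
print(f"MPN ~ {100 * mpn:.0f} organisms per 100 mL")  # 5-3-1 gives ~110
```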
A Simple Illustration for the Need of Multiple Comparison Procedures
ERIC Educational Resources Information Center
Carter, Rickey E.
2010-01-01
Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
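One such simple illustration (not necessarily the article's own) is the familywise error computation itself: with k independent tests at level α, the probability of at least one false positive is 1 − (1 − α)^k, which a short simulation of pure-null data reproduces, along with the corrective effect of a Bonferroni adjustment:

```python
# A minimal simulation of the rationale for multiple-comparison adjustment:
# with k independent null tests at level alpha, the familywise error rate is
# 1 - (1 - alpha)^k, and a Bonferroni adjustment restores it to ~alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha, k, n, reps = 0.05, 20, 30, 2000
fp_raw = fp_bonf = 0
for _ in range(reps):
    data = rng.normal(0, 1, (k, n))          # k comparisons, all truly null
    p = stats.ttest_1samp(data, 0, axis=1).pvalue
    fp_raw += (p < alpha).any()              # unadjusted
    fp_bonf += (p < alpha / k).any()         # Bonferroni-adjusted
print(f"theory (unadjusted): {1 - (1 - alpha) ** k:.3f}")   # ~0.64 for k=20
print(f"simulated FWER     : {fp_raw / reps:.3f}")
print(f"Bonferroni FWER    : {fp_bonf / reps:.3f}")         # ~alpha
```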
Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.
Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B
2015-02-10
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Kiiski, Hanni S. M.; Ní Riada, Sinéad; Lalor, Edmund C.; Gonçalves, Nuno R.; Nolan, Hugh; Whelan, Robert; Lonergan, Róisín; Kelly, Siobhán; O'Brien, Marie Claire; Kinsella, Katie; Bramham, Jessica; Burke, Teresa; Ó Donnchadha, Seán; Hutchinson, Michael; Tubridy, Niall; Reilly, Richard B.
2016-01-01
Conduction along the optic nerve is often slowed in multiple sclerosis (MS). This is typically assessed by measuring the latency of the P100 component of the Visual Evoked Potential (VEP) using electroencephalography. The Visual Evoked Spread Spectrum Analysis (VESPA) method, which involves modulating the contrast of a continuous visual stimulus over time, can produce a visually evoked response analogous to the P100 but with a higher signal-to-noise ratio and potentially higher sensitivity to individual differences in comparison to the VEP. The main objective of the study was to conduct a preliminary investigation into the utility of the VESPA method for probing and monitoring visual dysfunction in multiple sclerosis. The latencies and amplitudes of the P100-like VESPA component were compared between healthy controls and multiple sclerosis patients, and multiple sclerosis subgroups. The P100-like VESPA component activations were examined at baseline and over a 3-year period. The study included 43 multiple sclerosis patients (23 relapsing-remitting MS, 20 secondary-progressive MS) and 42 healthy controls who completed the VESPA at baseline. The follow-up sessions were conducted 12 months after baseline with 24 MS patients (15 relapsing-remitting MS, 9 secondary-progressive MS) and 23 controls, and again at 24 months post-baseline with 19 MS patients (13 relapsing-remitting MS, 6 secondary-progressive MS) and 14 controls. The results showed P100-like VESPA latencies to be delayed in multiple sclerosis compared to healthy controls over the 24-month period. Secondary-progressive MS patients had most pronounced delay in P100-like VESPA latency relative to relapsing-remitting MS and controls. There were no longitudinal P100-like VESPA response differences. These findings suggest that the VESPA method is a reproducible electrophysiological method that may have potential utility in the assessment of visual dysfunction in multiple sclerosis. PMID:26726800
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
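The calibration idea in step 1 can be sketched simply: paired measurements of the same samples from both laboratories support a mapping of one lab's readings onto the other's scale before the Main-study comparison. In the minimal Python sketch below, the authors' conditional-expectation normalization is approximated by an ordinary least-squares calibration on synthetic paired data:

```python
# A minimal sketch of the paired-sample calibration: map lab 2's readings onto
# lab 1's scale via regression on paired measurements of the same samples.
# The authors' conditional-expectation normalization is approximated by OLS.
import numpy as np

rng = np.random.default_rng(7)
n_pairs = 100
x = rng.normal(5.0, 1.0, n_pairs)               # true assay values (unseen)
w1 = x + rng.normal(0, 0.3, n_pairs)            # lab 1, paired samples
w2 = 1.2 * x - 0.5 + rng.normal(0, 0.5, n_pairs)  # lab 2, different scale

slope, intercept = np.polyfit(w2, w1, 1)        # calibration from pairs
w2_on_scale1 = slope * w2 + intercept
print("mean |difference| before:", np.abs(w2 - w1).mean().round(3))
print("mean |difference| after :", np.abs(w2_on_scale1 - w1).mean().round(3))
```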
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
NASA Astrophysics Data System (ADS)
Jeong, Seungwon; Lee, Ye-Ryoung; Choi, Wonjun; Kang, Sungsam; Hong, Jin Hee; Park, Jin-Sung; Lim, Yong-Sik; Park, Hong-Gyu; Choi, Wonshik
2018-05-01
The efficient delivery of light energy is a prerequisite for the non-invasive imaging and stimulating of target objects embedded deep within a scattering medium. However, the injected waves experience random diffusion by multiple light scattering, and only a small fraction reaches the target object. Here, we present a method to counteract wave diffusion and to focus multiple-scattered waves at the deeply embedded target. To realize this, we experimentally inject light into the reflection eigenchannels of a specific flight time to preferably enhance the intensity of those multiple-scattered waves that have interacted with the target object. For targets that are too deep to be visible by optical imaging, we demonstrate a more than tenfold enhancement in light energy delivery in comparison with ordinary wave diffusion cases. This work will lay a foundation to enhance the working depth of imaging, sensing and light stimulation.
Negative Example Selection for Protein Function Prediction: The NoGO Database
Youngs, Noah; Penfold-Brown, Duncan; Bonneau, Richard; Shasha, Dennis
2014-01-01
Negative examples – genes that are known not to carry out a given protein function – are rarely recorded in genome and proteome annotation databases, such as the Gene Ontology database. Negative examples are required, however, for several of the most powerful machine learning methods for integrative protein function prediction. Most protein function prediction efforts have relied on a variety of heuristics for the choice of negative examples. Determining the accuracy of methods for negative example prediction is itself a non-trivial task, given that the Open World Assumption as applied to gene annotations rules out many traditional validation metrics. We present a rigorous comparison of these heuristics, utilizing a temporal holdout, and a novel evaluation strategy for negative examples. We add to this comparison several algorithms adapted from Positive-Unlabeled learning scenarios in text-classification, which are the current state of the art methods for generating negative examples in low-density annotation contexts. Lastly, we present two novel algorithms of our own construction, one based on empirical conditional probability, and the other using topic modeling applied to genes and annotations. We demonstrate that our algorithms achieve significantly fewer incorrect negative example predictions than the current state of the art, using multiple benchmarks covering multiple organisms. Our methods may be applied to generate negative examples for any type of method that deals with protein function, and to this end we provide a database of negative examples in several well-studied organisms, for general use (The NoGO database, available at: bonneaulab.bio.nyu.edu/nogo.html). PMID:24922051
A comparison of three fiber tract delineation methods and their impact on white matter analysis.
Sydnor, Valerie J; Rivas-Grajales, Ana María; Lyall, Amanda E; Zhang, Fan; Bouix, Sylvain; Karmacharya, Sarina; Shenton, Martha E; Westin, Carl-Fredrik; Makris, Nikos; Wassermann, Demian; O'Donnell, Lauren J; Kubicki, Marek
2018-05-19
Diffusion magnetic resonance imaging (dMRI) is an important method for studying white matter connectivity in the brain in vivo in both healthy and clinical populations. Improvements in dMRI tractography algorithms, which reconstruct macroscopic three-dimensional white matter fiber pathways, have allowed for methodological advances in the study of white matter; however, insufficient attention has been paid to comparing post-tractography methods that extract white matter fiber tracts of interest from whole-brain tractography. Here we conduct a comparison of three representative and conceptually distinct approaches to fiber tract delineation: 1) a manual multiple region of interest-based approach, 2) an atlas-based approach, and 3) a groupwise fiber clustering approach, by employing methods that exemplify these approaches to delineate the arcuate fasciculus, the middle longitudinal fasciculus, and the uncinate fasciculus in 10 healthy male subjects. We enable qualitative comparisons across methods, conduct quantitative evaluations of tract volume, tract length, mean fractional anisotropy, and true positive and true negative rates, and report measures of intra-method and inter-method agreement. We discuss methodological similarities and differences between the three approaches and the major advantages and drawbacks of each, and review research and clinical contexts for which each method may be most apposite. Emphasis is given to the means by which different white matter fiber tract delineation approaches may systematically produce variable results, despite utilizing the same input tractography and reliance on similar anatomical knowledge. Copyright © 2018. Published by Elsevier Inc.
Atri, Alireza; Rountree, Susan D.; Lopez, Oscar L.; Doody, Rachelle S.
2012-01-01
Background: Randomized controlled efficacy trials (RCTs), the scientific gold standard, are required for regulatory approval of Alzheimer's disease (AD) interventions, yet provide limited information regarding real-world therapeutic effectiveness. Objective: To compare the nature of evidence regarding the combination of approved AD treatments from RCTs versus long-term observational controlled studies (LTOCs). Methods: Comparisons of strengths, limitations, and evidence level for monotherapy [cholinesterase inhibitor (ChEI) or memantine] and combination therapy (ChEI + memantine) in RCTs versus LTOCs. Results: RCTs examined highly selected populations over months. LTOCs collected data across multiple AD stages in large populations over many years. RCTs and LTOCs show similar patterns favoring combination over monotherapy over placebo/no treatment. Long-term combination therapy compared to monotherapy reduced cognitive and functional decline and delayed time to nursing home admission. Persistent treatment was associated with slower decline. While LTOCs used control groups, adjusted for multiple covariates, had higher external validity, and favorable ethical, practical, and cost considerations, their limitations included potential selection bias due to lack of placebo comparisons and randomization. Conclusions: Naturalistic LTOCs provide long-term level II evidence to complement level I evidence from short-term RCTs regarding therapeutic effectiveness in AD that may otherwise be unobtainable. A coordinated strategy/consortium to pool LTOC data from multiple centers to estimate long-term comparative effectiveness, risks/benefits, and costs of AD treatments is needed. PMID:22327239
ERIC Educational Resources Information Center
Alexander, Jennifer L.; Smith, Katie A.; Mataras, Theologia; Shepley, Sally B.; Ayres, Kevin M.
2015-01-01
The two most frequently used methods for assessing performance on chained tasks are single opportunity probes (SOPs) and multiple opportunity probes (MOPs). Of the two, SOPs may be easier and less time-consuming but can suppress actual performance. In comparison, MOPs can provide more information but present the risk of participants acquiring…
Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R
2015-08-28
Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems grows, determining whether different IGRT methods may be used interchangeably presents a challenge, and there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.
Stochastic treatment of electron multiplication without scattering in dielectrics
NASA Technical Reports Server (NTRS)
Lin, D. L.; Beers, B. L.
1981-01-01
By treating the emission of optical phonons as a Markov process, a simple analytic method is developed for calculating the electronic ionization rate per unit length for dielectrics. The effects of scattering from acoustic and optical phonons are neglected. The treatment obtains universal functions in recursive form, the theory depending on only two dimensionless energy ratios. A comparison of the present work with other numerical approaches indicates that the effect of scattering becomes important only when the electric potential energy drop in a mean free path for optical-phonon emission is less than about 25% of the ionization potential. A comparison with Monte Carlo results is also given for Teflon.
A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.
Elliott, Alan C; Hynan, Linda S
2011-04-01
The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for the existence of differences among groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
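For readers outside SAS, a rough open-source analogue of the workflow the macro automates (an omnibus Kruskal-Wallis test followed by Dunn-style pairwise comparisons with Bonferroni adjustment) can be sketched in Python; the macro's exact procedure and tie handling are not reproduced here.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def dunn_posthoc(groups):
    """Dunn's pairwise comparisons after a significant Kruskal-Wallis test,
    Bonferroni-adjusted (ties correction omitted for brevity)."""
    data = np.concatenate(groups)
    ranks = stats.rankdata(data)               # ranks over the pooled sample
    n = len(data)
    idx, means, sizes = 0, [], []
    for g in groups:                           # mean rank per group
        means.append(ranks[idx:idx + len(g)].mean())
        sizes.append(len(g))
        idx += len(g)
    m = len(groups) * (len(groups) - 1) // 2   # number of comparisons
    out = []
    for i, j in combinations(range(len(groups)), 2):
        se = np.sqrt(n * (n + 1) / 12 * (1 / sizes[i] + 1 / sizes[j]))
        z = (means[i] - means[j]) / se
        p = min(2 * stats.norm.sf(abs(z)) * m, 1.0)  # Bonferroni-adjusted
        out.append((i, j, z, p))
    return out

g = [np.random.default_rng(1).normal(loc, 1, 15) for loc in (0, 0, 1)]
H, p = stats.kruskal(*g)                       # omnibus test first
if p < 0.05:
    for i, j, z, p_adj in dunn_posthoc(g):
        print(f"group {i} vs {j}: z = {z:+.2f}, adjusted p = {p_adj:.4f}")
```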
Implementation and Validation of an Impedance Eduction Technique
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.
2011-01-01
Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.
Towards discrete wavelet transform-based human activity recognition
NASA Astrophysics Data System (ADS)
Khare, Manish; Jeon, Moongu
2017-06-01
Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as a feature of human objects to obtain advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods.
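A minimal sketch of the kind of multi-level DWT feature extraction described above, using the PyWavelets library; the subband-energy feature is an illustrative choice, not necessarily the authors' exact descriptor.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(frame, levels=3, wavelet="haar"):
    """Multi-level 2D DWT feature vector for a grayscale video frame.
    The energy of each subband serves as a compact multiresolution
    descriptor of the human object region."""
    coeffs = pywt.wavedec2(frame, wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
    for cH, cV, cD in coeffs[1:]:                  # detail subbands
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.asarray(feats)

frame = np.random.default_rng(0).random((120, 160))  # stand-in for a frame
print(dwt_features(frame).shape)                     # (1 + 3*levels,) = (10,)
```

Feature vectors like this would then be fed to a classifier trained on the labeled action datasets.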
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, P. A.; van Wingerden, J. W.; Wright, A. D.
2011-12-01
This paper presents the structure of an ongoing controller comparison experiment at NREL's National Wind Technology Center (NWTC), the design process for the two controllers compared in this phase of the experiment, and initial comparison results obtained in field-testing. The intention of the study is to demonstrate the advantage of using modern multivariable methods for designing control systems for wind turbines versus conventional approaches. We will demonstrate the advantages through field-test results from experimental turbines located at the NWTC. At least two controllers are being developed side-by-side to meet an incrementally increasing number of turbine load-reduction objectives. The first, a multiple single-input, single-output (m-SISO) approach, uses separately developed, decoupled, and classically tuned controllers, which is, to the best of our knowledge, common practice in the wind industry. The remaining controllers are developed using state-space multiple-input and multiple-output (MIMO) techniques to explicitly account for coupling between loops and to optimize given known frequency structures of the turbine and disturbance.
Multiple-grid convergence acceleration of viscous and inviscid flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1983-01-01
A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple-gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
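The coarse-grid correction idea can be illustrated with a two-grid cycle for a 1D Poisson model problem. This toy Python sketch (weighted-Jacobi smoothing, injection restriction, direct coarse solve, linear interpolation) is a schematic of multiple-gridding in general, not the paper's Euler/Navier-Stokes scheme.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with u(0) = u(1) = 0."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def two_grid(u, f, h):
    """One two-grid cycle: smooth, solve the residual equation on a grid
    of twice the spacing, correct, smooth again. A full multiple-grid
    scheme would recurse onto successively coarser grids."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    rc = r[::2].copy()                        # injection to the coarse grid
    n_c = rc.size
    # direct coarse solve of -e'' = r (tridiagonal system, spacing 2h)
    A = (np.diag(2*np.ones(n_c-2)) - np.diag(np.ones(n_c-3), 1)
         - np.diag(np.ones(n_c-3), -1)) / (2*h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, n_c), ec)
    u += e                                    # coarse-grid correction
    return jacobi(u, f, h, sweeps=3)

n = 129; h = 1/(n-1); x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error shrinks rapidly
```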
Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
NASA Astrophysics Data System (ADS)
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters; therefore, a forecast is needed to predict rainfall intensity. The main factor that causes flooding is high rainfall intensity, which makes rivers exceed their capacity and inundate the surrounding area. Rainfall is a dynamic factor and is therefore of great interest for study. To support rainfall forecasting, a range of methods can be used, from artificial intelligence (AI) to statistics. In this research, we used Adaline as the AI method and multiple linear regression as the statistical method. The more accurate forecast indicates which method is better suited to rainfall forecasting; through this comparison, we identify the best method for rainfall forecasting in this setting.
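A compact sketch of the two methods under comparison: an Adaline (adaptive linear neuron) trained with the Widrow-Hoff delta rule versus a closed-form least-squares multiple linear regression. The synthetic data stand in for actual rainfall predictors and are purely illustrative.

```python
import numpy as np

def adaline_fit(X, y, lr=0.1, epochs=1000):
    """Adaline: a linear unit trained by the Widrow-Hoff delta rule,
    i.e. batch gradient descent on the squared error."""
    Xb = np.c_[np.ones(len(X)), X]            # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        w += lr * Xb.T @ (y - Xb @ w) / len(Xb)
    return w

rng = np.random.default_rng(0)
X = rng.random((200, 3))                      # e.g. humidity, temp, pressure
y = 1.5*X[:, 0] - 0.8*X[:, 1] + 0.3*X[:, 2] + rng.normal(0, 0.05, 200)

w_ada = adaline_fit(X, y)
# multiple linear regression: closed-form least squares on the same data
Xb = np.c_[np.ones(len(X)), X]
w_ols, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(np.round(w_ada, 3), np.round(w_ols, 3))  # both approach the true weights
```

With enough epochs the two coefficient vectors coincide; the practical comparison in such studies is about convergence behavior and forecast accuracy on held-out data.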
COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY
Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.
2015-01-01
Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198
Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance
NASA Technical Reports Server (NTRS)
Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David
2004-01-01
The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.
Atmospheric aerosols: Their Optical Properties and Effects (supplement)
NASA Technical Reports Server (NTRS)
1976-01-01
A digest of technical papers is presented. Topics include aerosol size distribution from spectral attenuation with scattering measurements; comparison of extinction and backscattering coefficients for measured and analytic stratospheric aerosol size distributions; using hybrid methods to solve problems in radiative transfer and in multiple scattering; blue moon phenomena; absorption refractive index of aerosols in the Denver pollution cloud; a two dimensional stratospheric model of the dispersion of aerosols from the Fuego volcanic eruption; the variation of the aerosol volume to light scattering coefficient; spectrophone in situ measurements of the absorption of visible light by aerosols; a reassessment of the Krakatoa volcanic turbidity, and multiple scattering in the sky radiance.
Janssen, T J; Guelen, P J; Vree, T B; Botterblom, M H; Valducci, R
1988-01-01
The bioavailability of a new ambroxol sustained release preparation (75 mg) based on a dialyzing membrane for controlled release was studied in healthy volunteers after single and multiple oral doses in comparison with a standard sustained release formulation in a cross-over study under carefully controlled conditions. Plasma concentrations of ambroxol were measured by means of an HPLC method. Based on AUC data, both preparations are found to be bioequivalent, but show different plasma concentration profiles. The test preparation showed a more pronounced sustained release profile than the reference preparation (single dose), resulting in significantly higher steady state plasma levels.
Elias, Andrew; Crayton, Samuel H.; Warden-Rothman, Robert; Tsourkas, Andrew
2014-01-01
Given the rapidly expanding library of disease biomarkers and targeting agents, the number of unique targeted nanoparticles is growing exponentially. The high variability and expense of animal testing often makes it unfeasible to examine this large number of nanoparticles in vivo. This often leads to the investigation of a single formulation that performed best in vitro. However, nanoparticle performance in vivo depends on many variables, many of which cannot be adequately assessed with cell-based assays. To address this issue, we developed a lanthanide-doped nanoparticle method that allows quantitative comparison of multiple targeted nanoparticles simultaneously. Specifically, superparamagnetic iron oxide (SPIO) nanoparticles with different targeting ligands were created, each with a unique lanthanide dopant. Following the simultaneous injection of the various SPIO compositions into tumor-bearing mice, inductively coupled plasma mass spectroscopy was used to quantitatively and orthogonally assess the concentration of each SPIO composition in serial blood and resected tumor samples. PMID:25068300
Eckner, Karl F.
1998-01-01
A total of 338 water samples, 261 drinking water samples and 77 bathing water samples, obtained for routine testing were analyzed in duplicate by Swedish standard methods using multiple-tube fermentation or membrane filtration and by the Colilert and/or Enterolert methods. Water samples came from a wide variety of sources in southern Sweden (Skåne). The Colilert method was found to be more sensitive than Swedish standard methods for detecting coliform bacteria and of equal sensitivity for detecting Escherichia coli when all drinking water samples were grouped together. Based on these results, Swedac, the Swedish laboratory accreditation body, approved for the first time in Sweden use of the Colilert method at this laboratory for the analysis of all water sources not falling under public water regulations (A-krav). The coliform detection study of bathing water yielded anomalous results due to confirmation difficulties. E. coli detection in bathing water was similar by both the Colilert and Swedish standard methods as was fecal streptococcus and enterococcus detection by both the Enterolert and Swedish standard methods. PMID:9687478
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
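The core of the multiple-rule bias can be demonstrated with a few lines of simulation: when several rules with identical true error are compared on one dataset and the minimum estimated error is reported, the report is systematically optimistic. The sketch below is illustrative, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_error = 0.25        # identical true error for every candidate rule
n_rules, n_test, n_trials = 10, 100, 5000

# Each trial: estimate the error of each rule on one dataset (binomial
# noise around the common true error), then report the minimum estimate.
est = rng.binomial(n_test, true_error, (n_trials, n_rules)) / n_test
reported = est.min(axis=1)

print(f"true error: {true_error:.3f}")
print(f"mean estimate for one fixed rule:        {est[:, 0].mean():.3f}")
print(f"mean reported best-of-{n_rules} estimate:      {reported.mean():.3f}")
# The gap is the multiple-rule bias: choosing the rule with minimum
# estimated error on a single dataset understates its true error.
```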
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Efficient two-dimensional compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
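A schematic of measurement-matrix design by coherence reduction: gradient descent on the off-diagonal Gram energy, a smooth surrogate for mutual coherence. The objective, step size, and renormalization below are assumptions for illustration, not the paper's 2D-MMDGD algorithm.

```python
import numpy as np

def coherence(A):
    """Mutual coherence: max |<a_i, a_j>| over normalized distinct columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = An.T @ An
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

def design_matrix(m, n, steps=1000, lr=0.01, seed=0):
    """Gradient descent on ||A^T A - I||_F^2 with column renormalization,
    a smooth surrogate for minimizing coherence."""
    A = np.random.default_rng(seed).normal(size=(m, n))
    for _ in range(steps):
        An = A / np.linalg.norm(A, axis=0, keepdims=True)
        G = An.T @ An
        grad = 4 * An @ (G - np.eye(n))   # gradient of the Gram objective
        A = An - lr * grad
    return A / np.linalg.norm(A, axis=0, keepdims=True)

A0 = np.random.default_rng(0).normal(size=(20, 60))
A1 = design_matrix(20, 60)
print(f"random Gaussian coherence: {coherence(A0):.3f}")
print(f"optimized coherence:       {coherence(A1):.3f}")
```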
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series coming from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
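The Hankel-SVD decomposition at the heart of such a method can be sketched in a few lines. This single-level version (dominant singular component as the low-frequency part, anti-diagonal averaging for reconstruction) is a simplification of the multilevel MSVD described above.

```python
import numpy as np
from scipy.linalg import hankel

def svd_decompose(x, L=None):
    """One Hankel-SVD level: split a series into a smooth (low-frequency)
    component and a residual (high-frequency) component. MSVD would apply
    this recursively over multiple levels."""
    n = len(x)
    L = L or n // 2
    H = hankel(x[:L], x[L-1:])                 # L x (n-L+1), H[i,j] = x[i+j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = s[0] * np.outer(U[:, 0], Vt[0])    # dominant singular component
    # reconstruct a series by averaging anti-diagonals (Hankelization)
    low = np.array([np.mean(H_low[::-1].diagonal(k - L + 1))
                    for k in range(n)])
    return low, x - low

t = np.arange(780)                              # ~15 years of weekly counts
x = 50 + 10*np.sin(2*np.pi*t/52) + np.random.default_rng(0).normal(0, 3, 780)
low, high = svd_decompose(x)
print(low[:5].round(1), np.std(high).round(2))
```

Each component series would then be forecast separately (e.g. by a MIMO model) and the forecasts recombined.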
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
NASA Technical Reports Server (NTRS)
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
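A stripped-down illustration of propagating multiple dose realizations through a risk model: fit the same dose-response model to each plausible dose vector and pool with Rubin-style rules, so that shared dose uncertainty widens the interval. This is a schematic analogue, not the authors' Bayesian model-averaging implementation; all data and error magnitudes are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, m = 500, 50                                   # subjects, dose realizations
true_dose = rng.gamma(2.0, 0.5, n)
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * true_dose)))  # true logistic dose-response
y = rng.binomial(1, p)

betas, variances = [], []
for _ in range(m):
    shared = rng.lognormal(0, 0.2)               # shared (systematic) error
    dose_k = true_dose * shared * rng.lognormal(0, 0.2, n)  # unshared error
    fit = sm.Logit(y, sm.add_constant(dose_k)).fit(disp=0)
    betas.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

b = np.mean(betas)
between = np.var(betas, ddof=1)
total_var = np.mean(variances) + (1 + 1/m) * between  # Rubin-style pooling
print(f"pooled dose coefficient: {b:.3f} +/- {np.sqrt(total_var):.3f}")
```

A single best-estimate dose vector would yield only the within-fit variance; the between-realization term is what restores honest coverage when dose is uncertain.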
Lindstedt, Bjørn-Arne; Vardund, Traute; Kapperud, Georg
2004-08-01
The Multiple-Locus Variable-Number Tandem-Repeats Analysis (MLVA) method is currently being used as the primary typing tool for Shiga-toxin-producing Escherichia coli (STEC) O157 isolates in our laboratory. The initial assay was performed using a single fluorescent dye and the different patterns were assigned using a gel image. Here, we present a significantly improved assay using multiple dye colors and enhanced PCR multiplexing to increase speed, and ease the interpretation of the results. The different MLVA patterns are now based on allele sizes entered as character values, thus removing the uncertainties introduced when analyzing band patterns from the gel image. We additionally propose an easy numbering scheme for the identification of separate isolates that will facilitate exchange of typing data. Seventy-two human and animal strains of Shiga-toxin-producing E. coli O157 were used for the development of the improved MLVA assay. The method is based on capillary separation of multiplexed PCR products of VNTR loci in the E. coli O157 genome labeled with multiple fluorescent dyes. The different alleles at each locus were then assigned to allele numbers, which were used for strain comparison.
Liu, Tian; Liu, Jing; de Rochefort, Ludovic; Spincemaille, Pascal; Khalidov, Ildar; Ledoux, James Robert; Wang, Yi
2011-09-01
Magnetic susceptibility varies among brain structures and provides insights into the chemical and molecular composition of brain tissues. However, the determination of an arbitrary susceptibility distribution from the measured MR signal phase is a challenging, ill-conditioned inverse problem. Although a previous method named calculation of susceptibility through multiple orientation sampling (COSMOS) has solved this inverse problem both theoretically and experimentally using multiple angle acquisitions, it is often impractical to carry out on human subjects. Recently, the feasibility of calculating the brain susceptibility distribution from a single-angle acquisition was demonstrated using morphology enabled dipole inversion (MEDI). In this study, we further improved the original MEDI method by sparsifying the edges in the quantitative susceptibility map that do not have a corresponding edge in the magnitude image. Quantitative susceptibility maps generated by the improved MEDI were compared qualitatively and quantitatively with those generated by calculation of susceptibility through multiple orientation sampling. The results show a high degree of agreement between MEDI and calculation of susceptibility through multiple orientation sampling, and the practicality of MEDI allows many potential clinical applications. Copyright © 2011 Wiley-Liss, Inc.
Benloucif, Susan; Burgess, Helen J.; Klerman, Elizabeth B.; Lewy, Alfred J.; Middleton, Benita; Murphy, Patricia J.; Parry, Barbara L.; Revell, Victoria L.
2008-01-01
Study Objectives: To provide guidelines for collecting and analyzing urinary, salivary, and plasma melatonin, thereby assisting clinicians and researchers in determining which method of measuring melatonin is most appropriate for their particular needs and facilitating the comparison of data between laboratories. Methods: A modified RAND process was utilized to derive recommendations for methods of measuring melatonin in humans. Results: Consensus-based guidelines are presented for collecting and analyzing melatonin for studies that are conducted in the natural living environment, the clinical setting, and in-patient research facilities under controlled conditions. Conclusions: The benefits and disadvantages of current methods of collecting and analyzing melatonin are summarized. Although a single method of analysis would be the most effective way to compare studies, limitations of current methods preclude this possibility. Given that the best analysis method for use under multiple conditions is not established, it is recommended to include, in any published report, one of the established low threshold measures of dim light melatonin onset to facilitate comparison between studies. Citation: Benloucif S; Burgess HJ; Klerman EB; Lewy AJ; Middleton B; Murphy PJ; Parry BL; Revell VL. Measuring melatonin in humans. J Clin Sleep Med 2008;4(1):66-69. PMID:18350967
Interdisciplinary research on patient-provider communication: a cross-method comparison.
Chou, Wen-Ying Sylvia; Han, Paul; Pilsner, Alison; Coa, Kisha; Greenberg, Larrie; Blatt, Benjamin
2011-01-01
Patient-provider communication, a key aspect of healthcare delivery, has been assessed through multiple methods for purposes of research, education, and quality control. Common techniques include satisfaction ratings and quantitatively- and qualitatively-oriented direct observations. Identifying the strengths and weaknesses of different approaches is critically important in determining the appropriate assessment method for a specific research or practical goal. Analyzing ten videotaped simulated encounters between medical students and Standardized Patients (SPs), this study compared three existing assessment methods through the same data set. Methods included: (1) dichotomized SP ratings on students' communication skills; (2) Roter Interaction Analysis System (RIAS) analysis; and (3) inductive discourse analysis informed by sociolinguistic theories. The large dichotomous contrast between good and poor ratings in (1) was not evidenced in any of the other methods. Following a discussion of strengths and weaknesses of each approach, we pilot-tested a combined assessment done by coders blinded to results of (1)-(3). This type of integrative approach has the potential of adding a quantifiable dimension to qualitative, discourse-based observations. Subjecting the same data set to separate analytic methods provides an excellent opportunity for methodological comparisons with the goal of informing future assessment of clinical encounters.
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold, and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostics tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
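The estimation step lends itself to a compact sketch: blur the attenuation-corrected image with a scatter kernel, scale by a scatter fraction, and subtract. The Gaussian kernel and spatially uniform fraction below are placeholders for the paper's measured scatter function and image-based scatter fraction function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc(img_ac, sigma=8.0, scatter_fraction=0.3):
    """Schematic image-based scatter correction: the scatter component is
    modeled as a blurred, scaled copy of the attenuation-corrected image
    and subtracted (values clipped at zero to keep counts physical)."""
    scatter = scatter_fraction * gaussian_filter(img_ac, sigma)
    return np.clip(img_ac - scatter, 0, None)

img = np.random.default_rng(0).poisson(100, (128, 128)).astype(float)
print(ibsc(img).mean(), img.mean())   # corrected mean drops by ~30%
```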
Multiple network alignment via multiMAGNA+.
Vijayan, Vipin; Milenkovic, Tijana
2017-08-21
Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). Then, they evaluate alignment quality by measuring the amount of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction in addition to node conservation may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that can achieve this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster than existing methods. That is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for more fair MNA method comparison compared to the existing alignment quality measures. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.
Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R
2016-01-05
Different methods have been introduced to enhance the selectivity of UV-spectrophotometry, thus enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle against application of UV-spectrophotometry. The developed ratio difference at coabsorptive point method (RDC) represents a simple, effective solution for the mentioned problem, where the additive property of light absorbance enabled the consideration of the two components as multiples of the lower-absorptivity component at a certain wavelength (coabsorptive point), at which their total concentration multiples could be determined, whereas the other component was selectively determined by applying the ratio difference method in a single step. The mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies that problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracy 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated as per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
ZnO-based multiple channel and multiple gate FinMOSFETs
NASA Astrophysics Data System (ADS)
Lee, Ching-Ting; Huang, Hung-Lin; Tseng, Chun-Yen; Lee, Hsin-Ying
2016-02-01
In recent years, zinc oxide (ZnO)-based metal-oxide-semiconductor field-effect transistors (MOSFETs) have attracted much attention, because ZnO-based semiconductors possess several advantages, including large exciton binding energy, nontoxicity, biocompatibility, low material cost, and a wide direct bandgap. Moreover, the ZnO-based MOSFET is one of the most promising devices, due to its applications in microwave power amplifiers, logic circuits, large-scale integrated circuits, and logic swing. In this study, to enhance the performance of ZnO-based MOSFETs, ZnO-based multiple channel and multiple gate structured FinMOSFETs were fabricated using the simple laser interference photolithography method and the self-aligned photolithography method. The multiple channel structure provided additional control of the sidewall depletion width, improving channel controllability, because the multiple channel sidewall portions were surrounded by the gate electrode. Furthermore, the multiple gate structure had a shorter distance between source and gate and a shorter gate length between two gates, enhancing gate operating performance. Moreover, the shorter distance between source and gate could enhance the electron velocity in the channel fin structure of the multiple gate structure. In this work, ninety-one channels and four gates were used in the FinMOSFETs. Consequently, the drain-source saturation current (IDSS) and maximum transconductance (gm) of the ZnO-based multiple channel and multiple gate structured FinFETs operated at a drain-source voltage (VDS) of 10 V and a gate-source voltage (VGS) of 0 V were improved from 11.5 mA/mm to 13.7 mA/mm and from 4.1 mS/mm to 6.9 mS/mm, respectively, in comparison with the conventional ZnO-based single channel and single gate MOSFETs.
Ambler, Gareth; Omar, Rumana Z; Royston, Patrick
2007-06-01
Risk models that aim to predict the future course and outcome of disease processes are increasingly used in health research, and it is important that they are accurate and reliable. Most of these risk models are fitted using routinely collected data in hospitals or general practices. Clinical outcomes such as short-term mortality will be near-complete, but many of the predictors may have missing values. A common approach to dealing with this is to perform a complete-case analysis. However, this may lead to overfitted models and biased estimates if entire patient subgroups are excluded. The aim of this paper is to investigate a number of methods for imputing missing data to evaluate their effect on risk model estimation and the reliability of the predictions. Multiple imputation methods, including hot-decking and multiple imputation by chained equations (MICE), were investigated along with several single imputation methods. A large national cardiac surgery database was used to create simulated yet realistic datasets. The results suggest that complete-case analysis may produce unreliable risk predictions and should be avoided. Conditional mean imputation performed well in our scenario, but may not be appropriate if using variable selection methods. MICE was amongst the best performing multiple imputation methods with regards to the quality of the predictions. Additionally, it produced the least biased estimates, with good coverage, and hence is recommended for use in practice.
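For orientation, a minimal chained-equations example using scikit-learn's IterativeImputer, one common open implementation of the MICE idea (not the software evaluated in the paper); sampling from the posterior with several seeds yields the multiple completed datasets that multiple imputation requires. The data here are synthetic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 3] += X[:, 0] - 0.5 * X[:, 1]          # give predictors some structure
mask = rng.random(X.shape) < 0.15           # 15% of values missing at random
X_miss = np.where(mask, np.nan, X)

# Five stochastic completions of the dataset via chained equations.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=k).fit_transform(X_miss)
    for k in range(5)
]
# Downstream, the risk model is fitted to each completed dataset and the
# estimates pooled (e.g. by Rubin's rules), rather than analyzing a
# single filled-in dataset.
print(np.mean([im[mask].std() for im in imputations]).round(3))
```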
Beck, Andrew; Tesh, Robert B.; Wood, Thomas G.; Widen, Steven G.; Ryman, Kate D.; Barrett, Alan D. T.
2014-01-01
Background. The first comparison of a live RNA viral vaccine strain to its wild-type parental strain by deep sequencing is presented using as a model the yellow fever virus (YFV) live vaccine strain 17D-204 and its wild-type parental strain, Asibi. Methods. The YFV 17D-204 vaccine genome was compared to that of the parental strain Asibi by massively parallel methods. Variability was compared on multiple scales of the viral genomes. A modeled exploration of small-frequency variants was performed to reconstruct plausible regions of mutational plasticity. Results. Overt quasispecies diversity is a feature of the parental strain, whereas the live vaccine strain lacks diversity according to multiple independent measurements. A lack of attenuating mutations in the Asibi population relative to that of 17D-204 was observed, demonstrating that the vaccine strain was derived by discrete mutation of Asibi and not by selection of genomes in the wild-type population. Conclusions. Relative quasispecies structure is a plausible correlate of attenuation for live viral vaccines. Analyses such as these of attenuated viruses improve our understanding of the molecular basis of vaccine attenuation and provide critical information on the stability of live vaccines and the risk of reversion to virulence. PMID:24141982
Creative females have larger white matter structures: Evidence from a large sample study.
Takeuchi, Hikaru; Taki, Yasuyuki; Nouchi, Rui; Yokoyama, Ryoichi; Kotozaki, Yuka; Nakagawa, Seishu; Sekiguchi, Atsushi; Iizuka, Kunio; Yamamoto, Yuki; Hanawa, Sugiko; Araki, Tsuyoshi; Makoto Miyauchi, Carlos; Shinada, Takamitsu; Sakaki, Kohei; Sassa, Yuko; Nozawa, Takayuki; Ikeda, Shigeyuki; Yokota, Susumu; Daniele, Magistro; Kawashima, Ryuta
2017-01-01
The importance of brain connectivity for creativity has been theoretically suggested and empirically demonstrated. Studies have shown sex differences in creativity measured by divergent thinking (CMDT) as well as sex differences in the structural correlates of CMDT. However, the relationships between regional white matter volume (rWMV) and CMDT and associated sex differences have never been directly investigated. In addition, structural studies have shown poor replicability and inaccuracy of multiple comparisons over the whole brain. To address these issues, we used the data from a large sample of healthy young adults (776 males and 560 females; mean age: 20.8 years, SD = 0.8). We investigated the relationship between CMDT and WMV using the newest version of voxel-based morphometry (VBM). We corrected for multiple comparisons over whole brain using the permutation-based method, which is known to be quite accurate and robust. Significant positive correlations between rWMV and CMDT scores were observed in widespread areas below the neocortex specifically in females. These associations with CMDT were not observed in analyses of fractional anisotropy using diffusion tensor imaging. Using rigorous methods, our findings further supported the importance of brain connectivity for creativity as well as its female-specific association. Hum Brain Mapp 38:414-430, 2017. © 2016 Wiley Periodicals, Inc.
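The permutation-based correction referred to here can be illustrated with the classic maximum-statistic scheme; the sketch below is a generic stand-in for the VBM tooling, with synthetic "voxels" and a simple correlation statistic.

```python
import numpy as np

def permutation_maxT(X, y, n_perm=2000, seed=0):
    """Permutation-based familywise error correction via the maximum
    statistic: correlate a score with every voxel, build the null from
    the max |r| over voxels under shuffled scores, and threshold the
    observed |r| at the null's 95th percentile."""
    rng = np.random.default_rng(seed)
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    r_obs = Xc.T @ yc / len(y)                     # per-voxel correlations
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        null_max[i] = np.abs(Xc.T @ rng.permutation(yc) / len(y)).max()
    thr = np.quantile(null_max, 0.95)              # FWE 5% threshold
    return r_obs, thr

X = np.random.default_rng(1).normal(size=(200, 1000))  # subjects x voxels
y = X[:, 0] * 0.5 + np.random.default_rng(2).normal(size=200)
r, thr = permutation_maxT(X, y)
print(f"threshold |r| = {thr:.3f}; significant voxels: {(np.abs(r) > thr).sum()}")
```

Because the null distribution is built from the maximum over all voxels, any voxel exceeding the threshold is significant with familywise error controlled at 5%, without parametric assumptions.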
NASA Astrophysics Data System (ADS)
Badeau, Ryan; White, Daniel R.; Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.
2017-12-01
The ability to solve physics problems that require multiple concepts from across the physics curriculum—"synthesis" problems—is often a goal of physics instruction. Three experiments were designed to evaluate the effectiveness of two instructional methods employing worked examples on student performance with synthesis problems; these instructional techniques, analogical comparison and self-explanation, have previously been studied primarily in the context of single-concept problems. Across three experiments with students from introductory calculus-based physics courses, both self-explanation and certain kinds of analogical comparison of worked examples significantly improved student performance on a target synthesis problem, with distinct improvements in recognition of the relevant concepts. More specifically, analogical comparison significantly improved student performance when the comparisons were invoked between worked synthesis examples. In contrast, similar comparisons between corresponding pairs of worked single-concept examples did not significantly improve performance. On a more complicated synthesis problem, self-explanation was significantly more effective than analogical comparison, potentially due to differences in how successfully students encoded the full structure of the worked examples. Finally, we find that the two techniques can be combined for additional benefit, with the trade-off of slightly more time on task.
Round-robin comparison of methods for the detection of human enteric viruses in lettuce.
Le Guyader, Françoise S; Schultz, Anna-Charlotte; Haugarreau, Larissa; Croci, Luciana; Maunula, Leena; Duizer, Erwin; Lodder-Verschoor, Froukje; von Bonsdorff, Carl-Henrik; Suffredini, Elizabetha; van der Poel, Wim M M; Reymundo, Rosanna; Koopmans, Marion
2004-10-01
Five methods that detect human enteric virus contamination in lettuce were compared. To mimic the multiple contaminations observed after sewage contamination, lettuce was artificially contaminated with human calicivirus, poliovirus, and animal calicivirus strains at different concentrations. Nucleic acid extractions were done at the same time in the same laboratory to reduce assay-to-assay variability. Results showed that the two critical steps are the washing step and the removal of inhibitors. The more reliable methods (sensitivity, simplicity, low cost) included an elution/concentration step and a commercial kit. Such development of sensitive methods for viral detection in foods other than shellfish is important to improve food safety.
Comparison of Signals from Gravitational Wave Detectors with Instantaneous Time-Frequency Maps
NASA Technical Reports Server (NTRS)
Stroeer, A.; Blackburn, L.; Camp, J.
2011-01-01
Gravitational wave astronomy relies on the use of multiple detectors, so that coincident detections may distinguish real signals from instrumental artifacts, and also so that relative timing of signals can provide the sky position of sources. We show that the comparison of instantaneous time-frequency and time-amplitude maps provided by the Hilbert-Huang Transform (HHT) can be used effectively for relative signal timing of common signals, to discriminate between the case of identical coincident signals and random noise coincidences and to provide a classification of signals based on their time-frequency trajectories. The comparison is done with a χ² goodness-of-fit method which includes contributions from both the instantaneous amplitude and frequency components of the HHT to match two signals in the time domain.
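The abstract does not give the statistic explicitly; one plausible form consistent with the description (contributions from both instantaneous amplitude and frequency, matched sample-by-sample in time) is sketched below, with the weighting terms as assumptions.

```latex
% Schematic chi-squared statistic for comparing two HHT decompositions:
% a_k(t) and f_k(t) denote the instantaneous amplitude and frequency of
% detector k; sigma_a and sigma_f are noise-derived uncertainties.
\[
  \chi^2 \;=\; \sum_{t}
  \left[
    \frac{\bigl(a_1(t) - a_2(t)\bigr)^2}{\sigma_a^2(t)}
    \;+\;
    \frac{\bigl(f_1(t) - f_2(t)\bigr)^2}{\sigma_f^2(t)}
  \right]
\]
% For relative timing, one signal is shifted in time and the shift that
% minimizes chi^2 is taken as the inter-detector delay.
```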
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
Temperature Distribution in a Composite of Opaque and Semitransparent Spectral Layers
NASA Technical Reports Server (NTRS)
Siegel, Robert
1997-01-01
The analysis of radiative transfer becomes computationally complex for a composite when there are multiple layers and multiple spectral bands. A convenient analytical method is developed for combined radiation and conduction in a composite of alternating semitransparent and opaque layers. The semitransparent layers absorb, scatter, and emit radiation, and spectral properties with large scattering are included. The two-flux method is used, and its applicability is verified by comparison with a basic solution in the literature. The differential equation in the two-flux method is solved by deriving a Green's function. The solution technique is applied to analyze radiation effects in a multilayer zirconia thermal barrier coating with internal radiation shields for conditions in an aircraft engine combustor. The zirconia radiative properties are modeled by two spectral bands. Thin opaque layers within the coating are used to decrease radiant transmission that can degrade the zirconia insulating ability. With radiation shields, the temperature distributions more closely approach the opaque limit that provides the lowest metal wall temperatures.
Pare, Guillaume; Mao, Shihong; Deng, Wei Q
2016-06-08
Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.
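The abstract does not spell out the estimator; a standard summary-statistic identity that such a regional-variance method builds on can be written as follows. This is a schematic of the underlying algebra, not necessarily the authors' exact LARGV formula.

```latex
% With standardized marginal effect estimates b for the p variants in a
% region and LD (correlation) matrix R, the joint effects of the
% corresponding multiple linear regression are
\[
  \hat{\boldsymbol\beta} \;=\; R^{-1}\mathbf{b},
\]
% and the regional phenotypic variance explained is
\[
  \widehat{h^2_{\text{region}}}
  \;=\; \hat{\boldsymbol\beta}^{\top} R \,\hat{\boldsymbol\beta}
  \;=\; \mathbf{b}^{\top} R^{-1} \mathbf{b},
\]
% which reduces to the sum of squared marginal effects when R = I (no LD)
% and otherwise corrects for correlation among the variants.
```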
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total of 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
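The dynamic programming step can be sketched generically as a minimum-cost top-to-bottom path search through a cost image; the move set, toy data, and cost definition below are illustrative and omit the paper's adaptive start/end-point selection and guidance constraints.

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic programming for a top-to-bottom separation path through a
    cost image (low cost along the junction between the lungs). Moves are
    restricted to the three nearest columns per row, which keeps the path
    from wandering into surrounding tissue."""
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(0, j - 1), min(w, j + 2)
            k = np.argmin(acc[i - 1, lo:hi]) + lo
            acc[i, j] += acc[i - 1, k]
            back[i, j] = k
    path = [int(np.argmin(acc[-1]))]          # best end point, last row
    for i in range(h - 1, 0, -1):             # backtrack to the start point
        path.append(back[i, path[-1]])
    return path[::-1]                         # column index per row

# toy cost image: a dark vertical junction near column 32 plus noise
img = np.random.default_rng(0).random((64, 64)) + 1.0
img[:, 32] *= 0.1
print(min_cost_path(img)[:8])                 # path locks onto column 32
```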
Comparison of scavenging capacities of vegetables by ORAC and EPR.
Kameya, Hiromi; Watanabe, Jun; Takano-Ishikawa, Yuko; Todoriki, Setsuko
2014-02-15
Reactive oxygen species (ROS) are considered to be causative agents of many health problems. In spite of this, the radical-specific scavenging capacities of food samples have not been well studied. In the present work, we have developed an electron paramagnetic resonance (EPR) spin trapping method for analysis of the scavenging capacities of food samples for multiple ROS, utilising the same photolysis procedure for generating each type of radical. The optimal conditions for effective evaluation of hydroxyl, superoxide, and alkoxyl radical scavenging capacity were determined. Quantification of radical adducts was found to be highly reproducible, with variations of less than 4%. The optimised EPR spin trapping method was used to analyse the scavenging capacities of 54 different vegetable extracts for multiple radicals, and the results were compared with oxygen radical absorption capacity values. Good correlations between the two methods were observed for superoxide and alkoxyl radicals, but not for hydroxyl. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pare, Guillaume; Mao, Shihong; Deng, Wei Q.
2016-01-01
Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health and Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance. PMID:27273519
Potassium Channel KIR4.1 as an Immune Target in Multiple Sclerosis
Srivastava, Rajneesh; Aslam, Muhammad; Kalluri, Sudhakar Reddy; Schirmer, Lucas; Buck, Dorothea; Tackenberg, Björn; Rothhammer, Veit; Chan, Andrew; Gold, Ralf; Berthele, Achim; Bennett, Jeffrey L.; Korn, Thomas; Hemmer, Bernhard
2016-01-01
BACKGROUND Multiple sclerosis is a chronic inflammatory demyelinating disease of the central nervous system. Many findings suggest that the disease has an autoimmune pathogenesis; the target of the immune response is not yet known. METHODS We screened serum IgG from persons with multiple sclerosis to identify antibodies that are capable of binding to brain tissue and observed specific binding of IgG to glial cells in a subgroup of patients. Using a proteomic approach focusing on membrane proteins, we identified the ATP-sensitive inward rectifying potassium channel KIR4.1 as the target of the IgG antibodies. We used a multifaceted validation strategy to confirm KIR4.1 as a target of the autoantibody response in multiple sclerosis and to show its potential pathogenicity in vivo. RESULTS Serum levels of antibodies to KIR4.1 were higher in persons with multiple sclerosis than in persons with other neurologic diseases and healthy donors (P<0.001 for both comparisons). We replicated this finding in two independent groups of persons with multiple sclerosis or other neurologic diseases (P<0.001 for both comparisons). Analysis of the combined data sets indicated the presence of serum antibodies to KIR4.1 in 186 of 397 persons with multiple sclerosis (46.9%), in 3 of 329 persons with other neurologic diseases (0.9%), and in none of the 59 healthy donors. These antibodies bound to the first extracellular loop of KIR4.1. Injection of KIR4.1 serum IgG into the cisternae magnae of mice led to a profound loss of KIR4.1 expression, altered expression of glial fibrillary acidic protein in astrocytes, and activation of the complement cascade at sites of KIR4.1 expression in the cerebellum. CONCLUSIONS KIR4.1 is a target of the autoantibody response in a subgroup of persons with multiple sclerosis. (Funded by the German Ministry for Education and Research and Deutsche Forschungsgemeinschaft.) PMID:22784115
Effect of Methamphetamine Dependence on Heart Rate Variability
Henry, Brook L.; Minassian, Arpi; Perry, William
2010-01-01
Background: Methamphetamine (METH) is an increasingly popular and highly addictive stimulant associated with autonomic nervous system (ANS) dysfunction, cardiovascular pathology, and neurotoxicity. Heart rate variability (HRV) has been used to assess autonomic function and predict mortality in cardiac disorders and drug intoxication, but has not been characterized in METH use. We recorded HRV in a sample of currently abstinent individuals with a history of METH dependence compared to age- and gender-matched drug-free comparison subjects. Method: HRV was assessed using time domain, frequency domain, and nonlinear entropic analyses in 17 previously METH-dependent and 21 drug-free comparison individuals during a 5-minute rest period. Results: The METH-dependent group demonstrated significant reductions in HRV, reduced parasympathetic activity, and diminished heartbeat complexity relative to comparison participants. More recent METH use was associated with increased sympathetic tone. Conclusion: Chronic METH exposure may be associated with decreased HRV, impaired vagal function, and reduction in heart rate complexity as assessed by multiple methods of analysis. We discuss and review evidence that impaired HRV may be related to the cardiotoxic or neurotoxic effects of prolonged METH use. PMID:21182570
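For readers unfamiliar with the time-domain HRV measures mentioned above, the sketch below computes three standard ones from an RR-interval series. The definitions (SDNN, RMSSD, pNN50) are textbook; the data are simulated, not from the study, and the frequency-domain and entropy analyses the abstract also mentions are not reproduced here.

```python
# Illustrative sketch (not the authors' analysis code): standard
# time-domain HRV measures from RR intervals in milliseconds.
import numpy as np

def hrv_time_domain(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                      # overall variability
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),       # short-term (vagal) variability
        "pNN50": np.mean(np.abs(diffs) > 50) * 100,  # % successive diffs > 50 ms
    }

# Toy 5-minute recording: ~300 beats around 800 ms with mild variability.
rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 40, size=300)
print(hrv_time_domain(rr))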
The effect of multiple primary rules on cancer incidence rates and trends
Weir, Hannah K.; Johnson, Christopher J.; Ward, Kevin C.; Coleman, Michel P.
2018-01-01
Purpose: An examination of multiple primary cancers can provide insight into the etiologic role of genes, the environment, and prior cancer treatment on a cancer patient's risk of developing a subsequent cancer. Different rules for registering multiple primary cancers (MP) are used by cancer registries throughout the world, making data comparisons difficult. Methods: We evaluated the effect of SEER and IARC/IACR rules on cancer incidence rates and trends using data from the SEER Program. We estimated age-standardized incidence rates (ASIR) and trends (1975–2011) for the top 26 cancer categories using joinpoint regression analysis. Results: ASIRs were higher using SEER compared to IARC/IACR rules for all cancers combined (3 %) and, in rank order, melanoma (9 %), female breast (7 %), urinary bladder (6 %), colon (4 %), kidney and renal pelvis (4 %), oral cavity and pharynx (3 %), lung and bronchus (2 %), and non-Hodgkin lymphoma (2 %). ASIR differences were largest for patients aged 65+ years. Trends were similar using both MP rules with the exception of cancers of the urinary bladder, and kidney and renal pelvis. Conclusions: The choice of multiple primary coding rules affects incidence rates and trends. Compared to SEER MP coding rules, IARC/IACR rules are less complex, have not changed over time, and report fewer multiple primary cancers, particularly cancers that occur in paired organs, at the same anatomic site, and with the same or related histologic type. Cancer registries collecting incidence data using SEER rules may want to consider including incidence rates and trends using IARC/IACR rules to facilitate international data comparisons. PMID:26809509
Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric
2005-03-10
Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionarily consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
Resolution and Assignment of Differential Ion Mobility Spectra of Sarcosine and Isomers.
Berthias, Francis; Maatoug, Belkis; Glish, Gary L; Moussa, Fathi; Maitre, Philippe
2018-04-01
Due to their central role in biochemical processes, fast separation and identification of amino acids (AA) is of importance in many areas of the biomedical field, including the diagnosis and monitoring of inborn errors of metabolism and biomarker discovery. Because of the large number of AA together with their isomers and isobars, common methods of AA analysis are tedious and time-consuming, as they include a chromatographic separation step requiring pre- or post-column derivatization. Here, we propose a rapid method for separation and identification of sarcosine, a biomarker candidate of prostate cancer, from its isomers using differential ion mobility spectrometry (DIMS) interfaced with a tandem mass spectrometer (MS/MS) instrument. Baseline separation of protonated sarcosine from the α- and β-alanine isomers can be easily achieved. Identification of DIMS peaks is performed using an isomer-specific activation mode in which DIMS- and mass-selected ions are irradiated at selected wavenumbers, allowing for specific fragmentation via an infrared multiple photon dissociation (IRMPD) process. Two methods orthogonal to MS/MS are thus added, where MS/MS(IRMPD) is effectively an isomer-specific multiple reaction monitoring (MRM) method. The identification relies on the comparison of DIMS-MS/MS(IRMPD) chromatograms recorded at different wavenumbers. Based on the comparison of IR spectra of the three isomers, it is shown that specific depletion of the two protonated α- and β-alanine isomers can be achieved, thus allowing for clear identification of the sarcosine peak. It is also demonstrated that DIMS-MS/MS(IRMPD) spectra in the carboxylic C=O stretching region allow for the resolution of overlapping DIMS peaks.
Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B
2009-11-01
The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and to recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration, and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, a one-rinse step, and a 10:1 eluant-volume-to-sand-weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
Grow, Laura L; Kodak, Tiffany; Carr, James E
2014-01-01
Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore). The current study systematically replicated the earlier study by comparing the 2 approaches using progressive prompting with 2 boys with autism. The results showed that the conditional-only method was a more efficient and reliable teaching procedure than the simple-conditional method. The results further call into question the practice of teaching simple discriminations to facilitate acquisition of conditional discriminations. © Society for the Experimental Analysis of Behavior.
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…
Reporting of analyses from randomized controlled trials with multiple arms: a systematic review.
Baron, Gabriel; Perrodeau, Elodie; Boutron, Isabelle; Ravaud, Philippe
2013-03-27
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms. The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form. In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned. Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
Comparison of 3 kidney injury multiplex panels in rats.
John-Baptiste, Annette; Vitsky, Allison; Sace, Frederick; Zong, Qing; Ko, Mira; Yafawi, Rolla; Liu, Ling
2012-01-01
Kidney injury biomarkers have been utilized by pharmaceutical companies as a means to assess the potential of candidate drugs to induce nephrotoxicity. Multiple platforms and assay methods exist, but a comparison of these methods has not been described. Millipore's Kidney Toxicity panel, EMD/Novagen's Widescreen Kidney Toxicity panel, and Meso Scale's Kidney Injury panel were selected based on published information. Kidney injury molecule 1, cystatin C, clusterin, and osteopontin were the 4 biomarkers common among all kits tested and the focus of this study. Rats were treated with a low and a high dose of para-aminophenol, a known nephrotoxicant, and urine samples were collected and analyzed on the Bio-Plex 200 or MSD's Sector Imager 6000, according to the manufacturers' specifications. Comparatively, of the 3 kits, Millipore's was the most consistent in detecting elevations of 3 out of the 4 biomarkers at both dose levels and indicated time points.
Surface plasmon resonance spectroscopy sensor and methods for using same
Anderson, Brian Benjamin; Nave, Stanley Eugene
2002-01-01
A surface plasmon resonance ("SPR") probe with a detachable sensor head, together with systems and methods for using it in various applications, is described. The SPR probe couples fiber optic cables directly to an SPR substrate that has a generally planar input surface and a generally curved reflecting surface, such as a substrate formed as a hemisphere. Forming the SPR probe in this manner allows the probe to be miniaturized and to operate without the need for high-precision, expensive and bulky collimating or focusing optics. Additionally, the curved reflecting surface of the substrate can be coated with one or multiple patches of sensing medium to allow the probe to detect multiple analytes of interest or to provide multiple readings for comparison and higher precision. Specific applications for the probe are disclosed, including extremely sensitive relative humidity and dewpoint detection for, e.g., moisture-sensitive environments such as volatile chemical reactions. The SPR probe disclosed operates with a large dynamic range and provides extremely high quality spectra despite being robust enough for field deployment and readily manufacturable.
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1993-01-01
The construction of interferograms, schlieren, and shadowgraphs from computed flowfield solutions permits one-to-one comparisons of computed and experimental results. A method of constructing these images from both ideal- and real-gas, two- and three-dimensional computed flowfields is described. The computational grids can be structured or unstructured, and multiple grids are an option. Constructed images are shown for several types of computed flows including nozzle, wake, and reacting flows; comparisons to experimental images are also shown. In addition, the sensitivity of these images to errors in the flowfield solution is demonstrated, and the constructed images can be used to identify problem areas in the computations.
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
The construction of interferograms, schlieren, and shadowgraphs from computed flowfield solutions permits one-to-one comparisons of computed and experimental results. A method for constructing these images from both ideal- and real-gas, two- and three-dimensional computed flowfields is described. The computational grids can be structured or unstructured, and multiple grids are an option. Constructed images are shown for several types of computed flows including nozzle, wake, and reacting flows; comparisons to experimental images are also shown. In addition, the sensitivity of these images to errors in the flowfield solution is demonstrated, and the constructed images can be used to identify problem areas in the computations.
Baxter, Melissa; Withey, Sarah; Harrison, Sean; Segeritz, Charis-Patricia; Zhang, Fang; Atkinson-Dell, Rebecca; Rowe, Cliff; Gerrard, Dave T.; Sison-Young, Rowena; Jenkins, Roz; Henry, Joanne; Berry, Andrew A.; Mohamet, Lisa; Best, Marie; Fenwick, Stephen W.; Malik, Hassan; Kitteringham, Neil R.; Goldring, Chris E.; Piper Hanley, Karen; Vallier, Ludovic; Hanley, Neil A.
2015-01-01
Background & Aims: Hepatocyte-like cells (HLCs), differentiated from pluripotent stem cells by the use of soluble factors, can model human liver function and toxicity. However, at present, HLC maturity (and whether any deficit represents a true fetal state or aberrant differentiation) remains unclear, a question compounded by comparison to potentially deteriorated adult hepatocytes. Therefore, we generated HLCs from multiple lineages, using two different protocols, for direct comparison with fresh fetal and adult hepatocytes. Methods: Protocols were developed for robust differentiation. Multiple transcript, protein and functional analyses compared HLCs to fresh human fetal and adult hepatocytes. Results: HLCs were comparable to those of other laboratories by multiple parameters. Transcriptional changes during differentiation mimicked human embryogenesis and showed more similarity to pericentral than periportal hepatocytes. Unbiased proteomics demonstrated greater proximity to liver than 30 other human organs or tissues. However, by comparison to fresh material, HLC maturity was proven by transcript, protein and function to be fetal-like and short of the adult phenotype. The expression of 81% of phase 1 enzymes in HLCs was significantly upregulated, and half were statistically not different from fetal hepatocytes. HLCs secreted albumin and metabolized testosterone (CYP3A) and dextrorphan (CYP2D6) like fetal hepatocytes. In seven bespoke tests, devised by principal components analysis to distinguish fetal from adult hepatocytes, HLCs from two different source laboratories consistently demonstrated fetal characteristics. Conclusions: HLCs from different sources are broadly comparable, with unbiased proteomic evidence for faithful differentiation down the liver lineage. This current phenotype mimics human fetal rather than adult hepatocytes. PMID:25457200
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Bonelli, Maria Grazia; Ferrini, Mauro; Manni, Andrea
2016-12-01
The assessment of metals and organic micropollutants contamination in agricultural soils is a difficult challenge due to the extensive area over which samples must be collected and analyzed. Given the cost of dioxin and dioxin-like PCB measurement methods and of the subsequent data treatment, the European Community advises the development of low-cost and fast methods allowing routine analysis of a great number of samples, providing rapid measurement of these compounds in the environment, feeds and food. The aim of the present work has been to find a method suitable to describe the relations occurring between organic and inorganic contaminants and to use the values of the latter to forecast the former. In practice, the use of a portable soil metal analyzer coupled with an efficient statistical procedure enables the required objective to be achieved. Compared to Multiple Linear Regression, the Artificial Neural Networks technique has shown itself to be an excellent forecasting method, even though there is no linear correlation between the variables to be analyzed.
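A minimal sketch of the comparison being described, using simulated data and scikit-learn in place of whatever tooling the authors used; the nonlinear relation between "metal readings" and the "organic contaminant" target is an assumption chosen to make the MLR/ANN contrast visible.

```python
# Hedged sketch: multiple linear regression vs. a small neural network
# for predicting an organic contaminant level from metal concentrations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.uniform(0, 100, size=(300, 5))  # five simulated metal concentrations
# Nonlinear target plus noise, so MLR is handicapped by construction.
y = 2 * np.log1p(X[:, 0]) * np.sqrt(X[:, 1]) + rng.normal(0, 1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))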
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
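The core of the pseudo-multiple-replica idea can be sketched generically: push many synthetic noise realizations, correlated according to the measured channel covariance, through the reconstruction and read per-pixel noise off the stack. In the sketch below, `reconstruct` is a hypothetical placeholder for any linear parallel-imaging reconstruction (SENSE, GRAPPA, etc.); the root-sum-of-squares toy used in the demo is ours, purely for illustration.

```python
# Hedged sketch of a pseudo-multiple-replica noise measurement.
import numpy as np

def pseudo_replica_noise(reconstruct, kspace_shape, noise_cov, n_reps=256, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    L = np.linalg.cholesky(noise_cov)        # imposes the measured channel correlation
    n_ch = noise_cov.shape[0]
    recons = []
    for _ in range(n_reps):
        white = (rng.standard_normal((n_ch, *kspace_shape))
                 + 1j * rng.standard_normal((n_ch, *kspace_shape)))
        correlated = np.tensordot(L, white, axes=1)
        recons.append(reconstruct(correlated))
    return np.std(np.stack(recons), axis=0)  # pixelwise noise standard deviation

# Toy stand-in "reconstruction": inverse FFT plus root-sum-of-squares combine.
def rss(kspace):
    imgs = np.fft.ifft2(kspace, axes=(-2, -1))
    return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))

noise_map = pseudo_replica_noise(rss, (32, 32), np.eye(4), n_reps=64)
print(noise_map.shape)  # (32, 32); divide a signal image by this for pixelwise SNR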
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatically or semiautomatically filling gaps and reconstructing broken contour lines in binary images. The key part, auto-matching and reconnecting of end points, is discussed in depth after the reconstruction procedure is introduced; several key algorithms and mechanisms are presented and realized, including multiple incremental backward tracing to obtain the weighted average direction angle of end points, a maximum-constraint-angle control mechanism based on multiple gradient ranks, a combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, and bidirectional parabola control. Lastly, experimental comparisons on typical samples are conducted between the proposed method and another representative method; the results indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
NASA Astrophysics Data System (ADS)
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-01
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
Pre-treatment red blood cell distribution width provides prognostic information in multiple myeloma.
Zhou, Di; Xu, Peipei; Peng, Miaoxin; Shao, Xiaoyan; Wang, Miao; Ouyang, Jian; Chen, Bing
2018-06-01
The red blood cell distribution width (RDW), a credible marker of abnormal erythropoiesis, has recently been studied as a prognostic factor in oncology, but its role in multiple myeloma (MM) has not been thoroughly investigated. We performed a retrospective study of 162 patients with multiple myeloma. Categorical parameters were analyzed using the Pearson chi-squared test. The Mann-Whitney and Wilcoxon tests were used for group comparisons. Comparisons of repeated-samples data were analyzed with the general linear model repeated-measures procedure. The Kaplan-Meier product-limit method was used to determine OS and PFS, and the differences were assessed by the log-rank test. High RDW at baseline was significantly associated with indexes including haemoglobin, bone marrow plasma cell infiltration, and cytogenetic risk stratification. After chemotherapy, the overall response rate (ORR) decreased as baseline RDW increased. In 24 patients with high baseline RDW, the RDW value decreased when patients achieved complete remission (CR) but increased when the disease progressed. The normal-RDW baseline group showed both longer overall survival (OS) and progression-free survival (PFS) than the high-RDW baseline group. Our study suggests that the pre-treatment RDW level is a prognostic factor in MM and should be regarded as an important parameter for assessment of therapeutic efficiency. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Mertens, Christopher; Moyers, Michael; Walker, Steven; Tweed, John
Recent developments in NASA's High Charge and Energy Transport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. The new version of HZETRN based on Green function methods, GRNTRN, is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral scattering distributions with beam measurements taken at Loma Linda Medical University. The simulated and measured lateral proton distributions will be compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone, iron, and lead target materials.
Using Replicates in Information Retrieval Evaluation.
Voorhees, Ellen M; Samarov, Daniel; Soboroff, Ian
2017-09-01
This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions-something not possible without replicates-yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness.
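To make the design concrete, here is a hedged sketch with simulated per-replicate scores; an ordinary two-way ANOVA (via statsmodels) stands in for the bootstrap ANOVA the article develops, and the score model and effect sizes are invented. The point it illustrates is the one above: replicates make the topic-by-system interaction estimable, which tightens the estimate of the system effect.

```python
# Hedged sketch: replicates allow fitting a topic-by-system interaction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
systems, n_topics, n_reps = ["A", "B"], 20, 5
sys_eff = {"A": 0.00, "B": 0.03}                       # true system effect
topic_eff = rng.normal(0, 0.10, n_topics)              # topic difficulty
inter = rng.normal(0, 0.05, (len(systems), n_topics))  # topic-system interaction

rows = [
    dict(system=s, topic=t,
         score=0.5 + sys_eff[s] + topic_eff[t] + inter[i, t]
               + rng.normal(0, 0.02))                  # per-replicate noise
    for i, s in enumerate(systems)
    for t in range(n_topics)
    for _ in range(n_reps)
]
df = pd.DataFrame(rows)

model = smf.ols("score ~ C(system) + C(topic) + C(system):C(topic)", df).fit()
print(sm.stats.anova_lm(model, typ=2))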
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
Stochastic determination of matrix determinants
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
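The paper derives its own integral representation; as a generic illustration of stochastic log-determinant probing from matrix-vector products alone, the sketch below combines Hutchinson trace estimation with a truncated Taylor series for log(A). This is the standard flavor of the idea rather than the authors' formulation, and it is valid only for symmetric positive definite A whose spectrum lies inside (0, 2) so the series converges.

```python
# Hedged sketch: log det A = tr(log A), estimated as E_z[z^T log(A) z]
# with Rademacher probes z and log(A) = -sum_k (I - A)^k / k via matvecs.
import numpy as np

def stochastic_logdet(matvec, dim, n_probes=50, n_terms=100, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        v = z.copy()              # v = (I - A)^k z, starting at k = 0
        acc = 0.0
        for k in range(1, n_terms + 1):
            v = v - matvec(v)     # one more multiplication by (I - A)
            acc -= (z @ v) / k
        total += acc
    return total / n_probes

# Check against the exact answer on a small SPD matrix with spectrum in (0, 1].
rng = np.random.default_rng(3)
B = rng.standard_normal((100, 100)) / np.sqrt(100)
G = B @ B.T
A = 0.5 * np.eye(100) + 0.5 * G / (1 + np.linalg.norm(G, 2))
print(stochastic_logdet(lambda x: A @ x, 100, rng=rng))
print(np.linalg.slogdet(A)[1])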
NASA Astrophysics Data System (ADS)
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and the experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effect of different parameters on the frequency response of the system are investigated.
Using Replicates in Information Retrieval Evaluation
VOORHEES, ELLEN M.; SAMAROV, DANIEL; SOBOROFF, IAN
2018-01-01
This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions—something not possible without replicates—yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness. PMID:29905334
Stochastic determination of matrix determinants.
Dorn, Sebastian; Ensslin, Torsten A
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
Hyde, Jonathan M; DaCosta, Gérald; Hatzoglou, Constantinos; Weekes, Hannah; Radiguet, Bertrand; Styman, Paul D; Vurpillot, Francois; Pareige, Cristelle; Etienne, Auriane; Bonny, Giovanni; Castin, Nicolas; Malerba, Lorenzo; Pareige, Philippe
2017-04-01
Irradiation of reactor pressure vessel (RPV) steels causes the formation of nanoscale microstructural features (termed radiation damage), which affect the mechanical properties of the vessel. A key tool for characterizing these nanoscale features is atom probe tomography (APT), due to its high spatial resolution and the ability to identify different chemical species in three dimensions. Microstructural observations using APT can underpin development of a mechanistic understanding of defect formation. However, with atom probe analyses there are currently multiple methods for analyzing the data. This can result in inconsistencies between results obtained from different researchers and unnecessary scatter when combining data from multiple sources. This makes interpretation of results more complex and calibration of radiation damage models challenging. In this work simulations of a range of different microstructures are used to directly compare different cluster analysis algorithms and identify their strengths and weaknesses.
BEACON: automated tool for Bacterial GEnome Annotation ComparisON.
Kalkatawi, Manal; Alam, Intikhab; Bajic, Vladimir B
2015-08-18
Genome annotation is one way of summarizing the existing knowledge about the genomic characteristics of an organism. There has been increased interest during the last several decades in computer-based structural and functional genome annotation. Many methods for this purpose have been developed for eukaryotes and prokaryotes. Our study focuses on comparison of functional annotations of prokaryotic genomes. To the best of our knowledge there is no fully automated system for detailed comparison of functional genome annotations generated by different annotation methods (AMs). The presence of many AMs and the development of new ones introduce the need to: (a) compare different annotations for a single genome, and (b) generate annotation by combining individual ones. To address these issues we developed an Automated Tool for Bacterial GEnome Annotation ComparisON (BEACON) that benefits both AM developers and annotation analysers. BEACON provides detailed comparison of gene function annotations of prokaryotic genomes obtained by different AMs and generates extended annotations through combination of individual ones. To illustrate BEACON's utility, we provide a comparison analysis of multiple different annotations generated for four genomes and show through these examples that the extended annotation can increase the number of genes annotated with putative functions by up to 27%, while the number of genes without any function assignment is reduced. We developed BEACON, a fast tool for an automated and systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with unknown functions. BEACON is available under GNU General Public License version 3.0 and is accessible at: http://www.cbrc.kaust.edu.sa/BEACON/ .
Panelli, Simona; Damiani, Giuseppe; Espen, Luca; Micheli, Gioacchino; Sgaramella, Vittorio
2006-05-10
The development of methods for the analysis and comparison of the nucleic acids contained in single cells is an ambitious and challenging goal that may provide useful insights in many physiopathological processes. We review here some of the published protocols for the amplification of whole genomes (WGA). We focus on the reaction known as Multiple Displacement Amplification (MDA), which probably represents the most reliable and efficient WGA protocol developed to date. We discuss some recent advances and applications, as well as some modifications to the reaction, which should improve its use and enlarge its range of applicability possibly to degraded genomes, and also to RNA via complementary DNA.
Apparently abnormal Wechsler Memory Scale index score patterns in the normal population.
Carrasco, Roman Marcus; Grups, Josefine; Evans, Brittney; Simco, Edward; Mittenberg, Wiley
2015-01-01
Interpretation of the Wechsler Memory Scale-Fourth Edition may involve examination of multiple memory index score contrasts and similar comparisons with Wechsler Adult Intelligence Scale-Fourth Edition ability indexes. Standardization sample data suggest that 15-point differences between any specific pair of index scores are relatively uncommon in normal individuals, but these base rates refer to a comparison between a single pair of indexes rather than multiple simultaneous comparisons among indexes. This study provides normative data for the occurrence of multiple index score differences calculated by using Monte Carlo simulations and validated against standardization data. Differences of 15 points between any two memory indexes or between memory and ability indexes occurred in 60% and 48% of the normative sample, respectively. Wechsler index score discrepancies are normally common and therefore not clinically meaningful when numerous such comparisons are made. Explicit prior interpretive hypotheses are necessary to reduce the number of index comparisons and associated false-positive conclusions. Monte Carlo simulation accurately predicts these false-positive rates.
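The Monte Carlo logic described above is easy to reproduce in outline: simulate correlated index scores and count how often any pair differs by 15 or more points. In the sketch below, the uniform inter-index correlation is an illustrative assumption of ours, not the WMS-IV/WAIS-IV standardization value, so the printed base rate is not the study's figure.

```python
# Hedged sketch: base rate of "any pairwise index difference >= 15 points"
# among k correlated indexes (mean 100, SD 15), estimated by simulation.
import numpy as np

rng = np.random.default_rng(0)
n_indexes, n_sim = 5, 100_000
r = 0.6                                   # assumed uniform inter-index correlation
cov = np.full((n_indexes, n_indexes), r) + (1 - r) * np.eye(n_indexes)
cov *= 15 ** 2                            # index-score metric: SD of 15
scores = rng.multivariate_normal(np.full(n_indexes, 100.0), cov, size=n_sim)

# Some pair differs by >= 15 exactly when (max - min) >= 15 within a profile.
max_gap = scores.max(axis=1) - scores.min(axis=1)
print(f"P(any pairwise difference >= 15) = {(max_gap >= 15).mean():.2f}")
```

Raising the number of indexes compared, or lowering the assumed correlation, drives this base rate up, which is exactly the multiple-comparison point the abstract makes.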
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar to a query, typically a single molecule lead. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background data sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set by using several performance metrics including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule to a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel ETD and TPD methods, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
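A compact sketch of MAX-SIM, the simplest of the top-performing parameter-free methods named above, over binary fingerprints and Tanimoto similarity; the fingerprints here are randomly generated rather than computed from real molecules. (MIN-RANK would instead rank the database by each family member separately and keep each molecule's best rank.)

```python
# Hedged sketch of MAX-SIM scoring with bit-vector fingerprints.
import numpy as np

def tanimoto(a, b):
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

def max_sim_score(candidate, family):
    """MAX-SIM: score = best Tanimoto similarity to any family member."""
    return max(tanimoto(candidate, f) for f in family)

rng = np.random.default_rng(5)
family = rng.random((4, 1024)) < 0.1        # four query fingerprints
database = rng.random((1000, 1024)) < 0.1   # screening library

scores = np.array([max_sim_score(m, family) for m in database])
ranking = np.argsort(-scores)               # most family-like molecules first
print("top five database indices:", ranking[:5])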
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g. Radiative Transfer Theory and Monte Carlo methods), but they do not treat coherence properties of light directly. Some variations of these methods allow prediction of the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g. laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new efficient method for solving this problem. The method, presented in our earlier works, is based on solving the Fredholm type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g. by directly solving the corresponding linear equation system, as well as by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and with finite-difference type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
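As a generic illustration of the numerical route mentioned above (discretize the integral equation, then solve the linear system directly), here is a Nystrom solver for a Fredholm equation of the second kind. The kernel, source term, and coupling constant below are toy choices of ours; the physical scattering kernel of the actual method is not reproduced.

```python
# Hedged sketch: Nystrom discretization of u(x) = f(x) + lam * int K(x,t) u(t) dt
# on [-1, 1], solved as the linear system (I - lam * K * W) u = f.
import numpy as np

n = 200
x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes and weights

lam = 0.5
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # toy kernel K(x, t)
f = np.cos(np.pi * x)                         # toy source term

A = np.eye(n) - lam * K * w[None, :]          # quadrature weights fold into columns
u = np.linalg.solve(A, f)                     # direct solve; iterative solvers also work
print(u[:5])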
Mortality in a Combined Cohort of Uranium Enrichment Workers
Yiin, James H.; Anderson, Jeri L.; Daniels, Robert D.; Bertke, Stephen J.; Fleming, Donald A.; Tollerud, David J.; Tseng, Chih-Yu; Chen, Pi-Hsueh; Waters, Kathleen M.
2017-01-01
Objective To examine the patterns of cause-specific mortality and relationship between internal exposure to uranium and specific causes in a pooled cohort of 29,303 workers employed at three former uranium enrichment facilities in the United States with follow-up through 2011. Methods Cause-specific standardized mortality ratios (SMRs) for the full cohort were calculated with the U.S. population as referent. Internal comparison of the dose-response relation between selected outcomes and estimated organ doses was evaluated using regression models. Results External comparison with the U.S. population showed significantly lower SMRs in most diseases in the pooled cohort. Internal comparison showed positive associations of absorbed organ doses with multiple myeloma, and to a lesser degree with kidney cancer. Conclusion In general, these gaseous diffusion plant workers had significantly lower SMRs than the U.S. population. The internal comparison however, showed associations between internal organ doses and diseases associated with uranium exposure in previous studies. PMID:27753121
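For readers unfamiliar with SMRs, the calculation reduces to observed deaths divided by the deaths expected when reference-population rates are applied to the cohort's person-years, usually with an exact Poisson interval. All numbers in the sketch below are invented for illustration; none come from the study.

```python
# Back-of-envelope SMR sketch with an exact (Garwood) Poisson 95% CI.
import numpy as np
from scipy import stats

person_years = np.array([40_000, 55_000, 30_000])  # cohort person-years by age stratum
ref_rates = np.array([1.2, 3.5, 9.8]) / 1e4        # reference death rates per 10,000 PY
observed = 48                                      # deaths observed in the cohort

expected = (person_years * ref_rates).sum()
smr = observed / expected
lo = stats.chi2.ppf(0.025, 2 * observed) / 2 / expected
hi = stats.chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")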
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
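The core transform of NN-SMS12L, as described above, can be sketched as follows. The monthly statistics for the ungaged site are passed in directly here (in the actual method they come from the 24 regional regressions), the function and variable names are ours, and all data are simulated.

```python
# Hedged sketch: standardize donor-gage log-flows by that gage's monthly
# mean/SD, then destandardize with the ungaged site's monthly statistics.
import numpy as np
import pandas as pd

def transfer_flow(donor_flow, target_mu, target_sigma):
    """donor_flow: daily flow Series indexed by date; target_mu/target_sigma:
    length-12 arrays of monthly mean/SD of log-flow at the ungaged site."""
    log_q = np.log(donor_flow)
    months = donor_flow.index.month
    mu_d = log_q.groupby(months).transform("mean")
    sd_d = log_q.groupby(months).transform("std")
    z = (log_q - mu_d) / sd_d                        # donor standardized series
    return np.exp(z * target_sigma[months - 1] + target_mu[months - 1])

idx = pd.date_range("2010-01-01", "2010-12-31", freq="D")
donor = pd.Series(np.exp(np.random.default_rng(2).normal(3, 0.5, len(idx))), index=idx)
est = transfer_flow(donor, target_mu=np.full(12, 2.5), target_sigma=np.full(12, 0.4))
print(est.head())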
Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas
2016-04-01
Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods to conduct imputation, have compared them on real or simulated data sets, and have recommended a list of missing value imputation methods for proteomics applications. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of different natures and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general but instead the most appropriate one. We describe a series of comparisons that support our views: for instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state-of-the-art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.
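A toy simulation, not the authors' code, makes the point that the appropriate imputation depends on the missingness mechanism: a mean fill suits values missing completely at random (MCAR), while a low-value "MinDet"-style fill suits the left-censored missingness (MNAR) typical of low-abundance peptides.

```python
# Toy comparison of imputation strategies under MCAR vs. MNAR missingness.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(20, 2, size=1000)  # hypothetical log-intensities

def mean_abs_error(x_missing, fill):
    return np.abs(fill - truth[np.isnan(x_missing)]).mean()

# MCAR: drop 20% of values uniformly at random.
mcar = truth.copy()
mcar[rng.random(truth.size) < 0.2] = np.nan
# MNAR: censor the lowest 20% of intensities (detection-limit-like).
mnar = truth.copy()
mnar[truth < np.quantile(truth, 0.2)] = np.nan

for name, x in [("MCAR", mcar), ("MNAR", mnar)]:
    print(name,
          "mean-impute err:", round(mean_abs_error(x, np.nanmean(x)), 2),
          "min-impute err:", round(mean_abs_error(x, np.nanmin(x)), 2))
# Mean imputation wins under MCAR; the MinDet-style minimum wins under MNAR.
```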
NF-kB2/p52 Activation and Androgen Receptor Signaling in Prostate Cancer
2010-08-01
[Abstract garbled by report-form extraction; the recoverable methods fragment states that data are shown as the mean ± SD and that multiple group comparison was performed by one-way ANOVA followed by a post hoc test (name truncated in the source).]
MultiSETTER: web server for multiple RNA structure comparison.
Čech, Petr; Hoksza, David; Svozil, Daniel
2015-08-12
Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on sequence or on secondary structure motifs.
Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment
2011-01-01
Background Many bioinformatics studies begin with a multiple sequence alignment as the foundation for their research. This is because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence-structure relationships. Results In this paper, we have proposed a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, and then solve them individually using a guide tree approach. Finally, we combine all the subsequences to generate a new multiple sequence alignment. This technique is applied to the solutions of the initial generation and of each child generation within VDGA. We have used two mechanisms to generate an initial population in this research: the first mechanism is to generate guide trees with randomly selected sequences and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we have compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also other methods based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0. Conclusions The experimental results showed that the VDGA with three vertical divisions was the most successful variant for most of the test cases in comparison to the other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research. PMID:21867510
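A conceptual sketch of the vertical decomposition step follows; the `align` routine here is a gap-padding placeholder standing in for the guide-tree alignment that VDGA applies to each subproblem, so the output illustrates only the decompose, align, and concatenate mechanics rather than alignment quality.

```python
# Conceptual sketch of VDGA-style vertical decomposition (placeholder aligner).
def vertical_decompose(seqs, n_parts=3):
    """Split each sequence into n_parts contiguous positional slices."""
    parts = []
    for k in range(n_parts):
        part = []
        for s in seqs:
            step = len(s) / n_parts
            part.append(s[round(k * step):round((k + 1) * step)])
        parts.append(part)
    return parts

def align(seqs):
    """Placeholder aligner: right-pad with gaps to a common length."""
    width = max(len(s) for s in seqs)
    return [s.ljust(width, "-") for s in seqs]

def vdga_style_alignment(seqs, n_parts=3):
    # Align each vertical slice independently, then concatenate column blocks.
    aligned_parts = [align(p) for p in vertical_decompose(seqs, n_parts)]
    return ["".join(rows) for rows in zip(*aligned_parts)]

print(vdga_style_alignment(["ACCGTT", "ACGT", "CCGTAT"]))
```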
ERIC Educational Resources Information Center
Stipancic, Kaila L.; Tjaden, Kris; Wilding, Gregory
2016-01-01
Purpose: This study obtained judgments of sentence intelligibility using orthographic transcription for comparison with previously reported intelligibility judgments obtained using a visual analog scale (VAS) for individuals with Parkinson's disease and multiple sclerosis and healthy controls (K. Tjaden, J. E. Sussman, & G. E. Wilding, 2014).…
Multiple sensor fault diagnosis for dynamic processes.
Li, Cheng-Chih; Jeng, Jyh-Cheng
2010-10-01
Modern industrial plants are usually large scaled and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary to process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector of the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The study further utilizes that vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
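A minimal numpy sketch of the projection idea follows; this is not the authors' exact formulation. The observed deviation vector is regressed onto the normalized fault directions of a hypothetical basic sensor fault matrix, and large weights flag the faulty sensors.

```python
# Minimal sketch: project a deviation vector onto basic fault directions.
import numpy as np

n_sensors = 5
# Hypothetical BSFM: each normalized column is the steady-state signature
# of a bias fault in one sensor (identity for simplicity here).
bsfm = np.eye(n_sensors)

def fault_weights(deviation, bsfm):
    # Least-squares weights of the deviation on the fault directions.
    w, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
    return w

deviation = np.array([0.1, 3.2, -0.05, 2.7, 0.0])  # sensors 2 and 4 (1-based) biased
w = fault_weights(deviation, bsfm)
faulty = np.flatnonzero(np.abs(w) > 1.0)           # illustrative threshold
print(w, faulty)                                   # -> flags indices [1, 3]
```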
An Object-oriented Taxonomy of Medical Data Presentations
Starren, Justin; Johnson, Stephen B.
2000-01-01
A variety of methods have been proposed for presenting medical data visually on computers. Discussion of and comparison among these methods have been hindered by a lack of consistent terminology. A taxonomy of medical data presentations based on object-oriented user interface principles is presented. Presentations are divided into five major classes—list, table, graph, icon, and generated text. These are subdivided into eight subclasses with simple inheritance and four subclasses with multiple inheritance. The various subclasses are reviewed and examples are provided. Issues critical to the development and evaluation of presentations are also discussed. PMID:10641959
A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies.
Villena-Martínez, Víctor; Fuster-Guilló, Andrés; Azorín-López, Jorge; Saval-Calvo, Marcelo; Mora-Pascual, Jeronimo; Garcia-Rodriguez, Jose; Garcia-Garcia, Alberto
2017-01-27
RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases, they work at the limit of their sensitivity, near to the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, a comparison of the three most used calibration methods has been applied to three different RGB-D sensors based on structured light and time-of-flight. The comparison of methods has been carried out by a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as an example of an application for which the sensor works at the limit of its sensitivity. The obtained results of reconstruction have been evaluated through visual inspection and quantitative measurements.
Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A
2000-06-01
After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to accurately analyse the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.
Method and apparatus for timing of laser beams in a multiple laser beam fusion system
Eastman, Jay M.; Miller, Theodore L.
1981-01-01
The optical path lengths of a plurality of comparison laser beams directed to impinge upon a common target from different directions are compared to that of a master laser beam by using an optical heterodyne interferometric detection technique. The technique consists of frequency shifting the master laser beam and combining the master beam with a first one of the comparison laser beams to produce a time-varying heterodyne interference pattern which is detected by a photo-detector to produce an AC electrical signal indicative of the difference in the optical path lengths of the two beams which were combined. The optical path length of this first comparison laser beam is adjusted to compensate for the detected difference in the optical path lengths of the two beams. The optical path lengths of all of the comparison laser beams are made equal to the optical path length of the master laser beam by repeating the optical path length adjustment process for each of the comparison laser beams. In this manner, the comparison laser beams are synchronized or timed to arrive at the target within ±1×10⁻¹² second of each other.
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. Its accuracy and validity in computing the initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by the multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed by using the proposed model.
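The equilibrium step reduces to solving a nonlinear system F(x) = 0. A generic Newton-Raphson sketch with a finite-difference Jacobian is shown below on a stand-in F; a real catenary solver would assemble F from the cable and truss element expressions and use the analytical tangent stiffness.

```python
# Generic Newton-Raphson iteration for F(x) = 0 (stand-in F with a known root).
import numpy as np

def newton_raphson(F, x0, tol=1e-10, max_iter=50, h=1e-6):
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian; an analytical tangent stiffness
        # matrix would be used in a real catenary solver.
        J = np.column_stack([(F(x + h * e) - r) / h for e in np.eye(len(x))])
        x -= np.linalg.solve(J, r)
    return x

F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])
print(newton_raphson(F, np.array([2.0, 0.5])))  # -> approximately [1., 1.]
```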
Jansen, Jeroen P; Fleurence, Rachael; Devine, Beth; Itzler, Robbin; Barrett, Annabel; Hawkins, Neil; Lee, Karen; Boersma, Cornelis; Annemans, Lieven; Cappelleri, Joseph C
2011-06-01
Evidence-based health-care decision making requires comparisons of all relevant competing interventions. In the absence of randomized, controlled trials involving a direct comparison of all treatments of interest, indirect treatment comparisons and network meta-analysis provide useful evidence for judiciously selecting the best choice(s) of treatment. Mixed treatment comparisons, a special case of network meta-analysis, combine direct and indirect evidence for particular pairwise comparisons, thereby synthesizing a greater share of the available evidence than a traditional meta-analysis. This report from the ISPOR Indirect Treatment Comparisons Good Research Practices Task Force provides guidance on the interpretation of indirect treatment comparisons and network meta-analysis to assist policymakers and health-care professionals in using its findings for decision making. We start with an overview of how networks of randomized, controlled trials allow multiple treatment comparisons of competing interventions. Next, an introduction to the synthesis of the available evidence with a focus on terminology, assumptions, validity, and statistical methods is provided, followed by advice on critically reviewing and interpreting an indirect treatment comparison or network meta-analysis to inform decision making. We finish with a discussion of what to do if there are no direct or indirect treatment comparisons of randomized, controlled trials possible and a health-care decision still needs to be made. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
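One building block of such analyses is easy to state concretely: the Bucher adjusted indirect comparison. With hypothetical trial estimates of A versus a common comparator C and of B versus C on the log odds-ratio scale, the indirect A-versus-B contrast is their difference and the variances add:

```python
# Bucher adjusted indirect comparison with made-up summary estimates.
import math

d_AC, se_AC = -0.40, 0.12   # hypothetical: log OR of A vs. common comparator C
d_BC, se_BC = -0.15, 0.15   # hypothetical: log OR of B vs. C

d_AB = d_AC - d_BC                      # indirect contrast
se_AB = math.sqrt(se_AC**2 + se_BC**2)  # variances of independent estimates add
ci = (d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB)
print(f"indirect log OR A vs B = {d_AB:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```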
Multiple comparisons in drug efficacy studies: scientific or marketing principles?
Leo, Jonathan
2004-01-01
When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the case of the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors only presented 6 months of data. The second involves the NIMH Multimodal Treatment Study (MTA) evaluating the efficacy of stimulant medication for attention-deficit hyperactivity disorder, where ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, R.C.; Caldwell, H.D.
1985-01-01
The objective of this study was to determine if in-gel chloramine-T radioiodination adequately labels OM proteins to allow for accurate and precise structural comparison of these molecules. Therefore, intrinsically ¹⁴C-amino acid labeled proteins and ¹²⁵I-labeled proteins were cleaved with two endopeptidic reagents and the peptide fragments separated by HPLC. A comparison of retention times of the fragments, as determined by differential radiation counting, thus indicated whether ¹²⁵I-labeling identified all of the peptide peaks seen in the ¹⁴C-labeled proteins. Results demonstrated that radioiodination yields complete and accurate information about the primary structure of outer membrane proteins. In addition, it permits the use of extremely small amounts of protein, allowing for method optimization and multiple separations to ensure reproducibility.
Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.
Natarajan, Shreedhar; Jakobsson, Eric
2009-06-12
A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.
Najafi, Mostafa; Akouchekian, Shahla; Ghaderi, Alireza; Mahaki, Behzad; Rezaei, Mariam
2017-01-01
Background: Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate the multiple intelligences profiles of children with ADHD in comparison with non-ADHD children. Materials and Methods: This cross-sectional descriptive analytical study was done on 50 children aged 6-13 years in two groups, with and without ADHD. Children with ADHD had been referred to the Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Sciences, in 2014. Samples were selected based on clinical interview (based on the Diagnostic and Statistical Manual of Mental Disorders IV and the parent-teacher strengths and difficulties questionnaire), performed by a psychiatrist and a psychologist. The Raven intelligence quotient (IQ) test was used, and the findings were compared with the results of a multiple intelligences test. Data analysis was done with multivariate analysis of covariance using SPSS20 software. Results: Comparing the multiple intelligences profiles of the two groups, scores were higher in the control group than in the ADHD group, a difference that was significant for logical, interpersonal, and intrapersonal intelligence (P < 0.05); there was no significant difference between the two groups for the other kinds of multiple intelligences (P > 0.05). The mean IQ score was 102.42 ± 16.26 in the control group and 96.72 ± 16.06 in the ADHD group, suggesting a negative effect of ADHD on mean IQ. The relationship with linguistic and naturalist intelligence was not significant (P > 0.05); however, for the other kinds of multiple intelligences, direct and significant relationships were observed (P < 0.05). Conclusions: Since the levels of IQ (Raven test) and MI were higher in the control group than in the ADHD group, ADHD is likely to be associated with lower logical-mathematical, interpersonal, and intrapersonal profiles. PMID:29285478
Prado, Jérôme; Mutreja, Rachna; Zhang, Hongchuan; Mehta, Rucha; Desroches, Amy S.; Minas, Jennifer E.; Booth, James R.
2010-01-01
It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionarily older neural mechanisms. A central assumption of this hypothesis is that the degree to which a pre-existing mechanism is recycled depends upon the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing, including the middle temporal gyrus and inferior frontal gyrus, that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts inter-individual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the co-option of evolutionarily older neural circuits. PMID:21246667
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is also evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions of the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by the conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for CLD to improve the system performance. It is shown that numerical simulations of the average PER and SE are consistent with the theoretical analysis, and that the developed CLD with an average PER constraint can meet the target PER requirement and shows better performance in comparison with the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method can noticeably increase the system SE and greatly reduce the impact of feedback delay.
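To illustrate how switching thresholds determine average spectral efficiency, here is a simplified single-antenna sketch with made-up thresholds, using the exponential SNR distribution of Rayleigh fading; the paper's contribution is optimizing such thresholds under an average-PER constraint for MIMO with outdated estimates.

```python
# Average spectral efficiency of threshold-based adaptive modulation over
# Rayleigh fading (exponential SNR). Thresholds and rates are hypothetical.
import math

def average_se(thresholds, rates, mean_snr):
    """thresholds[k] is the SNR where mode k (rates[k] bits/symbol) switches on."""
    edges = list(thresholds) + [float("inf")]
    se = 0.0
    for k, rate in enumerate(rates):
        # P(edges[k] <= SNR < edges[k+1]) for exponentially distributed SNR.
        p = math.exp(-edges[k] / mean_snr) - math.exp(-edges[k + 1] / mean_snr)
        se += rate * p
    return se

print(average_se(thresholds=[3.0, 8.0, 16.0], rates=[1, 2, 4], mean_snr=10.0))
```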
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved able to reduce blocky effects by being aware of high-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
Diffusion blotting: a rapid and simple method for production of multiple blots from a single gel.
Olsen, Ingrid; Wiker, Harald G
2015-01-01
A very simple and fast method for diffusion blotting of proteins from precast SDS-PAGE gels on a solid plastic support was developed. Diffusion blotting for 3 min gives a quantitative transfer of 10% compared with 1-h electroblotting. For each subsequent blot from the same gel, a doubling of transfer time is necessary to obtain the same amount of protein on each blot. High- and low-molecular-weight components are transferred equally efficiently when compared to electroblotting. However, both methods do give a higher total transfer of the low-molecular-weight proteins compared to the large proteins. The greatest advantage of diffusion blotting is that several blots can be made from each lane, thus enabling testing of multiple antisera on virtually identical blots. The gel remains on the plastic support, which prevents it from stretching or shrinking. This ensures identical blots and facilitates more reliable molecular weight determination. Furthermore, the proteins remaining in the gel can be stained with Coomassie Brilliant Blue or other methods for exact and easy comparison with the developed blots. These advantages make diffusion blotting the method of choice when quantitative protein transfer is not required.
Optical bandgap of single- and multi-layered amorphous germanium ultra-thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Pei; Zaslavsky, Alexander; Longo, Paolo
2016-01-07
Accurate optical methods are required to determine the energy bandgap of amorphous semiconductors and elucidate the role of quantum confinement in nanometer-scale, ultra-thin absorbing layers. Here, we provide a critical comparison between well-established methods that are generally employed to determine the optical bandgap of thin-film amorphous semiconductors, starting from normal-incidence reflectance and transmittance measurements. First, we demonstrate that a more accurate estimate of the optical bandgap can be achieved by using a multiple-reflection interference model. We show that this model generates more reliable results compared to the widely accepted single-pass absorption method. Second, we compare the two most representative methods (Tauc and Cody plots) that are extensively used to determine the optical bandgap of thin-film amorphous semiconductors starting from the extracted absorption coefficient. Analysis of the experimental absorption data acquired for ultra-thin amorphous germanium (a-Ge) layers demonstrates that the Cody model is able to provide a less ambiguous energy bandgap value. Finally, we apply our proposed method to experimentally determine the optical bandgap of a-Ge/SiO₂ superlattices with single and multiple a-Ge layers down to 2 nm thickness.
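A sketch of the two extrapolations follows, on synthetic absorption data that deliberately follow the Tauc form (so the Tauc fit recovers the bandgap exactly here); on real a-Ge data the paper argues the Cody plot is the less ambiguous choice.

```python
# Tauc vs. Cody bandgap extrapolation on synthetic absorption data.
# Tauc plot: fit (alpha*E)^(1/2) vs. photon energy E;
# Cody plot: fit (alpha/E)^(1/2); the x-axis intercept estimates Eg.
import numpy as np

E = np.linspace(1.0, 2.0, 50)   # photon energy (eV)
Eg_true = 1.2
alpha = np.where(E > Eg_true, 1e4 * (E - Eg_true)**2 / E, 0.0)  # Tauc-form model

def bandgap_from_plot(E, y, fit_mask):
    slope, intercept = np.polyfit(E[fit_mask], y[fit_mask], 1)
    return -intercept / slope   # x-axis intercept of the linear fit

mask = alpha > 0
tauc_Eg = bandgap_from_plot(E, np.sqrt(alpha * E), mask)  # exact here by construction
cody_Eg = bandgap_from_plot(E, np.sqrt(alpha / E), mask)  # deviates for Tauc-form data
print(tauc_Eg, cody_Eg)
```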
Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.
2009-01-01
Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, a one-rinse step and a 10 : 1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.
Dong, Ming; Fisher, Carolyn; Añez, Germán; Rios, Maria; Nakhasi, Hira L.; Hobson, J. Peyton; Beanan, Maureen; Hockman, Donna; Grigorenko, Elena; Duncan, Robert
2016-01-01
Aims To demonstrate standardized methods for spiking pathogens into human matrices for evaluation and comparison among diagnostic platforms. Methods and Results This study presents detailed methods for spiking bacteria or protozoan parasites into whole blood and virus into plasma. Proper methods must start with a documented, reproducible pathogen source followed by steps that include standardized culture, preparation of cryopreserved aliquots, quantification of the aliquots by molecular methods, production of sufficient numbers of individual specimens and testing of the platform with multiple mock specimens. Results are presented following the described procedures that showed acceptable reproducibility comparing in-house real-time PCR assays to a commercially available multiplex molecular assay. Conclusions A step by step procedure has been described that can be followed by assay developers who are targeting low prevalence pathogens. Significance and Impact of Study The development of diagnostic platforms for detection of low prevalence pathogens such as biothreat or emerging agents is challenged by the lack of clinical specimens for performance evaluation. This deficit can be overcome using mock clinical specimens made by spiking cultured pathogens into human matrices. To facilitate evaluation and comparison among platforms, standardized methods must be followed in the preparation and application of spiked specimens. PMID:26835651
OrthoMCL: Identification of Ortholog Groups for Eukaryotic Genomes
Li, Li; Stoeckert, Christian J.; Roos, David S.
2003-01-01
The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of “recent” paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. PMID:12952885
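The Markov Cluster step at the heart of OrthoMCL can be sketched compactly on a toy similarity matrix; OrthoMCL itself builds the graph from reciprocal BLAST relationships and applies species-aware normalization before clustering.

```python
# Compact Markov Cluster (MCL) sketch on a toy adjacency matrix.
import numpy as np

def mcl(adjacency, inflation=2.0, n_iter=50):
    M = adjacency + np.eye(len(adjacency))      # add self-loops
    M = M / M.sum(axis=0)                       # make column-stochastic
    for _ in range(n_iter):
        M = M @ M                               # expansion
        M = M ** inflation                      # inflation (elementwise power)
        M = M / M.sum(axis=0)                   # re-normalize columns
    # Nonzero entries of each remaining row identify one cluster.
    return {tuple(np.flatnonzero(row > 1e-6)) for row in M if row.sum() > 1e-6}

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(mcl(A))  # -> two clusters: (0, 1, 2) and (3, 4)
```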
NASA Astrophysics Data System (ADS)
Patzelt, A.; Sterry, W.; Lademann, J.
2010-12-01
A major function of the skin is to provide a protective barrier at the interface between the external environment and the organism. For skin barrier measurement, a multiplicity of methods is available. As standard methods, the determination of transepidermal water loss (TEWL) as well as the measurement of stratum corneum hydration are widely accepted, although they have some obvious disadvantages, such as susceptibility to interference. Recently, new optical and spectroscopic methods have been introduced to investigate skin barrier properties in vivo. In particular, laser scanning microscopy has been shown to be an excellent tool for studying skin barrier integrity in many areas of relevance, such as cosmetology, occupational settings, diseased skin, and wound healing.
Yanagisawa, Yukio; Matsuo, Yoshimi; Shuntoh, Hisato; Horiuchi, Noriaki
2014-01-01
[Purpose] The purpose of this study was to elucidate the effect of expiratory resistive loading on orbicularis oris muscle activity. [Subjects] Subjects were 23 healthy individuals (11 males, mean age 25.5±4.3 years; 12 females, mean age 25.0±3.0 years). [Methods] Surface electromyography was performed to measure the activity of the orbicularis oris muscle during maximum lip closure and resistive loading at different expiratory pressures. Measurement was performed at 10%, 30%, 50%, and 100% of maximum expiratory pressure (MEP) for all subjects. The t-test was used to compare muscle activity between maximum lip closure and 100% MEP, and analysis of variance followed by multiple comparisons was used to compare the muscle activities observed at different expiratory pressures. [Results] No significant difference in muscle activity was observed between maximum lip closure and 100% MEP. Analysis of variance with multiple comparisons revealed significant differences among the different expiratory pressures. [Conclusion] Orbicularis oris muscle activity increased with increasing expiratory resistive loading. PMID:24648644
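The analysis pattern described, an omnibus one-way ANOVA followed by pairwise multiple comparisons, looks like this on simulated EMG-like data; the specific post hoc procedure used by the authors is not assumed here, and Tukey's HSD is shown as one common choice:

```python
# One-way ANOVA followed by pairwise multiple comparisons (simulated data).
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
levels = ["10% MEP", "30% MEP", "50% MEP", "100% MEP"]
means = [20, 35, 55, 80]  # hypothetical mean muscle activity per pressure level
data = pd.DataFrame({
    "activity": np.concatenate([rng.normal(m, 10, 23) for m in means]),
    "pressure": np.repeat(levels, 23),
})

groups = [g["activity"].values for _, g in data.groupby("pressure")]
print(f_oneway(*groups))                                         # omnibus test
print(pairwise_tukeyhsd(data["activity"], data["pressure"], alpha=0.05))
```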
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR) control, it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
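Independent of the Gaussian random field machinery above, the behavior of the two baseline corrections it is compared against is easy to demonstrate: on simulated p-values with a few true signals, Bonferroni and Benjamini-Hochberg FDR control reject quite differently.

```python
# Bonferroni vs. Benjamini-Hochberg FDR on simulated p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
p_null = rng.uniform(size=9990)                  # true null hypotheses
p_signal = rng.uniform(high=1e-4, size=10)       # strong true effects
pvals = np.concatenate([p_null, p_signal])

for method in ("bonferroni", "fdr_bh"):
    reject, *_ = multipletests(pvals, alpha=0.05, method=method)
    print(method, "discoveries:", reject.sum())  # FDR typically rejects more
```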
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied with comparison of several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.
Analysis of a dual-reflector antenna system using physical optics and digital computers
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1972-01-01
The application of physical-optics diffraction theory to a deployable dual-reflector geometry is discussed. The methods employed are not restricted to the Conical-Gregorian antenna, but apply in a general way to dual and even multiple reflector systems. Complex vector wave methods are used in the Fresnel and Fraunhofer regions of the reflectors. Field amplitude, phase, polarization data, and time-average Poynting vectors are obtained via an IBM 360/91 digital computer. Focal region characteristics are plotted with the aid of a CalComp plotter. Comparison between the GSFC Huygens wavelet approach, JPL measurements, and JPL computer results based on the near-field spherical wave expansion method is made wherever possible.
Observing system simulation experiments with multiple methods
NASA Astrophysics Data System (ADS)
Ishibashi, Toshiyuki
2014-11-01
An Observing System Simulation Experiment (OSSE) is a method to evaluate the impact of hypothetical observing systems on analysis and forecast accuracy in numerical weather prediction (NWP) systems. Since an OSSE requires simulations of hypothetical observations, the uncertainty of OSSE results is generally larger than that of observing system experiments (OSEs). To reduce such uncertainty, OSSEs for existing observing systems are often carried out as calibration of the OSSE system. The purpose of this study is to achieve reliable OSSE results based on the results of OSSEs with multiple methods. There are three types of OSSE methods. The first is the sensitivity observing system experiment (SOSE) based OSSE (SOSE-OSSE). The second is the ensemble of data assimilation cycles (ENDA) based OSSE (ENDA-OSSE). The third is the nature-run (NR) based OSSE (NR-OSSE). These three OSSE methods have very different properties. The NR-OSSE evaluates hypothetical observations in a virtual (hypothetical) world, the NR. The ENDA-OSSE is a very simple method but has a sampling error problem due to a small ensemble size. The SOSE-OSSE requires a very highly accurate analysis field as a pseudo truth of the real atmosphere. We construct these three types of OSSE methods in the Japan Meteorological Agency (JMA) global 4D-Var experimental system. At the conference, we will present initial results of these OSSE systems and their comparisons.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
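For intuition, the likelihood in question can also be maximized numerically in a simple special case: per-method means with estimated standard errors and an unknown between-method variance (the paper instead characterizes the stationary points algebraically via a Groebner basis). All numbers below are hypothetical.

```python
# Numerical ML for a random-effects combination of per-method means.
import numpy as np
from scipy.optimize import minimize

y = np.array([10.1, 10.4, 9.7])     # hypothetical per-method means
s2 = np.array([0.02, 0.05, 0.03])   # hypothetical squared standard errors

def neg_loglik(params):
    mu, log_tau2 = params
    v = s2 + np.exp(log_tau2)       # total variance: within + between methods
    return 0.5 * np.sum(np.log(v) + (y - mu) ** 2 / v)

res = minimize(neg_loglik, x0=[y.mean(), np.log(y.var())], method="Nelder-Mead")
mu_hat, tau2_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, tau2_hat)             # consensus mean and between-method variance
```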
Pairwise Multiple Comparisons in Single Group Repeated Measures Analysis.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
Research was conducted to provide educational researchers with a choice of pairwise multiple comparison procedures (P-MCPs) to use with single group repeated measures designs. The following were studied through two Monte Carlo (MC) simulations: (1) The T procedure of J. W. Tukey (1953); (2) a modification of Tukey's T (G. Keppel, 1973); (3) the…
McDermott, Imelda; Checkland, Kath; Harrison, Stephen; Snow, Stephanie; Coleman, Anna
2013-01-01
The language used by National Health Service (NHS) "commissioning" managers when discussing their roles and responsibilities can be seen as a manifestation of "identity work", defined as a process of identifying. This paper aims to offer a novel approach to analysing "identity work" by triangulation of multiple analytical methods, combining analysis of the content of text with analysis of its form. Fairclough's discourse analytic methodology is used as a framework. Following Fairclough, the authors use analytical methods associated with Halliday's systemic functional linguistics. While analysis of the content of interviews provides some information about NHS Commissioners' perceptions of their roles and responsibilities, analysis of the form of discourse that they use provides a more detailed and nuanced view. Overall, the authors found that commissioning managers have a higher level of certainty about what commissioning is not rather than what commissioning is; GP managers have a high level of certainty of their identity as a GP rather than as a manager; and both GP managers and non-GP managers oscillate between multiple identities depending on the different situations they are in. This paper offers a novel approach to triangulation, based not on the usual comparison of multiple data sources, but rather based on the application of multiple analytical methods to a single source of data. This paper also shows the latent uncertainty about the nature of commissioning enterprise in the English NHS.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
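The core computation such power methods build on is the noncentral F distribution; a generic sketch with made-up degrees of freedom and noncentrality (the paper's contribution is deriving these quantities for high-dimensional repeated measures) is:

```python
# Generic power computation for an F test via the noncentral F distribution.
from scipy.stats import f, ncf

df1, df2 = 4, 20          # hypothesis and error degrees of freedom (made up)
noncentrality = 12.0      # driven by effect size, covariance, and sample size
alpha = 0.05

f_crit = f.ppf(1 - alpha, df1, df2)                # critical value under the null
power = 1 - ncf.cdf(f_crit, df1, df2, noncentrality)
print(f"power = {power:.3f}")
```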
Measuring, modeling, and minimizing capacitances in heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Anholt, R.; Bozada, C.; Dettmer, R.; Via, D.; Jenkins, T.; Barrette, J.; Ebel, J.; Havasy, C.; Sewell, J.; Quach, T.
1996-07-01
We demonstrate methods to separate junction and pad capacitances from on-wafer S-parameter measurements of HBTs with different areas and layouts. The measured junction capacitances are in good agreement with models, indicating that large-area devices are suitable for monitoring vendor epi-wafer doping. Measuring open HBTs does not give the correct pad capacitances. Finally, a capacitance comparison for a variety of layouts shows that bar-devices consistently give smaller base-collector values than multiple dot HBTs.
AGARWAL, SANDEEP K.; GOURH, PRAVITT; SHETE, SANJAY; PAZ, GENE; DIVECHA, DIPAL; REVEILLE, JOHN D.; ASSASSI, SHERVIN; TAN, FILEMON K.; MAYES, MAUREEN D.; ARNETT, FRANK C.
2010-01-01
Objective IL23R has been identified as a susceptibility gene for the development of multiple autoimmune diseases. We investigated the possible association of IL23R with systemic sclerosis (SSc), an autoimmune disease that leads to the development of cutaneous and visceral fibrosis. Methods We tested 9 single-nucleotide polymorphisms (SNPs) in IL23R for association with SSc in a cohort of 1402 SSc cases and 1038 controls. The IL23R SNPs tested were previously identified as SNPs showing associations with inflammatory bowel disease. Results Case-control comparisons revealed no statistically significant differences between patients and healthy controls with any of the IL23R polymorphisms. Analyses of subsets of SSc patients showed that rs11209026 (Arg381Gln variant) was associated with anti-topoisomerase I antibody (ATA)-positive SSc (p = 0.001) and rs11465804 was associated with diffuse and ATA-positive SSc (p = 0.0001 and p = 0.0026, respectively). These associations remained significant after accounting for multiple comparisons using the false discovery rate method. The wild-type genotype at both rs11209026 and rs11465804 showed significant protection against the presence of pulmonary hypertension (PHT) (p = 3×10⁻⁵ and p = 1×10⁻⁵, respectively). Conclusion Polymorphisms in IL23R are associated with susceptibility to ATA-positive SSc and are protective against the development of PHT in patients with SSc. PMID:19918037
A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies
NASA Technical Reports Server (NTRS)
McClung, R. C.; Chell, G. G.; Millwater, H. R.; Russell, D. A.
1999-01-01
Single-cycle and multiple-cycle proof testing (SCPT and MCPT) strategies for reusable aerospace propulsion system components are critically evaluated and compared from a rigorous elastic-plastic fracture mechanics perspective. Earlier MCPT studies are briefly reviewed. New J-integral estimation methods for semielliptical surface cracks and cracks at notches are derived and validated. Engineering methods are developed to characterize crack growth rates during elastic-plastic fatigue crack growth (FCG) and the tear-fatigue interaction near instability. Surface crack growth experiments are conducted with Inconel 718 to characterize tearing resistance, FCG under small-scale yielding and elastic-plastic conditions, and crack growth during simulated MCPT. Fractography and acoustic emission studies provide additional insight. The relative merits of SCPT and MCPT are directly compared using a probabilistic analysis linked with an elastic-plastic crack growth computer code. The conditional probability of failure in service is computed for a population of components that have survived a previous proof test, based on an assumed distribution of initial crack depths. Parameter studies investigate the influence of proof factor, tearing resistance, crack shape, initial crack depth distribution, and notches on the MCPT versus SCPT comparison. The parameter studies provide a rational basis to formulate conclusions about the relative advantages and disadvantages of SCPT and MCPT. Practical engineering guidelines are proposed to help select the optimum proof test protocol in a given application.
A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies
NASA Technical Reports Server (NTRS)
McClung, R. C.; Chell, G. G.; Millwater, H. R.; Russell, D. A.; Orient, G. E.
1996-01-01
Single-cycle and multiple-cycle proof testing (SCPT and MCPT) strategies for reusable aerospace propulsion system components are critically evaluated and compared from a rigorous elastic-plastic fracture mechanics perspective. Earlier MCPT studies are briefly reviewed. New J-integral estimation methods for semi-elliptical surface cracks and cracks at notches are derived and validated. Engineering methods are developed to characterize crack growth rates during elastic-plastic fatigue crack growth (FCG) and the tear-fatigue interaction near instability. Surface crack growth experiments are conducted with Inconel 718 to characterize tearing resistance, FCG under small-scale yielding and elastic-plastic conditions, and crack growth during simulated MCPT. Fractography and acoustic emission studies provide additional insight. The relative merits of SCPT and MCPT are directly compared using a probabilistic analysis linked with an elastic-plastic crack growth computer code. The conditional probability of failure in service is computed for a population of components that have survived a previous proof test, based on an assumed distribution of initial crack depths. Parameter studies investigate the influence of proof factor, tearing resistance, crack shape, initial crack depth distribution, and notches on the MCPT vs. SCPT comparison. The parameter studies provide a rational basis to formulate conclusions about the relative advantages and disadvantages of SCPT and MCPT. Practical engineering guidelines are proposed to help select the optimum proof test protocol in a given application.
A phantom design for assessment of detectability in PET imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollenweber, Scott D., E-mail: scott.wollenweber@g
2016-09-15
Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of ¹⁸F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of the quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages including filling simplicity, wall-less contrast features, the control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.
Resolution and Assignment of Differential Ion Mobility Spectra of Sarcosine and Isomers
NASA Astrophysics Data System (ADS)
Berthias, Francis; Maatoug, Belkis; Glish, Gary L.; Moussa, Fathi; Maitre, Philippe
2018-02-01
Due to their central role in biochemical processes, fast separation and identification of amino acids (AA) is of importance in many areas of the biomedical field including the diagnosis and monitoring of inborn errors of metabolism and biomarker discovery. Due to the large number of AA together with their isomers and isobars, common methods of AA analysis are tedious and time-consuming because they include a chromatographic separation step requiring pre- or post-column derivatization. Here, we propose a rapid method of separation and identification of sarcosine, a biomarker candidate of prostate cancer, from isomers using differential ion mobility spectrometry (DIMS) interfaced with a tandem mass spectrometer (MS/MS) instrument. Baseline separation of protonated sarcosine from α- and β-alanine isomers can be easily achieved. Identification of DIMS peak is performed using an isomer-specific activation mode where DIMS- and mass-selected ions are irradiated at selected wavenumbers allowing for the specific fragmentation via an infrared multiple photon dissociation (IRMPD) process. Two orthogonal methods to MS/MS are thus added, where the MS/MS(IRMPD) is nothing but an isomer-specific multiple reaction monitoring (MRM) method. The identification relies on the comparison of DIMS-MS/MS(IRMPD) chromatograms recorded at different wavenumbers. Based on the comparison of IR spectra of the three isomers, it is shown that specific depletion of the two protonated α- and β-alanine can be achieved, thus allowing for clear identification of the sarcosine peak. It is also demonstrated that DIMS-MS/MS(IRMPD) spectra in the carboxylic C=O stretching region allow for the resolution of overlapping DIMS peaks.
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.
Dao, Trung-Kien; Nguyen, Hung-Long; Pham, Thanh-Thuy; Castelli, Eric; Nguyen, Viet-Tung; Nguyen, Dinh-Van
2014-01-01
Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed combining multiple localization technologies into a single system that can provide higher-precision results and overcome the limitations of each individual technology. In this paper, a conceptual design of a general localization platform using a combination of multiple localization technologies is introduced. The combination is realized by dividing spaces into grid points. To demonstrate this platform, a system with GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that the accuracy and availability are improved in comparison with each technology individually.
Comparison of Penalty Functions for Sparse Canonical Correlation Analysis
Chalise, Prabhakar; Fridley, Brooke L.
2011-01-01
Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, as in large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated, the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso type of penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA, with and without the BIC filtering step, has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be a preferable penalty function for application of SCCA to genomic data. PMID:21984855
Taking Halo-Independent Dark Matter Methods Out of the Bin
Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew
2014-10-30
We develop a new halo-independent strategy for analyzing emerging DM hints, utilizing the method of extended maximum likelihood. This approach does not require the binning of events, making it uniquely suited to the analysis of emerging DM direct detection hints. It determines a preferred envelope, at a given confidence level, for the DM velocity integral which best fits the data using all available information and can be used even in the case of a single anomalous scattering event. All of the halo-independent information from a direct detection result may then be presented in a single plot, allowing simple comparisons between multiple experiments. This results in the halo-independent analogue of the usual mass and cross-section plots found in typical direct detection analyses, where limit curves may be compared with best-fit regions in halo-space. The method is straightforward to implement, using already-established techniques, and its utility is demonstrated through the first unbinned halo-independent comparison of the three anomalous events observed in the CDMS-Si detector with recent limits from the LUX experiment.
Dose finding with the sequential parallel comparison design.
Wang, Jessie J; Ivanova, Anastasia
2014-01-01
The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin
2017-01-21
RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation, so algorithms relying on sequence-based features usually have limitations in their prediction performance. Hence, integrating RNA structure features is critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. Alignment-free algorithms for RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm is proposed here, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), encodes the RNA sequence and its corresponding secondary structure features. A multi-scale similarity score of two given RNAs is then computed from the wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutation based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while corresponding secondary structures are optional. For primary sequences alone, the web server can compute the secondary structures using the free-energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. Comparison with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner and can be freely downloaded from the server. RNA-TVcurve, along with test examples and detailed documents, is available at http://ml.jlu.edu.cn/tvcurve/.
CAB-Align: A Flexible Protein Structure Alignment Method Based on the Residue-Residue Contact Area.
Terashi, Genki; Takeda-Shitaka, Mayuko
2015-01-01
Proteins are flexible, and this flexibility has an essential functional role. Flexibility can be observed in loop regions, rearrangements between secondary structure elements, and conformational changes between entire domains. However, most protein structure alignment methods treat protein structures as rigid bodies. Thus, these methods fail to identify the equivalences of residue pairs in regions with flexibility. In this study, we considered that the evolutionary relationship between proteins corresponds directly to the residue-residue physical contacts rather than the three-dimensional (3D) coordinates of proteins. Thus, we developed a new protein structure alignment method, contact area-based alignment (CAB-align), which uses the residue-residue contact area to identify regions of similarity. The main purpose of CAB-align is to identify homologous relationships at the residue level between related protein structures. The CAB-align procedure comprises two main steps: First, a rigid-body alignment method based on local and global 3D structure superposition is employed to generate a sufficient number of initial alignments. Then, iterative dynamic programming is executed to find the optimal alignment. We evaluated the performance and advantages of CAB-align based on four main points: (1) agreement with the gold standard alignment, (2) alignment quality based on an evolutionary relationship without 3D coordinate superposition, (3) consistency of the multiple alignments, and (4) classification agreement with the gold standard classification. Comparisons of CAB-align with other state-of-the-art protein structure alignment methods (TM-align, FATCAT, and DaliLite) using our benchmark dataset showed that CAB-align performed robustly in obtaining high-quality alignments and generating consistent multiple alignments with high coverage and accuracy rates, and it performed extremely well when discriminating between homologous and nonhomologous pairs of proteins in both single and multi-domain comparisons. The CAB-align software is freely available to academic users as stand-alone software at http://www.pharm.kitasato-u.ac.jp/bmd/bmd/Publications.html.
Ward, John; Sorrels, Ken; Coats, Jesse; Pourmoghaddam, Amir; Deleon, Carlos; Daigneault, Paige
2014-03-01
The purpose of this study was to pilot test our study procedures and estimate parameters for sample size calculations for a randomized controlled trial to determine if bilateral sacroiliac (SI) joint manipulation affects specific gait parameters in asymptomatic individuals with a leg length inequality (LLI). Twenty-one asymptomatic chiropractic students engaged in a baseline 90-second walking kinematic analysis using infrared Vicon® cameras. Following this, participants underwent a functional LLI test. Upon examination participants were classified as: left short leg, right short leg, or no short leg. Half of the participants in each short leg group were then randomized to receive bilateral corrective SI joint chiropractic manipulative therapy (CMT). All participants then underwent another 90-second gait analysis. Pre- versus post-intervention gait data were then analyzed within treatment groups by an individual who was blinded to participant group status. For the primary analysis, all p-values were corrected for multiple comparisons using the Bonferroni method. Within groups, no differences in measured gait parameters were statistically significant after correcting for multiple comparisons. The protocol of this study was acceptable to all subjects who were invited to participate. No participants refused randomization. Based on the data collected, we estimated that a larger main study would require 34 participants in each comparison group to detect a moderate effect size.
Positron Emission Tomography for Pre-Clinical Sub-Volume Dose Escalation
NASA Astrophysics Data System (ADS)
Bass, Christopher Paul
Purpose: This dissertation focuses on establishment of pre-clinical methods facilitating the use of PET imaging for selective sub-volume dose escalation. Specifically the problems addressed are 1.) The difficulties associated with comparing multiple PET images, 2.) The need for further validation of novel PET tracers before their implementation in dose escalation schema and 3.) The lack of concrete pre-clinical data supporting the use of PET images for guidance of selective sub-volume dose escalations. Methods and materials: In order to compare multiple PET images the confounding effects of mispositioning and anatomical change between imaging sessions needed to be alleviated. To mitigate the effects of these sources of error, deformable image registration was employed. A deformable registration algorithm was selected and the registration error was evaluated via the introduction of external fiducials to the tumor. Once a method for image registration was established, a procedure for validating the use of novel PET tracers with FDG was developed. Nude mice were used to perform in-vivo comparisons of the spatial distributions of two PET tracers, FDG and FLT. The spatial distributions were also compared across two separate tumor lines to determine the effects of tumor morphology on spatial distribution. Finally, the research establishes a method for acquiring pre-clinical data supporting the use of PET for image-guidance in selective dose escalation. Nude mice were imaged using only FDG PET/CT and the resulting images were used to plan PET-guided dose escalations to a 5 mm sub-volume within the tumor that contained the highest PET tracer uptake. These plans were then delivered using the Small Animal Radiation Research Platform (SARRP) and the efficacy of the PET-guided plans was observed. Results and Conclusions: The analysis of deformable registration algorithms revealed that the BRAINSFit B-spline deformable registration algorithm available in SLICER3D was capable of registering small animal PET/CT data sets in less than 5 minutes with an average registration error of 0.3 mm. The methods used in chapter 3 allowed for the comparison of the spatial distributions of multiple PET tracers imaged at different times. A comparison of FDG and FLT showed that both are positively correlated but that tumor morphology does significantly affect the correlation between the two tracers. An overlap analysis of the high intensity PET regions of FDG and FLT showed that FLT offers additional spatial information to that seen with FDG. In chapter 4 the SARRP allowed for the delivery of planned PET-guided selective dose escalations to a pre-clinical tumor model. This will facilitate future research validating the use of PET for clinical selective dose escalation.
NASA Astrophysics Data System (ADS)
So, Sung-Sau; Karplus, Martin
2001-07-01
Glycogen phosphorylase (GP) is an important enzyme that regulates blood glucose level and a key therapeutic target for the treatment of type II diabetes. In this study, a number of potential GP inhibitors are designed with a variety of computational approaches. They include the applications of MCSS, LUDI and CoMFA to identify additional fragments that can be attached to existing lead molecules; the use of 2D and 3D similarity-based QSAR models (HQSAR and SMGNN) and of the LUDI program to identify novel molecules that may bind to the glucose binding site. The designed ligands are evaluated by a multiple screening method, which is a combination of commercial and in-house ligand-receptor binding affinity prediction programs used in a previous study (So and Karplus, J. Comp.-Aid. Mol. Des., 13 (1999), 243-258). Each method is used at an appropriate point in the screening, as determined by both the accuracy of the calculations and the computational cost. A comparison of the strengths and weaknesses of the ligand design approaches is made.
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
Kellogg, Joshua J; Graf, Tyler N; Paine, Mary F; McCune, Jeannine S; Kvalheim, Olav M; Oberlies, Nicholas H; Cech, Nadja B
2017-05-26
A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomics-related methods (untargeted ultraperformance liquid chromatography-mass spectrometry (UPLC-MS), targeted UPLC-MS, and untargeted, quantitative ¹H NMR) were employed to characterize 34 commercially available green tea samples. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products.
Huang, Norden E.; Hu, Kun; Yang, Albert C. C.; Chang, Hsing-Chih; Jia, Deng; Liang, Wei-Kuang; Yeh, Jia Rong; Kao, Chu-Lan; Juan, Chi-Hung; Peng, Chung Kang; Meijer, Johanna H.; Wang, Yung-Hung; Long, Steven R.; Wu, Zhauhua
2016-01-01
The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert–Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time–frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities. PMID:26953180
Wang, Z Q; Zhang, F G; Guo, J; Zhang, H K; Qin, J J; Zhao, Y; Ding, Z D; Zhang, Z X; Zhang, J B; Yuan, J H; Li, H L; Qu, J R
2017-03-21
Objective: To explore the value of 3.0 T MRI using multiple sequences (star VIBE + BLADE) in evaluating preoperative T staging of potentially resectable esophageal cancer (EC). Methods: Between April 2015 and March 2016, a total of 66 consecutive patients with endoscopically proven resectable EC underwent 3.0 T MRI at the Affiliated Cancer Hospital of Zhengzhou University. Two independent readers assigned a T stage on MRI according to the 7th edition of the UICC-AJCC TNM Classification, and the preoperative T staging was compared against postoperative pathological confirmation. Results: The MRI T staging of both readers was highly consistent with the histopathological findings, and the sensitivity, specificity, and accuracy of preoperative MR T staging were high. Conclusion: 3.0 T MRI using multiple sequences achieves high accuracy in T staging of potentially resectable EC. The staging accuracy for T1, T2, and T3 is better than for T4a. 3.0 T MRI using multiple sequences could serve as a noninvasive imaging method for preoperative T staging of EC.
Li, Siqi; Jiang, Huiyan; Pang, Wenbo
2017-05-01
Accurate cell grading of cancerous tissue pathological image is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in preprocessing stage, each grayscale image patch with the fixed size is obtained using center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train MFC-CNN-ELM architecture. The experiment comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture.
Olwagen, Courtney P; Adrian, Peter V; Madhi, Shabir A
2017-07-05
S. pneumoniae is a common colonizer of the human nasopharynx in high income and low-middle income countries. Due to limitations of standard culture methods, the prevalence of concurrent colonization with multiple serotypes is unclear. We evaluated the use of multiplex quantitative PCR (qPCR) to detect multiple pneumococcal serotypes/group colonization in archived nasopharyngeal swabs of pneumococcal conjugate vaccine-naive children who had previously been investigated by traditional culture methods. Overall the detection of pneumococcal colonization was higher by qPCR (82%) compared to standard culture (71%; p < 0.001), with a high concordance (kappa = 0.73) of serotypes/groups identified by culture also being identified by qPCR. Also, qPCR was more sensitive in detecting multiple serotype/groups among colonized cases (28.7%) compared to culture (4.5%; p < 0.001). Of the additional serotypes detected only by qPCR, the majority were of lower density (<10⁴ CFU/ml) than the dominant colonizing serotype, with serotype/group 6A/B, 19B/F and 23F being the highest density colonizers, followed by serotype 5 and serogroup 9A/L/N/V being the most common second and third colonizers respectively. The ability of qPCR to detect multiple pneumococcal serotypes at a low carriage density might provide better insight into underlying mechanism for changes in serotype colonization in PCV vaccinated children.
Sun, Bing; Zheng, Yun-Ling
2018-01-01
Currently there is no sensitive, precise, and reproducible method to quantitate alternative splicing of mRNA transcripts. Droplet digital™ PCR (ddPCR™) analysis allows for accurate digital counting for quantification of gene expression. Human telomerase reverse transcriptase (hTERT) is one of the essential components required for telomerase activity and for the maintenance of telomeres. Several alternatively spliced forms of hTERT mRNA in human primary and tumor cells have been reported in the literature. Using one pair of primers and two probes for hTERT, four alternatively spliced forms of hTERT (α-/β+, α+/β- single deletions, α-/β- double deletion, and nondeletion α+/β+) were accurately quantified through a novel analysis method via data collected from a single ddPCR reaction. In this chapter, we describe this ddPCR method that enables direct quantitative comparison of four alternatively spliced forms of the hTERT messenger RNA without the need for internal standards or multiple pairs of primers specific for each variant, eliminating the technical variation due to differential PCR amplification efficiency for different amplicons and the challenges of quantification using standard curves. This simple and straightforward method should have general utility for quantifying alternatively spliced gene transcripts.
COACH: profile-profile alignment of protein families using hidden Markov models.
Edgar, Robert C; Sjölander, Kimmen
2004-05-22
Alignments of two multiple-sequence alignments, or statistical models of such alignments (profiles), have important applications in computational biology. The increased amount of information in a profile versus a single sequence can lead to more accurate alignments and more sensitive homolog detection in database searches. Several profile-profile alignment methods have been proposed and have been shown to improve sensitivity and alignment quality compared with sequence-sequence methods (such as BLAST) and profile-sequence methods (e.g. PSI-BLAST). Here we present a new approach to profile-profile alignment we call Comparison of Alignments by Constructing Hidden Markov Models (HMMs) (COACH). COACH aligns two multiple sequence alignments by constructing a profile HMM from one alignment and aligning the other to that HMM. We compare the alignment accuracy of COACH with two recently published methods: Yona and Levitt's prof_sim and Sadreyev and Grishin's COMPASS. On two sets of reference alignments selected from the FSSP database, we find that COACH is able, on average, to produce alignments giving the best coverage or the fewest errors, depending on the chosen parameter settings. COACH is freely available from www.drive5.com/lobster
Multivariate analysis of longitudinal rates of change.
Bryan, Matthew; Heagerty, Patrick J
2016-12-10
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants.
Case-control analysis in highway safety: Accounting for sites with multiple crashes.
Gross, Frank
2013-12-01
There is an increased interest in the use of epidemiological methods in highway safety analysis. The case-control and cohort methods are commonly used in the epidemiological field to identify risk factors and quantify the risk or odds of disease given certain characteristics and factors related to an individual. This same concept can be applied to highway safety where the entity of interest is a roadway segment or intersection (rather than a person) and the risk factors of interest are the operational and geometric characteristics of a given roadway. One criticism of the use of these methods in highway safety is that they have not accounted for the difference between sites with single and multiple crashes. In the medical field, a disease either occurs or it does not; multiple occurrences are generally not an issue. In the highway safety field, it is necessary to evaluate the safety of a given site while accounting for multiple crashes. Otherwise, the analysis may underestimate the safety effects of a given factor. This paper explores the use of the case-control method in highway safety and two variations to account for sites with multiple crashes. Specifically, the paper presents two alternative methods for defining cases in a case-control study and compares the results in a case study. The first alternative defines a separate case for each crash in a given study period, thereby increasing the weight of the associated roadway characteristics in the analysis. The second alternative defines entire crash categories as cases (sites with one crash, sites with two crashes, etc.) and analyzes each group separately in comparison to sites with no crashes. The results are also compared to a "typical" case-control application, where the cases are simply defined as any entity that experiences at least one crash and controls are those entities without a crash in a given period. In a "typical" case-control design, the attributes associated with single-crash segments are weighted the same as the attributes of segments with multiple crashes. The results support the hypothesis that the "typical" case-control design may underestimate the safety effects of a given factor compared to methods that account for sites with multiple crashes. Compared to the first alternative case definition (where multiple crash segments represent multiple cases) the results from the "typical" case-control design are less pronounced (i.e., closer to unity). The second alternative (where case definitions are constructed for various crash categories and analyzed separately) provides further evidence that sites with single and multiple crashes should not be grouped together in a case-control analysis. This paper indicates a clear need to differentiate sites with single and multiple crashes in a case-control analysis. While the results suggest that sites with multiple crashes can be accounted for using a case-control design, further research is needed to determine the optimal method for addressing this issue. This paper provides a starting point for that research.
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
KEY COMPARISON: Final report on CCPR K1-a: Spectral irradiance from 250 nm to 2500 nm
NASA Astrophysics Data System (ADS)
Woolliams, Emma R.; Fox, Nigel P.; Cox, Maurice G.; Harris, Peter M.; Harrison, Neil J.
2006-01-01
The CCPR K1-a key comparison of spectral irradiance (from 250 nm to 2500 nm) was carried out to meet the requirements of the Mutual Recognition Arrangement by 13 participating national metrology institutes (NMIs). Because of the fragile nature of the tungsten halogen lamps used as comparison artefacts, the comparison was arranged as a star comparison with three lamps per participant. NPL (United Kingdom) piloted the comparison and, by measuring all lamps, provided a link between participants' measurements. The other participants were BNM-INM (France), CENAM (Mexico), CSIRO (Australia), HUT (Finland), IFA-CSIC (Spain), MSL-IRL (New Zealand), NIM (China), NIST (United States of America), NMIJ (Japan), NRC (Canada), PTB (Germany) and VNIIOFI (Russian Federation). Before the analysis was completed and the results known, the pilot discussed with each participant which lamp measurements should be included as representative of their comparison. As a consequence of this check, at least one measurement was excluded from one third of the lamps because of changes due to transportations. The comparison thus highlighted the difficulty regarding the availability of suitable transfer standards for the dissemination of spectral irradiance. The use of multiple lamps and multiple measurements ensured sufficient redundancy that all participants were adequately represented. In addition, during this pre-draft A phase all participants had the opportunity to review the uncertainty budgets and methods of all other participants. This new process helped to ensure that all submitted results and their associated uncertainties were evaluated in a consistent manner. The comparison was analysed using a model-based method which regarded each lamp as having a stable spectral irradiance and the measurements made by an NMI as systematically influenced by a factor that applies to all that NMI's measurements. The aim of the analysis was to estimate the systematic factor for each NMI. Across the spectral region (250 nm to 2500 nm) there were 44 wavelengths at which a comparison was made. These were treated entirely independently and thus the report describes 44 comparisons. For wavelengths from 250 nm to 800 nm (apart from 300 nm) all participants had unilateral degrees of equivalence (DoEs) with values consistent with their uncertainties for a coverage level k = 2. At all other wavelengths (apart from 1400 nm) all participants achieved consistency at the k = 4 level for the unilateral DoEs and the vast majority within k = 3. The results are a significant improvement over those of the previous comparison in 1990, especially considering that the declared uncertainties of most participants have been substantially improved over the intervening decade. These results are evidence of the value of the effort devoted to the development of improved spectral scales (and of the evaluation of their uncertainty) by many NMIs in recent years.
Multiple comparison analysis testing in ANOVA.
McHugh, Mary L
2011-01-01
The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results are called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type I errors due to alpha inflation.
Herath, Damayanthi; Tang, Sen-Lin; Tandon, Kshitij; Ackland, David; Halgamuge, Saman Kumara
2017-12-28
In metagenomics, the separation of nucleotide sequences belonging to an individual or closely matched populations is termed binning. Binning helps the evaluation of underlying microbial population structure as well as the recovery of individual genomes from a sample of uncultivable microbial organisms. Both supervised and unsupervised learning methods have been employed in binning; however, characterizing a metagenomic sample containing multiple strains remains a significant challenge. In this study, we designed and implemented a new workflow, Coverage and composition based binning of Metagenomes (CoMet), for binning contigs in a single metagenomic sample. CoMet utilizes coverage values and the compositional features of metagenomic contigs. The binning strategy in CoMet includes the initial grouping of contigs in guanine-cytosine (GC) content-coverage space and refinement of bins in tetranucleotide frequencies space in a purely unsupervised manner. With CoMet, the clustering algorithm DBSCAN is employed for binning contigs. The performances of CoMet were compared against four existing approaches for binning a single metagenomic sample, including MaxBin, Metawatt, MyCC (default) and MyCC (coverage) using multiple datasets including a sample comprised of multiple strains. Binning methods based on both compositional features and coverages of contigs had higher performances than the method which is based only on compositional features of contigs. CoMet yielded higher or comparable precision in comparison to the existing binning methods on benchmark datasets of varying complexities. MyCC (coverage) had the highest ranking score in F1-score. However, the performances of CoMet were higher than MyCC (coverage) on the dataset containing multiple strains. Furthermore, CoMet recovered contigs of more species and was 18 - 39% higher in precision than the compared existing methods in discriminating species from the sample of multiple strains. CoMet resulted in higher precision than MyCC (default) and MyCC (coverage) on a real metagenome. The approach proposed with CoMet for binning contigs, improves the precision of binning while characterizing more species in a single metagenomic sample and in a sample containing multiple strains. The F1-scores obtained from different binning strategies vary with different datasets; however, CoMet yields the highest F1-score with a sample comprised of multiple strains.
de Lusignan, Simon; Kumarapeli, Pushpa; Chan, Tom; Pflug, Bernhard; van Vlymen, Jeremy; Jones, Beryl; Freeman, George K
2008-09-08
There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. Our objective was to develop a tool kit to measure the impact of different EPR system features on the consultation. We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use this data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS PCS (Egton Medical Information Systems Limited, Leeds, UK) (P = .007), iSoft Synergy (iSOFT, Banbury, UK) (P = .014), and INPS Vision (INPS, London, UK) (P = .006) facilitating faster coding. In contrast, prescribing was fastest with EMIS LV (mean 23.7 s, 95% CI 20.5-26.8), but nonparametric comparison showed no statistically significant difference. UML sequence diagrams showed that the simplest BP recording interface was not the easiest to use, as users spent longer navigating or looking up previous blood pressures separately. Complex interfaces with free-text boxes left clinicians unsure of what to add. The ALFA method allows the precise observation of the clinical consultation. It enables rigorous comparison of core elements of EPR systems. Pilot data suggest its capacity to demonstrate differences between systems. Its outputs could provide the evidence base for making more objective choices between systems.
Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume
NASA Astrophysics Data System (ADS)
Xiao, Mengting; Li, Cheng
2018-01-01
Based on the realities of air cargo development, a multi-dimensional fuzzy regression method is used to determine the influencing factors; the three most important are GDP, total fixed-asset investment, and scheduled flight route mileage. Combining a systems viewpoint with analogy methods, fuzzy numbers and multiple regression are used to forecast civil aviation freight volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that this method can effectively improve forecasting accuracy and reduce forecasting risk, demonstrating that the model is a feasible approach to predicting civil aviation freight volume with high practical significance and operability.
Verification of Emergent Behaviors in Swarm-based Systems
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James
2004-01-01
The emergent properties of swarms make swarm-based missions powerful, but at the same time more difficult to design and to assure that the proper behaviors will emerge. We are currently investigating formal methods and techniques for verification and validation of swarm-based missions. The Autonomous Nano-Technology Swarm (ANTS) mission is being used as an example and case study for swarm-based missions to experiment and test current formal methods with intelligent swarms. Using the ANTS mission, we have evaluated multiple formal methods to determine their effectiveness in modeling and assuring swarm behavior. This paper introduces how intelligent swarm technology is being proposed for NASA missions, and gives the results of a comparison of several formal methods and approaches for specifying intelligent swarm-based systems and their effectiveness for predicting emergent behavior.
Multilayer Volume Holographic Optical Memory
NASA Technical Reports Server (NTRS)
Markov, Vladimir; Millerd, James; Trolinger, James; Norrie, Mark; Downie, John; Timucin, Dogan; Lau, Sonie (Technical Monitor)
1998-01-01
We demonstrate a scheme for volume holographic storage based on the features of shift selectivity of a speckle reference wave hologram. The proposed recording method allows more efficient use of the recording medium and increases the storage density in comparison with spherical or plane-wave reference beams. Experimental results of multiple hologram storage and replay in a photorefractive crystal of iron-doped lithium niobate are presented. The mechanism of lateral and longitudinal shift selectivity are described theoretically and shown to agree with experimental measurements.
Doshi, Neena Piyush
2017-01-01
Team-based learning (TBL) combines small- and large-group learning by incorporating multiple small groups in a large-group setting. It is a teacher-directed method that encourages student-student interaction. This study compares student learning and teaching satisfaction between conventional lecture and TBL in the subject of pathology, aiming to assess the effectiveness of the TBL method over the conventional lecture. The study was conducted in the Department of Pathology, GMERS Medical College and General Hospital, Gotri, Vadodara, Gujarat. The study population comprised 126 second-year MBBS students in their third semester of the academic year 2015-2016. "Hemodynamic disorders" was taught by the conventional method and "transfusion medicine" by the TBL method, and the effectiveness of both methods was assessed. A posttest of multiple-choice questions was administered at the end of "hemodynamic disorders." Assessment of TBL was based on the individual score, the team score, and each member's contribution to the success of the team. The individual score and overall score were compared with the posttest score on "hemodynamic disorders." Feedback was taken from the students regarding their experience with TBL. Tukey's multiple comparisons test and an ANOVA summary were used to test the significance of score differences between the didactic and TBL methods. Student feedback was collected using a "Student Satisfaction Scale" based on the Likert scoring method. The mean student scores for the didactic method, the Individual Readiness Assurance Test (score "A"), and overall (score "D") were 49.8% (standard deviation [SD] 14.8), 65.6% (SD 10.9), and 65.6% (SD 13.8), respectively. The study showed positive educational outcomes in terms of knowledge acquisition, participation and engagement, and team performance with TBL.
NASA Astrophysics Data System (ADS)
Ono, T.; Takahashi, T.
2017-12-01
Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the inundation area because the assumed locations of dike breach in river flood analysis are limited to sections exceeding the high-water level. The objective of this study is to consider the uncertainty of the estimated inundation area arising from the location of dike breach in river flood analysis. This study proposes multiple flood scenarios that automatically set multiple dike-breach locations in the analysis. The premise of this approach is that the location of dike breach cannot be predicted correctly in advance. The proposed method uses an interval of dike breach, that is, the distance between adjacent assumed breaches; multiple breach locations are set at every such interval. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, solved with a leap-frog scheme on a staggered grid. The analysis was verified against the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. For the Akutagawa river, comparison of the maximum inundation depths computed for adjacent breach locations showed that the proposed method prevents underestimation of the estimated inundation area. Analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point also identified the optimum breach interval, which captures the maximum inundation area with the minimum number of assumed breach locations. In brief, this study found the optimum interval of dike breach for the Akutagawa river, enabling the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to flood disaster mitigation by improving the accuracy of the estimated inundation area.
Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen
2016-01-01
[Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled and divided into two groups, a clinical Pilates group and a control group. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed Up and Go test), fatigue (Modified Fatigue Impact Scale), depression (Beck Depression Inventory), and quality of life (Multiple Sclerosis International Quality of Life Questionnaire) were measured before and after treatment in all participants. [Results] In the clinical Pilates group there were statistically significant pre- to post-treatment differences in balance, timed performance, fatigue, and Multiple Sclerosis Functional Composite scores. In the control group there were significant differences in the timed performance tests, the Timed Up and Go test, and the Multiple Sclerosis Functional Composite. Difference analyses showed significant between-group differences in Multiple Sclerosis Functional Composite and Multiple Sclerosis International Quality of Life Questionnaire scores in favor of the clinical Pilates group, and the between-group comparison of measurements showed clinically significant differences in its favor. Clinical Pilates improved cognitive functions and quality of life compared with traditional exercise. [Conclusion] In multiple sclerosis treatment, clinical Pilates should be used as a holistic approach by physical therapists. PMID:27134355
Storck, Michael; Krumm, Rainer; Dugas, Martin
2016-01-01
Medical documentation is applied in various settings, including patient care and clinical research. Because procedures of medical documentation are heterogeneous and continually evolving, secondary use of medical data is complicated. Development of medical forms, merging of data from different sources, and meta-analyses of different data sets are currently predominantly manual processes and therefore difficult and cumbersome. Available applications to automate these processes are limited; in particular, tools to compare multiple documentation forms are missing. The objective of this work is to design, implement and evaluate a new system, ODMSummary, for comparison of multiple forms with a high number of semantically annotated data elements and a high level of usability. System requirements are the capability to summarize and compare a set of forms, to estimate the documentation effort, to track changes in different versions of forms, and to find comparable items in different forms. Forms are provided in Operational Data Model format with semantic annotations from the Unified Medical Language System. 12 medical experts were invited to participate in a 3-phase evaluation of the tool regarding usability. ODMSummary (available at https://odmtoolbox.uni-muenster.de/summary/summary.html) provides a structured overview of multiple forms and their documentation fields. This comparison enables medical experts to assess multiple forms or whole datasets for secondary use. System usability was optimized based on expert feedback. The evaluation demonstrates that feedback from domain experts is needed to identify usability issues. In conclusion, this work shows that automatic comparison of multiple forms is feasible and the results are usable for medical experts.
Kinematic and Hydrometeor Data Products from Scanning Radars during MC3E
Matthews, Alyssa; Dolan, Brenda; Rutledge, Steven
2016-02-29
Recently the Radar Meteorology Group at Colorado State University completed major case studies of some of the top cases from MC3E, including 25 April, 20 May and 23 May 2011. A discussion of the analysis methods as well as the radar quality control methods is included. For each case, a brief overview is first provided. Then, multiple-Doppler analyses (using available X-SAPR and C-SAPR data) are presented, including statistics on vertical air motions subdivided by convective and stratiform precipitation. Mean profiles and CFADs of vertical motion are included to facilitate comparison with ASR model simulations. Retrieved vertical motion has also been verified against vertically pointing profiler data. Finally, for each case, hydrometeor types derived from polarimetric radar observations are included; these can be used for comparisons to model-generated hydrometeor fields. Instructions for accessing all the data fields are also included. The web page can be found at: http://radarmet.atmos.colostate.edu/mc3e/research/
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. Several approaches are available to eliminate the disturbing fluorescence background. Among instrumental methods, shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
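A minimal sketch of EMSC-style background removal, assuming the measured spectrum is modeled as a scaled reference spectrum plus a low-order polynomial fluorescence baseline and fitted by least squares; emsc_correct and all data below are illustrative, not the authors' implementation:

    import numpy as np

    def emsc_correct(spectrum, reference, order=4):
        """Fit spectrum ~ a*reference + polynomial baseline; return corrected spectrum."""
        t = np.linspace(-1, 1, spectrum.size)
        # Design matrix: reference plus polynomial baseline terms.
        basis = np.column_stack([reference] + [t**k for k in range(order + 1)])
        coefs, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
        baseline = basis[:, 1:] @ coefs[1:]
        return (spectrum - baseline) / coefs[0]   # scale-corrected, background-free

    # Toy example: a Raman-like band on a large smooth fluorescence background.
    x = np.linspace(-1, 1, 500)
    reference = np.exp(-((x - 0.1) ** 2) / 0.001)     # idealized Raman band
    fluorescence = 50 * (1 + x + 0.5 * x**2)          # smooth background
    measured = 0.8 * reference + fluorescence
    corrected = emsc_correct(measured, reference)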
A method for estimating the incident PAR on inclined surfaces
NASA Astrophysics Data System (ADS)
Xie, Xiaoping; Gao, Wei; Gao, Zhiqiang
2008-08-01
A new simple model has been developed that incorporates Digital Elevation Model (DEM) and Moderate Resolution Imaging Spectroradiometer (MODIS) products to produce incident photosynthetically active radiation (PAR) for tilted surfaces. The method is based on a simplification of the general radiative transfer equation and considers five major processes of attenuation of solar radiation: 1) Rayleigh scattering, 2) absorption by ozone and water vapor, 3) aerosol scattering, 4) multiple reflection between surface and atmosphere, and 5) three terrain factors: slope and aspect, isotropic sky view factor, and additional radiation from neighbor reflectance. A comparison of the model results with 2006 field-measured PAR at the Yucheng and Changbai Mountain sites of the Chinese Ecosystem Research Network (CERN) shows correlation coefficients of 0.929 and 0.904 and average percent errors of 10% and 15%, respectively.
Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei
2013-04-01
Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanisms. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess potential applications of this approach in future studies.
Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando
2018-05-01
The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks; however, the estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits are compromised when two groups are compared, because JGMSS is blind to the other group when estimating networks from a given group. We propose a novel method for robustly estimating networks from two groups using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. By simultaneously estimating similar within-group networks and the between-group difference, GMGLASS addresses both the inter-subject variability of estimated individual networks inherent in methods such as the Fisher Z test and the issue that JGMSS ignores between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of several key network metrics, and to compare it with JGMSS and the Fisher Z test, we applied all three methods to both simulated and in vivo data. As a method aimed at group-comparison studies, each case involves two groups, a normal control group and a patient group; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
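GMGLASS itself is not available in standard libraries; as a baseline sketch of the per-group sparse-network step only, scikit-learn's graphical lasso can estimate one sparse precision (inverse covariance) matrix per group, whose patterns can then be differenced:

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(1)
    # Hypothetical ROI time series: (observations, regions) per group.
    X_control = rng.normal(size=(400, 10))
    X_patient = rng.normal(size=(400, 10))

    # One sparse precision matrix per group (edges = nonzero off-diagonals).
    net_control = GraphicalLassoCV().fit(X_control).precision_
    net_patient = GraphicalLassoCV().fit(X_patient).precision_

    # Crude between-group difference of the estimated connectivity.
    diff = np.abs(net_control - net_patient)
    np.fill_diagonal(diff, 0.0)
    print("largest group-difference edge:", np.unravel_index(diff.argmax(), diff.shape))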
A radiographic survey of monumental masonry workers in Aberdeen
Davies, T. A. Lloyd; Doig, A. T.; Fox, A. J.; Greenberg, M.
1973-01-01
Lloyd Davies, T. A., Doig, A. T., Fox, A. J., and Greenberg, M. (1973). British Journal of Industrial Medicine, 30, 227-231. A radiographic survey of monumental masonry workers in Aberdeen. A survey of radiographic appearances of the lungs of monumental masonry workers in Aberdeen was carried out to determine the present prevalence of abnormalities and to serve as a standard for future comparisons in view of changes in methods of working. No major change could be detected in the status of these granite workers in Aberdeen over the past 20 years, but the different methods of survey used by Mair in 1951 and by the present study did not allow strict comparison. Chest radiographs were reported on by three readers independently using the National Coal Board elaboration of the ILO classification, and a score was given to each film using Oldham's method. Multiple regression analysis showed that X-ray changes were related to years in granite, but progression was slow in comparison with foundry workers. The prevalence of radiographic appearances of category 1 or greater was 3·0% overall and 4·6% for workers in dusty jobs. Evidence of pneumoconiosis was not observed in workers exposed for less than 20 years. With the environmental control attained, the threshold limit values for respirable dust were not often much exceeded. PMID:4353240
Biological intuition in alignment-free methods: response to Posada.
Ragan, Mark A; Chan, Cheong Xin
2013-08-01
A recent editorial in Journal of Molecular Evolution highlights opportunities and challenges facing molecular evolution in the era of next-generation sequencing. Abundant sequence data should allow more-complex models to be fit at higher confidence, making phylogenetic inference more reliable and improving our understanding of evolution at the molecular level. However, concern that approaches based on multiple sequence alignment may be computationally infeasible for large datasets is driving the development of so-called alignment-free methods for sequence comparison and phylogenetic inference. The recent editorial characterized these approaches as model-free, not based on the concept of homology, and lacking in biological intuition. We argue here that alignment-free methods have not abandoned models or homology, and can be biologically intuitive.
Topological Vulnerability Evaluation Model Based on Fractal Dimension of Complex Networks.
Gou, Li; Wei, Bo; Sadiq, Rehan; Deng, Yong
2016-01-01
With an increasing emphasis on network security, much more attention has been paid to the vulnerability of complex networks. In this paper, the fractal dimension, which can reflect the space-filling capacity of networks, is redefined as the origin moment of the edge betweenness to obtain a more reasonable evaluation of vulnerability. The proposed model, combining multiple evaluation indexes, not only overcomes the failure of average edge betweenness to evaluate the vulnerability of some special networks, but also characterizes the topological structure and highlights the space-filling capacity of networks. Applications to six US airline networks illustrate the practicality and effectiveness of the proposed method, and comparisons with three other commonly used methods further validate its superiority.
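A minimal sketch of the central quantity, assuming the redefinition amounts to the q-th origin moment of the edge-betweenness distribution (the paper's exact estimator may differ):

    import networkx as nx
    import numpy as np

    def edge_betweenness_origin_moment(G, q=2):
        """q-th origin moment of the edge-betweenness distribution."""
        bc = np.array(list(nx.edge_betweenness_centrality(G).values()))
        return float(np.mean(bc ** q))

    # Toy comparison: a hub-and-spoke network vs. a ring of the same size.
    for name, G in [("star", nx.star_graph(9)), ("ring", nx.cycle_graph(10))]:
        print(name, edge_betweenness_origin_moment(G, q=2))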
Autonomous Scanning Probe Microscopy in Situ Tip Conditioning through Machine Learning.
Rashidi, Mohammad; Wolkow, Robert A
2018-05-23
Atomic-scale characterization and manipulation with scanning probe microscopy rely upon the use of an atomically sharp probe. Here we present automated methods based on machine learning to automatically detect and recondition the quality of the probe of a scanning tunneling microscope. As a model system, we employ these techniques on the technologically relevant hydrogen-terminated silicon surface, training the network to recognize abnormalities in the appearance of surface dangling bonds. Of the machine learning methods tested, a convolutional neural network yielded the greatest accuracy, achieving a positive identification of degraded tips in 97% of the test cases. By using multiple points of comparison and majority voting, the accuracy of the method is improved beyond 99%.
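The multiple-points-of-comparison voting step can be sketched as follows; classify_patch is a hypothetical stand-in for the trained convolutional network:

    import numpy as np

    def classify_patch(patch):
        # Hypothetical stand-in for the trained CNN: returns 1 if the
        # dangling-bond appearance suggests a degraded tip, else 0.
        return int(patch.mean() > 0.5)

    def tip_degraded(patches):
        """Majority vote over multiple points of comparison."""
        votes = [classify_patch(p) for p in patches]
        return sum(votes) > len(votes) / 2

    patches = [np.random.rand(32, 32) for _ in range(9)]
    print("recondition tip:", tip_degraded(patches))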
Unsupervised multiple kernel learning for heterogeneous data integration.
Mariette, Jérôme; Villa-Vialaneix, Nathalie
2018-03-15
Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed important insights to be gained in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, requiring generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets is included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas were analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
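mixKernel is an R package; purely to illustrate the consensus meta-kernel idea, a Python sketch with two hypothetical omics views of the same samples might look like this:

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(2)
    n = 50
    # Two hypothetical omics views measured on the same n samples.
    view_a = rng.normal(size=(n, 100))   # e.g. gene expression
    view_b = rng.normal(size=(n, 30))    # e.g. metagenomic abundances

    # One kernel per dataset, then a consensus meta-kernel (here: the mean).
    K_meta = (rbf_kernel(view_a) + rbf_kernel(view_b)) / 2

    # Single exploratory analysis on the integrated kernel.
    embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K_meta)
    print(embedding.shape)   # (50, 2)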
Sun, Bing; Tao, Lian; Zheng, Yun-Ling
2014-06-01
Human telomerase reverse transcriptase (hTERT) is an essential component required for telomerase activity and telomere maintenance. Several alternatively spliced forms of hTERT mRNA have been reported in human primary and tumor cells. Currently, however, there is no sensitive and accurate method for the simultaneous quantification of multiple alternatively spliced RNA transcripts, as in the case of hTERT. Here we show that droplet digital PCR (ddPCR) provides sensitive, simultaneous digital quantification in a single reaction of two alternatively spliced single-deletion hTERT transcripts (α-/β+ and α+/β-), as well as the opportunity to manually quantify non-deletion (α+/β+) and double-deletion (α-/β-) transcripts. Our ddPCR method enables direct comparison among the four alternatively spliced mRNAs without the need for internal standards or multiple primer pairs specific for each variant, as real-time PCR (qPCR) requires, thus eliminating potential variation due to differences in PCR amplification efficiency.
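The quantification in ddPCR rests on Poisson statistics: if a fraction p of droplets is positive, the mean number of target copies per droplet is lambda = -ln(1 - p). A sketch with illustrative droplet counts (the ~0.85 nL droplet volume is a typical value, not taken from this paper):

    import math

    def ddpcr_concentration(n_positive, n_total, droplet_volume_ul=0.00085):
        """Copies per microliter from droplet counts via Poisson correction.

        droplet_volume_ul: ~0.85 nL, typical for common ddPCR systems (assumption).
        """
        p = n_positive / n_total
        lam = -math.log(1.0 - p)          # mean copies per droplet
        return lam / droplet_volume_ul

    # Illustrative counts for two splice variants read in one reaction.
    print(ddpcr_concentration(1200, 15000))  # e.g. the alpha-/beta+ channel
    print(ddpcr_concentration(300, 15000))   # e.g. the alpha+/beta- channel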
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
In order to solve the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by different chaotic maps. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis depending on the key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
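Not the authors' scheme, but a toy permutation-substitution round in the same spirit, for intuition only: a logistic-map keystream for substitution and a keyed shuffle for permutation (key values are illustrative, and this construction is not secure):

    import numpy as np

    def logistic_keystream(x0, r, n):
        """Byte keystream from the logistic map x <- r*x*(1-x)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def encrypt(img, x0=0.3141, r=3.9999, seed=42):
        flat = img.ravel()
        perm = np.random.default_rng(seed).permutation(flat.size)  # permutation stage
        shuffled = flat[perm]
        ks = logistic_keystream(x0, r, flat.size)                  # substitution stage
        return (shuffled ^ ks).reshape(img.shape), perm

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    cipher, perm = encrypt(img)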
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides a high ratio of boundary-point density to computational cost, so high-resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data for four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on computed tomography (CT) examinations. We developed and tested a method that separates the two lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeted in particular at severe and multiple connections. The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while preventing the separation boundary from cutting into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm removes redundant processing.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Regional flow duration curves: Geostatistical techniques versus multivariate regression
Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.
2016-01-01
A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method Top-kriging, employing a linear weighted average of dimensionless empirical FDCs standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we term the total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly across flow regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. Differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.
An argument for the use of multiple segment stents in curved arteries.
Kasiri, Saeid; Kelly, Daniel J
2011-08-01
Stenting of curved arteries is generally perceived to be more challenging than stenting of straight vessels. Implanting multiple shorter stents rather than a single longer stent into such a curved artery is a promising concept, but little is known about the impact of such an approach. The objective of this study is to use the finite element method to evaluate the effectiveness of a multiple segment stent, rather than a single long stent, in dilating a curved artery. A double segment stent (DSS) and a single segment stent (SSS) were modeled and compared when expanded into a model of a curved artery. The model predicts that the DSS provides higher flexibility, more conformity, and lower recoil in comparison to the SSS. The volume of arterial tissue experiencing high levels of stress due to stent implantation is also reduced for the DSS. It is suggested that a multiple segment stenting system is a potential solution to the problem of higher rates of in-stent restenosis in curved arteries and mechanically challenging environments.
Comparing the index-flood and multiple-regression methods using L-moments
NASA Astrophysics Data System (ADS)
Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.
In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. The main objective of this study was to compare index-flood and multiple-regression analyses based on L-moments. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster analysis and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics, and homogeneity was tested using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as regional flood frequency methods. The results of the factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering approach, and the L-moments-based homogeneity test showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the generalised extreme value (GEV) distribution was identified as the most robust of the five candidate distributions for all the proposed sub-regions of the study area, and in general the GEV distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gave more reliable estimations for various flood magnitudes at different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin in central Iran. To estimate floods of various return periods for gauged catchments in the study area, the mean annual peak flood of the catchments may be multiplied by the corresponding values of the growth factors computed using the GEV distribution.
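A minimal sketch of the sample L-moments underlying such homogeneity tests, using the standard unbiased probability-weighted-moment estimators for the first two L-moments and the L-CV; the flood values are illustrative:

    import numpy as np

    def sample_l_moments(x):
        """Return (l1, l2, t) where t = l2/l1 is the L-CV."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        b0 = x.mean()                              # PWM beta_0
        b1 = np.sum((j - 1) / (n - 1) * x) / n     # PWM beta_1
        l1, l2 = b0, 2 * b1 - b0
        return l1, l2, l2 / l1

    # Illustrative annual peak floods (m^3/s) for one site.
    peaks = [120., 85., 210., 150., 95., 300., 175., 140., 110., 260.]
    print(sample_l_moments(peaks))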
MIDAS: Mining differentially activated subpaths of KEGG pathways from multi-class RNA-seq data.
Lee, Sangseon; Park, Youngjune; Kim, Sun
2017-07-15
Pathway-based analysis of high-throughput transcriptome data is a widely used approach to investigate biological mechanisms. Since a pathway comprises multiple functions, the recent approach is to determine condition-specific sub-pathways, or subpaths. However, there are several challenges. First, few existing methods utilize explicit gene expression information from RNA-seq. More importantly, subpath activity is usually an average of statistical scores, e.g., correlations, of edges in a candidate subpath, which fails to reflect gene expression quantity information. In addition, no existing method can handle multiple phenotypes. To address these technical problems, we designed and implemented an algorithm, MIDAS, that determines condition-specific subpaths, each of which has different activities across multiple phenotypes. MIDAS fully utilizes gene expression quantity information and network centrality information to determine condition-specific subpaths. To test the performance of our tool, we used TCGA breast cancer RNA-seq gene expression profiles with five molecular subtypes; 36 differentially activated subpaths were determined. The utility of our method was demonstrated in four ways: all 36 subpaths are well supported by the literature; the subpaths had good discriminant power for classifying the five cancer subtypes; they had prognostic power in terms of survival analysis; and, in a performance comparison with a recent subpath prediction method, PATHOME, our method identified more subpaths and many more genes that are well supported by the literature. http://biohealth.snu.ac.kr/software/MIDAS/.
Evaluation and Field Assessment of Bifacial Photovoltaic Module Power Rating Methodologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deline, Chris; MacAlpine, Sara; Marion, Bill
2016-11-21
1-sun power ratings for bifacial modules are currently undefined. This is partly because there is no standard definition of rear irradiance given 1000 Wm-2 on the front. Using field measurements and simulations, we evaluate multiple deployment scenarios for bifacial modules and provide details on the amount of irradiance that could be expected. A simplified case that represents a single module deployed under conditions consistent with existing 1-sun irradiance standards leads to a bifacial reference condition of 1000 Wm-2 Gfront and 130-140 Wm-2 Grear. For fielded systems of bifacial modules, Grear magnitude and spatial uniformity will be affected by self-shade from adjacent modules, varied ground cover, and ground-clearance height. A standard measurement procedure for bifacial modules is also currently undefined. A proposed international standard is under development, which provides the motivation for this work. Here, we compare outdoor field measurements of bifacial modules with irradiance on both sides with proposed indoor test methods where irradiance is only applied to one side at a time. The indoor method has multiple advantages, including controlled and repeatable irradiance and thermal environment, along with allowing the use of conventional single-sided flash test equipment. The comparison results are promising, showing that the indoor and outdoor methods agree within 1%-2% for multiple rear-irradiance conditions and bifacial module types.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory was indicated when the laboratory mean for that analyte was shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
Measuring Thermodynamic Properties of Metals and Alloys With Knudsen Effusion Mass Spectrometry
NASA Technical Reports Server (NTRS)
Copland, Evan H.; Jacobson, Nathan S.
2010-01-01
This report reviews Knudsen effusion mass spectrometry (KEMS) as it relates to thermodynamic measurements of metals and alloys. First, general aspects are reviewed, with emphasis on the Knudsen-cell vapor source and molecular beam formation, and mass spectrometry issues germane to this type of instrument are discussed briefly. The relationship between the vapor pressure inside the effusion cell and the measured ion intensity is the key to KEMS and is derived in detail. Then common methods used to determine thermodynamic quantities with KEMS are discussed. Enthalpies of vaporization, the fundamental measurement, are determined from the variation of relative partial pressure with temperature using the second-law method or by calculating a free energy of formation and subtracting the entropy contribution using the third-law method. For single-cell KEMS instruments, measurements can be used to determine the partial Gibbs free energy if the sensitivity factor remains constant over multiple experiments. The ion-current ratio method and dimer-monomer method are also viable in some systems. For a multiple-cell KEMS instrument, activities are obtained by direct comparison with a suitable component reference state or a secondary standard. Internal checks for correct instrument operation and general procedural guidelines also are discussed. Finally, general comments are made about future directions in measuring alloy thermodynamics with KEMS.
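As context for the second- and third-law terminology: in KEMS the partial pressure is recovered from the measured ion intensity I and temperature T as p = k I T, with k the instrument sensitivity factor, so the second-law enthalpy of vaporization follows from a van 't Hoff-type slope,

    ln(I T) = -(Delta_vap H / R) * (1/T) + const,

i.e., plotting ln(IT) against 1/T gives a slope of -Delta_vap H / R, while the third-law method instead computes the enthalpy from each individual pressure measurement together with tabulated Gibbs energy functions.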
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems, which can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations: our proposed method resolved the conflicts of 100 robots in less than 1.23 s in a cluttered environment with 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. Finally, we compared our approach with existing methods in the literature, both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is also computationally efficient, scalable to a very large number of robots, and guarantees the live and smooth motion of robots.
Disseminating the unit of mass from multiple primary realisations
NASA Astrophysics Data System (ADS)
Nielsen, Lars
2016-12-01
When a new definition of the kilogram is adopted, as expected in 2018, the unit of mass will be realised by the watt balance method, the x-ray crystal density method or perhaps other primary methods still to be developed. So far, the standard uncertainties associated with the available primary methods are at least one order of magnitude larger than the standard uncertainty associated with mass comparisons using mass comparators, so differences in primary realisations of the kilogram are easily detected, whereas many National Metrology Institutes would have to increase their calibration and measurement capabilities (CMCs) if they were traceable to a single primary realisation. This paper presents a scheme for obtaining traceability to multiple primary realisations of the kilogram using a small group of stainless steel 1 kg weights, which are allowed to change their masses over time in a way known to be realistic, and which are calibrated and stored in air. An analysis of the scheme shows that if the relative standard uncertainties of future primary realisations are equal to the relative standard uncertainties of the present methods used to measure the Planck constant, the unit of mass can be disseminated with a standard uncertainty of less than 0.015 mg, which matches the smallest CMCs currently claimed for the calibration of 1 kg weights.
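A toy sketch of the dissemination idea, combining several primary realisations by inverse-variance weighting; the offsets and uncertainties below are invented for illustration:

    import numpy as np

    # Hypothetical offsets from 1 kg measured by three primary realisations (mg)
    x = np.array([0.010, -0.005, 0.020])
    u = np.array([0.020, 0.025, 0.030])   # standard uncertainties (mg)

    w = 1.0 / u**2
    x_bar = np.sum(w * x) / np.sum(w)     # weighted mean
    u_bar = np.sqrt(1.0 / np.sum(w))      # its standard uncertainty
    print(f"consensus offset: {x_bar:.4f} mg, u = {u_bar:.4f} mg")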
A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images
Tang, Yunwei; Jing, Linhai; Ding, Haifeng
2017-01-01
The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and the limitations of spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. Indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs between the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. The comparisons and visual analyses verified the effectiveness of the proposed method and demonstrated its reliability and improvements with respect to the other methods. PMID:29064416
Voxelwise multivariate analysis of multimodality magnetic resonance imaging
Naylor, Melissa G.; Cardenas, Valerie A.; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2015-01-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remains a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. PMID:23408378
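A sketch of approach (1), multiple univariate fits with Bonferroni adjustment across outcomes at a single voxel; the data are simulated and all names are illustrative (a full multivariate fit would instead model the outcome covariance jointly):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(3)
    n = 80
    X = sm.add_constant(rng.normal(size=(n, 1)))        # predictor, e.g. diagnosis
    # Three correlated MRI outcomes at one voxel (structure, perfusion, diffusion)
    cov = [[1, .5, .5], [.5, 1, .5], [.5, .5, 1]]
    Y = rng.multivariate_normal([0, 0, 0], cov, n)
    Y[:, 0] += 0.5 * X[:, 1]                            # true effect in outcome 1 only

    # One univariate model per outcome, then Bonferroni over the outcomes.
    pvals = [sm.OLS(Y[:, k], X).fit().pvalues[1] for k in range(3)]
    reject, p_adj, *_ = multipletests(pvals, method="bonferroni")
    print(p_adj, reject)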
Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi
2018-05-21
With the rapid increase in the number of protein structures in the Protein Data Bank, it has become urgent to develop algorithms for efficient protein structure comparison. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple structure alignment. The database search is sped up by a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform competing methods in both modules, in terms of speed and accuracy. One of the unique features of the server is the interplay between database search and multiple structure alignment: the server provides service not only for fast database search, but also for accurate multiple structure alignment of the structures found by the search. A database search takes about 2-5 min for a structure of medium size (∼300 residues); a multiple structure alignment takes a few seconds for ∼10 structures of medium size. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.
NASA Astrophysics Data System (ADS)
Koo, A.; Clare, J. F.
2012-06-01
Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
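For reference, the GLS estimator involved is standard; a minimal numpy sketch, assuming a design matrix X linking the measurements y to artefact values and a full covariance matrix V of the measurements (the toy V here is diagonal, but inter-laboratory correlations would appear as off-diagonal terms):

    import numpy as np

    def gls(X, y, V):
        """Generalized least squares: estimates and their covariance."""
        Vinv = np.linalg.inv(V)
        cov = np.linalg.inv(X.T @ Vinv @ X)
        beta = cov @ X.T @ Vinv @ y
        return beta, cov

    # Toy comparison: two artefacts, three labs each measuring both.
    # Parameters: [artefact1, artefact2]; per-lab bias terms omitted for brevity.
    X = np.array([[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]], float)
    y = np.array([1.02, 2.01, 0.99, 1.98, 1.01, 2.03])
    V = np.diag([0.01, 0.01, 0.02, 0.02, 0.015, 0.015])
    beta, cov = gls(X, y, V)
    print(beta, np.sqrt(np.diag(cov)))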
NASA Technical Reports Server (NTRS)
Filyushkin, V. V.; Madronich, S.; Brasseur, G. P.; Petropavlovskikh, I. V.
1994-01-01
Based on a derivation of the two-stream daytime-mean equations of radiative flux transfer, a method for computing daytime-mean actinic fluxes in an absorbing and scattering, vertically inhomogeneous atmosphere is suggested. The method applies direct daytime integration of the particular solutions of the two-stream approximations or of the source functions, and it is valid for any duration of the averaging period. The merit of the method is that the multiple scattering computation is carried out only once for the whole averaging period, and it can be implemented with a number of widely used two-stream approximations. The method agrees with the results obtained with 200-point multiple scattering calculations. It was also tested in runs with a 1-km cloud layer with an optical depth of 10, as well as with an aerosol background. Comparison of the results obtained for a cloud subdivided into 20 layers with those obtained for a one-layer cloud with the same optical parameters showed that direct integration of the particular solutions possesses an 'analytical' accuracy. In the case of source function interpolation, the actinic fluxes calculated above the one-layer and 20-layer clouds agreed within 1%-1.5%, while below the cloud they may differ by up to 5% in the worst case. Ways of enhancing the accuracy (in a 'two-stream sense') and the computational efficiency of the method are discussed.
A comparison of experiment and theory for sound propagation in variable area ducts
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, C. J.
1980-01-01
An experimental and analytical program has been carried out to evaluate sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients. The analytical program employs a computer code based on the method of multiple scales to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. Detailed comparisons between the analytical predictions and the experimental measurements have been made. The circumferential variations of pressure amplitudes and phases at several axial positions have been examined in straight and variable area ducts, with hard walls and lined sections, and with and without a mean flow. Reasonable agreement between the theoretical and experimental results has been found.
Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches
NASA Astrophysics Data System (ADS)
Hooper, A.
2006-12-01
InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and to reduce error terms present in single interferograms. There are currently two broad categories of methods that deal with multiple images: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels that are dominated by a single scatterer best meet these criteria; therefore, images are processed at full resolution both to increase the chance of there being only one dominant scatterer present and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is less optimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that forms interferograms only from pairs of images with a small difference in look angle makes better sense, and resolution can be sacrificed to reduce the effects of the look-angle difference by band-pass filtering. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach, however, fails to take advantage of two benefits of processing multiple acquisitions that are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being time. We have therefore developed a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found. After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.
NASA Astrophysics Data System (ADS)
Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.
2014-07-01
In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional learning. The participants were from three first-year university classes, numbering 115 students in total. Each group received guided, unguided, or conventional learning methods in one of the three different topics, namely number systems, functions, and graphing. The students' academic performance differed according to the type of learning. Evaluation of the three methods revealed that only guided hyperlearning and conventional learning were appropriate methods for the psychomotor aspects of drawing in the graphing topic. There was no significant difference between the methods when learning the cognitive aspects involved in the number systems topic and the functions topic.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
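The word-based (k-mer) family of alignment-free methods discussed here reduces each sequence to a k-mer frequency vector and compares vectors directly. A minimal sketch, assuming DNA sequences over the ACGT alphabet and a simple Euclidean distance; this is a generic illustration, not the decaf+py implementation.

    import itertools
    import numpy as np

    def kmer_profile(seq, k=4, alphabet="ACGT"):
        """Normalized k-mer frequency vector for one sequence."""
        kmers = ["".join(p) for p in itertools.product(alphabet, repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        counts = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            counts[index[seq[i:i + k]]] += 1
        return counts / counts.sum()

    def kmer_distance(seq_a, seq_b, k=4):
        return np.linalg.norm(kmer_profile(seq_a, k) - kmer_profile(seq_b, k))

    # Pairwise distances can then be fed to a distance-based tree builder
    # such as neighbor joining.
    print(kmer_distance("ACGTACGTACGTGGCA", "ACGTACGTTCGTGGCA", k=3))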
Phylo-VISTA: Interactive visualization of multiple DNA sequence alignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.
The power of multi-sequence comparison for biological discovery is well established. The need for new capabilities to visualize and compare cross-species alignment data is intensified by the growing number of genomic sequence datasets being generated for an ever-increasing number of organisms. To be efficient, these visualization algorithms must consistently accommodate a wide range of evolutionary distances within a comparison framework based upon phylogenetic relationships. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing a similarity measure for multiple DNA sequences. The complexity of visual presentation is effectively organized using a framework based upon interspecies phylogenetic relationships. The phylogenetic organization supports rapid, user-guided interspecies comparison. To aid in navigation through large sequence datasets, Phylo-VISTA leverages concepts from VISTA that provide a user with the ability to select and view data at varying resolutions. The combination of multiresolution data visualization and analysis, together with the phylogenetic framework for interspecies comparison, produces a highly flexible and powerful tool for visual data analysis of multiple sequence alignments. Availability: Phylo-VISTA is available at http://www-gsd.lbl.gov/phylovista. It requires an Internet browser with Java Plugin 1.4.2 and it is integrated into the global alignment program LAGAN at http://lagan.stanford.edu
Tennen, H; Hall, J A; Affleck, G
1995-05-01
Personality and social psychological studies of depression and depressive phenomena have become more methodologically sophisticated in recent years. In response to earlier problems in this literature, investigators have formulated sound suggestions for research designs. Studies of depression published in the Journal of Personality and Social Psychology (JPSP) between 1988 and 1993 were reviewed to evaluate how well these recommendations have been followed. Forty-one articles were examined for adherence to 3 suggestions appearing consistently in the literature: (a) multiple assessment periods, (b) multiple assessment methods, and (c) appropriate comparison groups. The studies published in JPSP have not adhered well to these standards. The authors recommend resetting minimum methodological criteria for studies of depression published in the premier journal in personality and social psychology.
Horrocks, Erin L; Morgan, Robert L
2009-01-01
The authors compare two methods of identifying job preferences for individuals with significant intellectual disabilities. Three individuals with intellectual disabilities between the ages of 19 and 21 participated in a video-based preference assessment and a multiple stimulus without replacement (MSWO) assessment. Stimulus preference assessment procedures typically involve giving participants access to the selected stimuli to increase the probability that participants will associate the selected choice with the actual stimuli. Although individuals did not have access to the selected stimuli in the video-based assessment, results indicated that both assessments identified the same highest preference job for all participants. Results are discussed in terms of using a video-based assessment to accurately identify job preferences for individuals with developmental disabilities.
Stevens, John R; Jones, Todd R; Lefevre, Michael; Ganesan, Balasubramanian; Weimer, Bart C
2017-01-01
Microbial community analysis experiments to assess the effect of a treatment intervention (or environmental change) on the relative abundance levels of multiple related microbial species (or operational taxonomic units) simultaneously using high throughput genomics are becoming increasingly common. Within the framework of the evolutionary phylogeny of all species considered in the experiment, this translates to a statistical need to identify the phylogenetic branches that exhibit a significant consensus response (in terms of operational taxonomic unit abundance) to the intervention. We present the R software package SigTree, a collection of flexible tools that make use of meta-analysis methods and regular expressions to identify and visualize significantly responsive branches in a phylogenetic tree, while appropriately adjusting for multiple comparisons.
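The branch-level idea can be sketched by combining the per-OTU p-values under a branch with Fisher's method. SigTree's actual statistics, tree handling, and multiple-comparison adjustments are richer, so treat this as a minimal illustration only.

    import numpy as np
    from scipy import stats

    def branch_pvalue(tip_pvalues):
        """Fisher's combined probability test over the tips under one branch."""
        stat, p = stats.combine_pvalues(tip_pvalues, method="fisher")
        return p

    # Hypothetical branch with four descendant OTUs:
    p_tips = np.array([0.04, 0.01, 0.20, 0.03])
    print(branch_pvalue(p_tips))   # small => consensus response under this branch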
Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.
Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni
2018-06-15
Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
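The chaining of one-step-ahead distributions can be sketched as follows, with a kernel density of unconditional week-to-week changes standing in for the paper's conditional nonparametric delta densities; the names and data are hypothetical.

    import numpy as np
    from scipy.stats import gaussian_kde

    def simulate_trajectories(history, horizon, n_sims=1000, seed=0):
        """Chain draws from an (unconditional) delta density into trajectories."""
        rng = np.random.default_rng(seed)
        deltas = np.diff(history)
        kde = gaussian_kde(deltas)              # density of week-to-week changes
        trajs = np.empty((n_sims, horizon))
        state = np.full(n_sims, history[-1], dtype=float)
        for t in range(horizon):
            state = state + kde.resample(n_sims, seed=rng)[0]
            trajs[:, t] = state
        return trajs                            # distributional forecast

    history = np.array([1.0, 1.2, 1.5, 2.1, 2.8, 3.0, 2.7])
    print(simulate_trajectories(history, horizon=4).mean(axis=0))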
The role of perceived discrimination on active aging.
Fernandez-Ballesteros, Rocio; Olmos, Ricardo; Santacreu, Marta; Bustillos, Antonio; Molina, Maria Angeles
2017-07-01
Among older adults, perceived age discrimination is highly associated with unhealthy outcomes and dissatisfaction. Active aging is a multidimensional concept described by a set of characteristics, particularly health, positive mood, and control; most importantly, active aging is currently at the core of public policies. The aim of the present study was to test to what extent perceived discrimination influences active aging. Methods: A total of 2005 older adults in three representative samples from regions of Germany, Mexico and Spain participated; they were assessed on active aging and perceived discrimination. First, active aging was defined as high reported health, life satisfaction and self-perception of aging. Second, structural equation modelling in the total sample was used to test the hypothesis of a direct negative link between perceived age discrimination and active aging. Finally, multiple group comparison performed through structural equation modelling also provided support for the proposed negative association between perceived discrimination and active aging. In spite of the differences found among the three countries in both the active aging variables and perceived age discrimination, the multiple group comparison indicates that, regardless of culture, perceived discrimination is a negative predictor of active aging. Copyright © 2017 Elsevier B.V. All rights reserved.
Bruns, W; Steinborn, F; Menzel, R; Staritz, B; Bibergeil, H
1990-03-15
Whole-day continuous subcutaneous insulin infusion (CSII) with portable pumps, combined with daily blood glucose self-monitoring, provides more stable and favourable glycaemic control than multiple injections in labile type I diabetics. This success is mainly attributable to the continuous replacement of basal secretion, particularly during the nocturnal fasting phase. In this study, the effect on glycaemia of exclusively nocturnal CSII was investigated, with multiple insulin injections maintained during the day. In a group of 18 type I diabetics, nocturnal CSII led to a significant improvement in glycaemia compared with evening administration of intermediate-acting insulin (p < 0.01), and in particular to a decrease in fasting blood glucose (p < 0.05). In two case observations, nocturnal CSII proved superior to all other conventional methods for controlling nocturnal glycaemia (depot insulin, nocturnal injection of regular insulin). Nocturnal CSII is therefore an alternative, albeit one to be used more rarely, that may be considered when whole-day CSII is temporarily not desired.
López-Guerra, Enrique A
2017-01-01
We explore the contact problem of a flat-end indenter intermittently penetrating a generalized viscoelastic surface containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and the virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where the tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations, as is possible for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method. PMID:29114450
Omics analysis of mouse brain models of human diseases.
Paban, Véronique; Loriod, Béatrice; Villard, Claude; Buee, Luc; Blum, David; Pietropaolo, Susanna; Cho, Yoon H; Gory-Faure, Sylvie; Mansour, Elodie; Gharbi, Ali; Alescio-Lautier, Béatrice
2017-02-05
The identification of common gene/protein profiles related to brain alterations, if they exist, may indicate the convergence of the pathogenic mechanisms driving brain disorders. Six genetically engineered mouse lines modelling neurodegenerative diseases and neuropsychiatric disorders were considered. Omics approaches, including transcriptomic and proteomic methods, were used. The gene/protein lists were used for inter-disease comparisons and further functional and network investigations. When the inter-disease comparison was performed using the gene symbol identifiers, the number of genes/proteins involved in multiple diseases decreased rapidly. Thus, no genes/proteins were shared by all 6 mouse models. Only one gene/protein (Gfap) was shared among 4 disorders, providing strong evidence that a common molecular signature does not exist among brain diseases. The inter-disease comparison of functional processes showed the involvement of a few major biological processes indicating that brain diseases of diverse aetiologies might utilize common biological pathways in the nervous system, without necessarily involving similar molecules. Copyright © 2016 Elsevier B.V. All rights reserved.
High dose rate brachytherapy source measurement intercomparison.
Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette
2017-06-01
This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR 192Ir source. Two separate groups (consisting of 15 centres) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single 192Ir source using their own equipment and local protocols. Results were compared to the 192Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results with the maximum deviation in measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well-chambers used for 192Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, D.G.; Parks, J.M.
1984-04-01
Silhouette shapes are two-dimensional projections of three-dimensional objects such as sand grains, gravel, and fossils. Within-the-margin markings such as chamber boundaries, sutures, or ribs are ignored. Comparisons between populations of objects from similar and different origins (e.g., environments, species or genera, growth series) are aided by quantifying the shapes. The Multiple Rotations Method (MRM) uses a variation of "eigenshapes", which is capable of distinguishing most of the subtle variations that the "trained eye" can detect. With a video-digitizer and microcomputer, MRM is fast, more accurate, and more objective than the human eye. The resulting shape descriptors comprise 5 or 6 numbers per object that can be stored and retrieved to compare with similar descriptions of other objects. The original shape outlines can be reconstituted from these few numerical descriptors sufficiently well for gross recognition. Thus, a semi-automated data-retrieval system becomes feasible, with silhouette-shape descriptions as one of several recognition criteria. MRM consists of four "rotations": rotation about a center to a comparable orientation; a principal-components rotation to reduce the many original shape descriptors to a few; a VARIMAX orthogonal-factor rotation to achieve simple structure; and a rotation to achieve factor scores on individual objects. A variety of subtly different shapes, including sand grains from several locations, ages, and environments, and fossils of several types, illustrates the feasibility of quantitative comparisons by MRM.
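The heart of an eigenshape-style reduction is a principal-components rotation of aligned outline descriptors. A minimal sketch with hypothetical outlines sampled as radii at fixed angles; this is not the MRM code, which adds the VARIMAX and scoring rotations.

    import numpy as np

    # Each row: one object's outline, sampled as 64 radii at equally spaced
    # angles after centering and rotating to a comparable orientation.
    rng = np.random.default_rng(1)
    outlines = rng.normal(1.0, 0.1, size=(30, 64))

    # Principal-components rotation: a few descriptors capture most variance.
    X = outlines - outlines.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :6] * s[:6]          # 5-6 numbers per object, as in MRM
    var_explained = (s[:6] ** 2).sum() / (s ** 2).sum()

    # Outlines can be approximately reconstituted from the few scores:
    recon = scores @ Vt[:6] + outlines.mean(axis=0)
    print(scores.shape, round(float(var_explained), 3))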
'Dem DEMs: Comparing Methods of Digital Elevation Model Creation
NASA Astrophysics Data System (ADS)
Rezza, C.; Phillips, C. B.; Cable, M. L.
2017-12-01
Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small scale details presented from this data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets, and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its decreased smoothing, which is borne out by preliminary offset calculations. In the future, we plan to expand upon this preliminary work with more regions of Europa, continue quantifying the height differences and relative accuracy of each method, and generate more DEMs to expand our available comparison regions.
Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je
2014-09-07
Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. Noise in the measured Bz complicates recovery of magnetic flux density maps, resulting in lower quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz, and the third step is to remove a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal-to-noise ratio ('defects') verified the proposed method. Experimental results from a real phantom containing three different kinds of anomalies demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by up to 3-4 times in each anomaly region in comparison to the conventional method.
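The first step, combining measurements from multiple receiver coils, can be illustrated with standard inverse-variance weighting. This is a simplified sketch under the assumption that per-coil noise levels are known; the published weighting also involves coil sensitivity maps and the subsequent denoising steps.

    import numpy as np

    def combine_coils(bz_maps, noise_sd):
        """Inverse-variance weighted combination of per-coil Bz measurements.

        bz_maps : (n_coils, ny, nx) measured Bz from each receiver coil
        noise_sd: (n_coils,) noise standard deviation per coil
        """
        w = 1.0 / np.asarray(noise_sd) ** 2
        w = w / w.sum()
        return np.tensordot(w, bz_maps, axes=1)   # (ny, nx) combined map

    rng = np.random.default_rng(0)
    truth = np.ones((8, 8))
    maps = np.stack([truth + rng.normal(0, s, truth.shape) for s in (0.1, 0.2, 0.4)])
    combined = combine_coils(maps, noise_sd=[0.1, 0.2, 0.4])
    print(float(np.abs(combined - truth).mean()))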
The Influence of Counterfactual Comparison on Fairness in Gain-Loss Contexts.
Li, Qi; Wang, Chunsheng; Taxer, Jamie; Yang, Zhong; Zheng, Ya; Liu, Xun
2017-01-01
Fairness perceptions may be affected by counterfactual comparisons. Although certain studies using a two-player ultimatum game (UG) have shown that comparison with the proposers influences the responders' fairness perceptions in a gain context, the effect of counterfactual comparison in a UG with multiple responders or proposers remains unclear, especially in a loss context. To resolve these issues, this study used a modified three-player UG with multiple responders in Experiment 1 and multiple proposers in Experiment 2 to examine the influence of counterfactual comparison on fairness-related decision-making in gain and loss contexts. The two experiments consistently showed that regardless of the gain or loss context, the level of inequality of the offer and counterfactual comparison influenced acceptance rates (ARs), response times (RTs), and fairness ratings (FRs). If the offers that were received were better than the counterfactual offers, unequal offers were more likely to be accepted than equal offers, and participants were more likely to report higher FRs and to make decisions more quickly. In contrast, when the offers they received were worse than the counterfactual offers, participants were more likely to reject unequal offers than equal offers, reported lower FRs, and made decisions more slowly. These results demonstrate that responders' fairness perceptions are influenced by not only comparisons of the absolute amount of money that they would receive but also specific counterfactuals from other proposers or responders. These findings improve our understanding of fairness perceptions.
2008-01-01
Background: There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. Objective: To develop a tool kit to measure the impact of different EPR system features on the consultation. Methods: We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. Results: We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and for time-stamping speech; the transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use these data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS PCS (Egton Medical Information Systems Limited, Leeds, UK) (P = .007), iSoft Synergy (iSOFT, Banbury, UK) (P = .014), and INPS Vision (INPS, London, UK) (P = .006) facilitating faster coding. In contrast, prescribing was fastest with EMIS LV (mean 23.7 s, 95% CI 20.5-26.8), but nonparametric comparison showed no statistically significant difference. UML sequence diagrams showed that the simplest BP recording interface was not the easiest to use, as users spent longer navigating or looking up previous blood pressures separately. Complex interfaces with free-text boxes left clinicians unsure of what to add. Conclusions: The ALFA method allows precise observation of the clinical consultation. It enables rigorous comparison of core elements of EPR systems. Pilot data suggest its capacity to demonstrate differences between systems. Its outputs could provide the evidence base for making more objective choices between systems. PMID:18812313
Braatne, Jeffrey H.; Goater, Lori A.; Blair, Charles L.
2007-01-01
River damming provides a dominant human impact on river environments worldwide, and while local impacts of reservoir flooding are immediate, subsequent ecological impacts downstream can be extensive. In this article, we assess seven research strategies for analyzing the impacts of dams and river flow regulation on riparian ecosystems. These include spatial comparisons of (1) upstream versus downstream reaches, (2) progressive downstream patterns, or (3) the dammed river versus an adjacent free-flowing or differently regulated river(s). Temporal comparisons consider (4) pre- versus post-dam, or (5) sequential post-dam conditions. However, spatial comparisons are complicated by the fact that dams are not randomly located, and temporal comparisons are commonly limited by sparse historic information. As a result, comparative approaches are often correlative and vulnerable to confounding factors. To complement these analyses, (6) flow or sediment modifications can be implemented to test causal associations. Finally, (7) process-based modeling represents a predictive approach incorporating hydrogeomorphic processes and their biological consequences. In a case study of Hells Canyon, the upstream versus downstream comparison is confounded by a dramatic geomorphic transition. Comparison of the multiple reaches below the dams should be useful, and the comparison of Snake River with the adjacent free-flowing Salmon River may provide the strongest spatial comparison. A pre- versus post-dam comparison would provide the most direct study approach, but pre-dam information is limited to historic reports and archival photographs. We conclude that multiple study approaches are essential to provide confident interpretations of ecological impacts downstream from dams, and propose a comprehensive study for Hells Canyon that integrates multiple research strategies. PMID:18043964
2010-01-01
Background: Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods: A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly suited to detecting nycthemeral rhythms in medical data. Additionally, a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects equality of the periodic patterns. Mathematical descriptions of both the detection and the comparison method are provided. Results: Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven weekday patterns were compared to each other, revealing only two distinct nycthemeral rhythms, one for Friday and Saturday and one for the other weekdays. PMID:21059197
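Detection via harmonic regression amounts to fitting sine and cosine terms at the 24-hour period and testing them against an intercept-only model. A minimal sketch with simulated hourly counts; the published method, including its ANOVA-based comparison of patterns, is more general.

    import numpy as np
    from scipy import stats

    def harmonic_test(t_hours, y, period=24.0):
        """F-test for a single harmonic at the given period."""
        X1 = np.column_stack([np.ones_like(t_hours),
                              np.cos(2 * np.pi * t_hours / period),
                              np.sin(2 * np.pi * t_hours / period)])
        X0 = X1[:, :1]                               # intercept-only null model
        rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        rss0, rss1 = rss(X0), rss(X1)
        df1, df2 = 2, len(y) - 3
        F = ((rss0 - rss1) / df1) / (rss1 / df2)
        return F, stats.f.sf(F, df1, df2)            # large F, small p => rhythm

    t = np.arange(0, 72, 1.0)                        # three days of hourly data
    y = 5 + 2 * np.cos(2 * np.pi * (t - 20) / 24) \
        + np.random.default_rng(2).normal(0, 1, t.size)
    print(harmonic_test(t, y))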
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of the results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Integrative Exploratory Analysis of Two or More Genomic Datasets.
Meng, Chen; Culhane, Aedin
2016-01-01
Exploratory analysis is an essential step in the analysis of high throughput data. Multivariate approaches such as correspondence analysis (CA), principal component analysis, and multidimensional scaling are widely used in the exploratory analysis of a single dataset. Modern biological studies often assay multiple types of biological molecules (e.g., mRNA, protein, phosphoproteins) on the same set of biological samples, thereby creating multiple different types of omics data or multiassay data. Integrative exploratory analysis of these multiple omics datasets is required to leverage the potential of multiple omics studies. In this chapter, we describe the application of co-inertia analysis (CIA; for analyzing two datasets) and multiple co-inertia analysis (MCIA; for three or more datasets) to address this problem. These methods are powerful yet simple multivariate approaches that represent samples using a lower number of variables, allowing easier identification of the correlated structure in and between multiple high dimensional datasets. Graphical representations can be employed for this purpose. In addition, the methods simultaneously project samples and variables (genes, proteins) onto the same lower dimensional space, so the most variant variables from each dataset can be selected and associated with samples, which can be further used to facilitate biological interpretation and pathway analysis. We applied CIA to explore the concordance between mRNA and protein expression in a panel of 60 tumor cell lines from the National Cancer Institute. In the same 60 cell lines, we used MCIA to perform a cross-platform comparison of mRNA gene expression profiles obtained on four different microarray platforms. Last, as an example of integrative analysis of multiassay or multi-omics data, we analyzed transcriptomic, proteomic, and phosphoproteomic data from pluripotent (iPS) and embryonic stem (ES) cell lines.
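For two centered data tables measured on the same samples, the core of co-inertia analysis reduces to a singular value decomposition of their cross-covariance matrix. A minimal sketch with simulated data; the weighting and duality-diagram details of full CIA implementations are omitted.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 60                                    # e.g., 60 cell lines
    mrna = rng.normal(size=(n, 200))          # mRNA expression (genes)
    prot = 0.5 * mrna[:, :80] + rng.normal(size=(n, 80))  # correlated proteins

    Xc = mrna - mrna.mean(axis=0)
    Yc = prot - prot.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc / (n - 1), full_matrices=False)

    # Sample scores on the first co-inertia axis from each dataset:
    scores_x = Xc @ U[:, 0]
    scores_y = Yc @ Vt[0]
    # High correlation indicates concordant structure between the two assays.
    print(float(np.corrcoef(scores_x, scores_y)[0, 1]))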
NASA Astrophysics Data System (ADS)
Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung
2017-09-01
Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
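A small simulation conveys the model class: a Brownian degradation path with a nonlinear drift, and a Monte Carlo estimate of the first hitting time of a failure threshold as the remaining-useful-life distribution. The drift form, parameters, and threshold below are hypothetical, and the filtering of hidden states is not shown.

    import numpy as np

    def simulate_rul(x0, theta, sigma, threshold, dt=1.0,
                     n_paths=5000, t_max=500, seed=4):
        """Monte Carlo first-hitting times for dX = a*exp(b*t) dt + sigma dW."""
        a, b = theta
        rng = np.random.default_rng(seed)
        x = np.full(n_paths, x0, dtype=float)
        hit = np.full(n_paths, np.nan)
        t = 0.0
        for _ in range(int(t_max / dt)):
            t += dt
            drift = a * np.exp(b * t)                 # nonlinear drift term
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
            newly = np.isnan(hit) & (x >= threshold)
            hit[newly] = t
        return hit[~np.isnan(hit)]                    # paths that never hit are dropped

    rul = simulate_rul(x0=0.0, theta=(0.01, 0.01), sigma=0.05, threshold=3.0)
    print(np.median(rul), np.percentile(rul, [5, 95]))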
Mendoza, G A; Prabhu, R
2000-12-01
This paper describes an application of multiple criteria analysis (MCA) in assessing criteria and indicators adapted for a particular forest management unit. The methods include: ranking, rating, and pairwise comparisons. These methods were used in a participatory decision-making environment where a team representing various stakeholders and professionals used their expert opinions and judgements in assessing different criteria and indicators (C&I) on the one hand, and how suitable and applicable they are to a forest management unit on the other. A forest concession located in Kalimantan, Indonesia, was used as the site for the case study. Results from the study show that the multicriteria methods are effective tools that can be used as structured decision aids to evaluate, prioritize, and select sets of C&I for a particular forest management unit. Ranking and rating approaches can be used as a screening tool to develop an initial list of C&I. Pairwise comparison, on the other hand, can be used as a finer filter to further reduce the list. In addition to using these three MCA methods, the study also examines two commonly used group decision-making techniques, the Delphi method and the nominal group technique. Feedback received from the participants indicates that the methods are transparent, easy to implement, and provide a convenient environment for participatory decision-making.
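For the pairwise-comparison step, priorities are commonly derived from the principal eigenvector of a reciprocal judgement matrix (the AHP-style calculation). The abstract does not state the exact procedure used, so the following is a generic sketch with a hypothetical matrix for three criteria.

    import numpy as np

    # Reciprocal matrix: entry [i, j] = how much more important criterion i is than j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                     # priority weights for the three criteria
    # Consistency ratio: CI / RI, with Saaty's random index RI = 0.58 for n = 3.
    consistency_ratio = (eigvals.real[k] - len(A)) / (len(A) - 1) / 0.58
    print(w, round(float(consistency_ratio), 3))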
Ryan, K; Williams, D Gareth; Balding, David J
2016-11-01
Many DNA profiles recovered from crime scene samples are of a quality that does not allow them to be searched against, nor entered into, databases. We propose a method for the comparison of profiles arising from two DNA samples, one or both of which can have multiple donors and be affected by low DNA template or degraded DNA. We compute likelihood ratios to evaluate the hypothesis that the two samples have a common DNA donor, and hypotheses specifying the relatedness of two donors. Our method uses a probability distribution for the genotype of the donor of interest in each sample. This distribution can be obtained from a statistical model, or we can exploit the ability of trained human experts to assess genotype probabilities, thus extracting much information that would be discarded by standard interpretation rules. Our method is compatible with established methods in simple settings, but is more widely applicable and can make better use of information than many current methods for the analysis of mixed-source, low-template DNA profiles. It can accommodate uncertainty arising from relatedness instead of or in addition to uncertainty arising from noisy genotyping. We describe a computer program GPMDNA, available under an open source licence, to calculate LRs using the method presented in this paper. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
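The single-locus core of such a comparison follows the standard form LR = sum over g of w1(g)*w2(g)/pi(g), where the wi are the genotype probability distributions for each sample's donor and pi is the population genotype frequency. A minimal sketch with hypothetical numbers; this is not the GPMDNA program, relatedness hypotheses and subpopulation correction are omitted, and per-locus LRs would be multiplied across loci.

    def common_donor_lr(w1, w2, pop_freq):
        """Single-locus LR for 'same donor' vs 'two unrelated donors'.

        w1, w2   : dicts genotype -> posterior probability for each sample's donor
        pop_freq : dict genotype -> population frequency (the prior)
        """
        return sum(w1[g] * w2.get(g, 0.0) / pop_freq[g] for g in w1 if w1[g] > 0)

    # Hypothetical locus with three genotypes:
    pop = {"AA": 0.25, "AB": 0.50, "BB": 0.25}
    sample1 = {"AA": 0.7, "AB": 0.3, "BB": 0.0}      # good-quality profile
    sample2 = {"AA": 0.5, "AB": 0.4, "BB": 0.1}      # low-template, more uncertain
    print(common_donor_lr(sample1, sample2, pop))    # > 1 supports a common donor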
Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.
Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng
2016-09-27
The increasing number of 3D objects in various applications has increased the requirement for effective and efficient 3D object retrieval methods, which attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. With the increasing applications, 3D objects come from different areas. In such circumstances, how to conduct object retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models in this paper. In our method, multiple views are first extracted from each object, and then the dense visual features are extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationship among these features with respected to varied topic numbers in the topic model. In this way, each object can be represented by a set of bag of topics. To compare the objects, we first conduct topic clustering for the basic topics from two datasets, and then generate the common topic dictionary for new representation. Then, the two objects can be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval experimental results and comparison with existing methods demonstrate the effectiveness of the proposed method.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
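The data-preparation half of the proposed solution is simple to emulate: fill a completely missing manifest variable with pseudo-random normal deviates. A minimal sketch with a hypothetical variable layout; the accompanying model constraints, which must keep the random column from carrying information, depend on the SEM software and on the article's exact specification.

    import numpy as np

    rng = np.random.default_rng(7)
    n_group2 = 150
    # Group 2 data matrix: columns v1-v4, but v4 was never administered.
    data_g2 = rng.normal(size=(n_group2, 4))
    data_g2[:, 3] = rng.standard_normal(n_group2)  # pseudo-random deviates for v4

    # The multiple-group model is then specified so that v4 contributes no
    # information in group 2, enabling software that requires the same number
    # of manifest variables in every group to run.
    print(data_g2[:5].round(2))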
Multiple signal classification algorithm for super-resolution fluorescence microscopy
Agarwal, Krishna; Macháň, Radek
2016-01-01
Single-molecule localization techniques are restricted by long acquisition and computational times, or the need for special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique for wide-field fluorescence microscopy, which we call the multiple signal classification algorithm, that has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power, and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance in comparison with single-molecule localization techniques and four contemporary statistical super-resolution methods in experiments on in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858
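For readers unfamiliar with MUSIC, the classic form of the algorithm separates the signal and noise subspaces of a measurement covariance matrix and locates sources where the steering vector is nearly orthogonal to the noise subspace. The textbook one-dimensional version (direction finding with a sensor array) is sketched below; the microscopy formulation of the paper differs in its measurement model.

    import numpy as np
    from scipy.signal import find_peaks

    def music_spectrum(snapshots, n_sources, scan_angles, d=0.5):
        """Classic MUSIC pseudospectrum for a uniform linear array.
        snapshots: (n_sensors, n_snapshots) complex; d: spacing in wavelengths."""
        n_sensors = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        eigvals, eigvecs = np.linalg.eigh(R)             # ascending eigenvalues
        En = eigvecs[:, :n_sensors - n_sources]          # noise subspace
        k = np.arange(n_sensors)
        P = [1.0 / np.linalg.norm(En.conj().T
                                  @ np.exp(2j * np.pi * d * k * np.sin(th))) ** 2
             for th in scan_angles]
        return np.asarray(P)

    rng = np.random.default_rng(5)
    true_angles = np.deg2rad([-10.0, 12.0])
    sensors = np.arange(8)[:, None]
    A = np.exp(2j * np.pi * 0.5 * sensors * np.sin(true_angles)[None, :])
    S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
    X = A @ S + 0.1 * (rng.standard_normal((8, 200))
                       + 1j * rng.standard_normal((8, 200)))
    scan = np.deg2rad(np.linspace(-30, 30, 601))
    P = music_spectrum(X, n_sources=2, scan_angles=scan)
    peaks, _ = find_peaks(P)
    print(np.rad2deg(scan[peaks[np.argsort(P[peaks])[-2:]]]))  # near -10 and 12 deg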
Practical and Efficient Searching in Proteomics: A Cross Engine Comparison
Paulo, Joao A.
2014-01-01
Background: Analysis of large datasets produced by mass spectrometry-based proteomics relies on database search algorithms to sequence peptides and identify proteins. Several such scoring methods are available, each based on different statistical foundations and thereby not producing identical results. Here, the aim is to compare peptide and protein identifications using multiple search engines and to examine the additional proteins gained by increasing the number of technical replicate analyses. Methods: A HeLa whole cell lysate was analyzed on an Orbitrap mass spectrometer for 10 technical replicates. The data were combined and searched using Mascot, SEQUEST, and Andromeda. Comparisons were made of peptide and protein identifications among the search engines. In addition, searches using each engine were performed with an incrementing number of technical replicates. Results: The number and identity of peptides and proteins differed across search engines. For all three search engines, the differences in protein identifications were greater than the differences in peptide identifications, indicating that the major source of the disparity may be at the protein inference grouping level. The data also revealed that analysis of 2 technical replicates can increase protein identifications by up to 10-15%, while a third replicate results in an additional 4-5%. Conclusions: The data emphasize two practical methods of increasing the robustness of mass spectrometry data analysis: 1) using multiple search engines can expand the number of identified proteins (union) and validate protein identifications (intersection), and 2) analysis of 2 or 3 technical replicates can substantially expand protein identifications. Moreover, information can be extracted from a dataset by performing database searches with different engines and performing technical repeats, which requires no additional sample preparation and effectively utilizes research time and effort. PMID:25346847
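The union/intersection strategy from the conclusions is a one-liner on identifier sets; a minimal sketch with hypothetical accession sets.

    mascot    = {"P04637", "P38398", "Q9Y6K9", "P60709"}
    sequest   = {"P04637", "P38398", "P60709", "O15111"}
    andromeda = {"P04637", "P60709", "O15111", "Q9Y6K9"}

    engines = [mascot, sequest, andromeda]
    union = set.union(*engines)              # expanded list of identified proteins
    consensus = set.intersection(*engines)   # cross-validated identifications
    print(len(union), "in union;", sorted(consensus), "seen by all engines")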
Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.
2011-01-01
Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
Network meta-analysis: a technique to gather evidence from direct and indirect comparisons
2017-01-01
Systematic reviews and pairwise meta-analyses of randomized controlled trials, at the intersection of clinical medicine, epidemiology and statistics, are positioned at the top of the evidence-based practice hierarchy. These are important tools for drug approval, for the formulation of clinical protocols and guidelines, and for decision-making. However, this traditional technique yields only part of the information that clinicians, patients and policy-makers need to make informed decisions, since it usually compares only two interventions at a time. For most clinical conditions, many interventions are available on the market, and few of them have been studied in head-to-head trials. This scenario precludes drawing conclusions from comparisons of the full profile (e.g. efficacy and safety) of all interventions. The recent development and introduction of a new technique, usually referred to as network meta-analysis, indirect meta-analysis, or multiple or mixed treatment comparisons, has allowed the estimation of metrics for all possible comparisons in the same model, simultaneously gathering direct and indirect evidence. Over the last years this statistical tool has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both Frequentist and Bayesian frameworks, with different software packages. However, the conduct, reporting and interpretation of network meta-analysis still pose multiple challenges that should be carefully considered, especially because this technique inherits all assumptions from pairwise meta-analysis but with increased complexity. Thus, we aim to provide a basic explanation of how a network meta-analysis is conducted, highlighting its risks and benefits for evidence-based practice, including information on the evolution of the statistical methods, their assumptions and the steps for performing the analysis. PMID:28503228
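The simplest building block of mixed treatment comparisons is the adjusted indirect comparison (Bucher method): given effects of A versus B and C versus B, the indirect A-versus-C effect is their difference, with variances adding. A minimal sketch on a log odds ratio scale with hypothetical numbers.

    import math

    def indirect_comparison(d_ab, se_ab, d_cb, se_cb):
        """Bucher adjusted indirect comparison of A vs C via common comparator B."""
        d_ac = d_ab - d_cb
        se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
        ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)
        return d_ac, se_ac, ci

    # Log odds ratios from two pairwise meta-analyses sharing comparator B:
    print(indirect_comparison(d_ab=-0.40, se_ab=0.12, d_cb=-0.15, se_cb=0.10))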
Voillet, Valentin; Besse, Philippe; Liaubet, Laurence; San Cristobal, Magali; González, Ignacio
2016-10-03
In omics data integration studies, it is common, for a variety of reasons, for some individuals to not be present in all data tables. Missing row values are challenging to deal with because most statistical methods cannot be directly applied to incomplete datasets. To overcome this issue, we propose a multiple imputation (MI) approach in a multivariate framework. In this study, we focus on multiple factor analysis (MFA) as a tool to compare and integrate multiple layers of information. MI involves filling the missing rows with plausible values, resulting in M completed datasets. MFA is then applied to each completed dataset to produce M different configurations (the matrices of coordinates of individuals). Finally, the M configurations are combined to yield a single consensus solution. We assessed the performance of our method, named MI-MFA, on two real omics datasets. Incomplete artificial datasets with different patterns of missingness were created from these data. The MI-MFA results were compared with those of two other approaches, i.e., regularized iterative MFA (RI-MFA) and mean variable imputation (MVI-MFA). For each configuration resulting from these three strategies, the suitability of the solution was determined against the true MFA configuration obtained from the original data, and a comprehensive graphical comparison showing how the MI-, RI- or MVI-MFA configurations diverge from the true configuration was produced. Two approaches, i.e., confidence ellipses and convex hulls, for visualizing and assessing the uncertainty due to missing values were also described. We showed how the areas of the ellipses and convex hulls increased with the number of missing individuals. Free and easy-to-use code implementing the MI-MFA method in the R statistical environment is provided. We believe that MI-MFA provides a useful and attractive method for estimating the coordinates of individuals on the first MFA components despite missing rows. MI-MFA configurations were close to the true configuration even when many individuals were missing in several data tables. The method takes into account the uncertainty of MI-MFA configurations induced by the missing rows, thereby allowing the reliability of the results to be evaluated.
Mach-Zehnder interferometer implementation for thermo-optical and Kerr effect study
NASA Astrophysics Data System (ADS)
Bundulis, Arturs; Nitiss, Edgars; Busenbergs, Janis; Rutkis, Martins
2018-04-01
In this paper, we propose the Mach-Zehnder interferometric method for third-order nonlinear optical and thermo-optical studies. Both effects manifest themselves as a dependence of the refractive index on the incident light intensity and are widely exploited in opto-optical and thermo-optical applications. With the implemented method, we have measured the Kerr and thermo-optical coefficients of chloroform under CW, ns and ps laser irradiance. The use of lasers with different wavelengths, pulse durations and energies allowed us to distinguish the processes responsible for refractive index changes in the investigated solution. The presented setup was also used to demonstrate opto-optical switching. Results from the Mach-Zehnder experiment were compared to Z-scan data obtained in our previous studies; on this basis, the two methods were compared and the advantages and disadvantages of each were analyzed.
Comparison of CAM-Chem with Trace Gas Measurements from Airborne Field Campaigns from 2009-2016.
NASA Astrophysics Data System (ADS)
Schauffler, S.; Atlas, E. L.; Kinnison, D. E.; Lamarque, J. F.; Saiz-Lopez, A.; Navarro, M. A.; Donets, V.; Blake, D. R.; Blake, N. J.
2016-12-01
Trace gas measurements collected during seven field campaigns, two with multiple deployments, will be compared with the NCAR CAM-Chem model to evaluate the model's performance over multiple years. The campaigns include HIPPO (2009-2011), pole-to-pole observations in the Pacific on the NSF/NCAR GV over multiple seasons; SEAC4RS (Aug./Sept. 2013) in the central and southern U.S. and western Gulf of Mexico on the NASA ER-2 and DC8; ATTREX (2011-2015) on the NASA Global Hawk over multiple seasons and locations; CONTRAST (Jan./Feb. 2014) in the western Pacific on the NSF/NCAR GV; VIRGAS (Oct. 2015) in the south central US and western Gulf of Mexico on the NASA WB-57; ORCAS (Jan./Feb. 2016) over the Southern Ocean on the NSF/NCAR GV; and POSIDON (Oct. 2016) in the western Pacific on the NASA WB-57. We will focus on comparisons with the model along the flight tracks and will also examine comparisons of vertical distributions and various tracer-tracer correlations.
Hamel, J F; Sebille, V; Le Neel, T; Kubis, G; Boyer, F C; Hardouin, J B
2017-12-01
Subjective health measurements using Patient Reported Outcomes (PRO) are increasingly used in randomized trials, particularly for comparisons between patient groups. Two main types of analytical strategies can be used for such data: Classical Test Theory (CTT) and Item Response Theory (IRT) models. These two strategies display very similar characteristics when data are complete, but in the common case of missing data, whether IRT or CTT is the more appropriate remains unknown; this was investigated using simulations. We simulated PRO data such as quality of life data. Missing responses to items were simulated as being completely random, as depending on an observable covariate, or as depending on an unobserved latent trait. The CTT-based methods considered allowed comparing scores using complete-case analysis, personal mean imputation, or multiple imputation based on a two-way procedure. The IRT-based method was the Wald test on a Rasch model including a group covariate. The IRT-based method and the multiple-imputation-based CTT method displayed the highest observed power and were the only unbiased methods whatever the kind of missing data. Online software and Stata® modules compatible with the native mi impute suite are provided for performing such analyses. Traditional procedures (listwise deletion and personal mean imputation) should be avoided, due to inevitable problems of bias and lack of power.
NASA Astrophysics Data System (ADS)
Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said
2014-09-01
In the presence of multicollinearity and multiple outliers, statistical inference for the linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which are reported to be less sensitive to the presence of outliers, and ridge regression, which tackles the multicollinearity problem. In order to mitigate both problems, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach is examined when multicollinearity and multiple outliers occur simultaneously in multiple linear regression. This study looks at the performance of several well-known estimators: M, MM, RIDGE, and the robust ridge regression estimators Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM), and Ridge MM (RMM). Results of the study show that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), RMM and RIDGE perform similarly and outperform the other estimators, regardless of the number of observations, the level of collinearity, and the percentage of outliers used. However, when outliers occur in only a single direction (the y-direction), the WRMM estimator is the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
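To illustrate the estimator families compared, the sketch below fits OLS, ridge, and a Huber M-estimator to data with collinear predictors and y-direction outliers. scikit-learn's stock estimators stand in for the study's WRM, WRMM, and RMM variants; note that HuberRegressor with a nonzero alpha already combines an M-type loss with ridge-style shrinkage.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge, HuberRegressor

    rng = np.random.default_rng(6)
    n = 100
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.05, size=n)       # near-collinear predictor
    X = np.column_stack([x1, x2])
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)
    y[:5] += 15.0                                  # y-direction outliers

    for name, model in [("OLS", LinearRegression()),
                        ("Ridge", Ridge(alpha=1.0)),
                        ("Huber", HuberRegressor(alpha=1.0))]:
        model.fit(X, y)
        print(name, np.round(model.coef_, 2))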
Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi
2007-10-01
Traditionally, the multiple linear regression technique has been one of the most widely used models for simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression fails to produce an appropriate predictive model. Recently, neuro-fuzzy systems have gained popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, to quantify the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, the multiple linear regression analysis used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flows, while low and medium flow magnitudes were estimated closer to the observed data. The comparison of prediction accuracy indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics, improving the root mean square error (RMSE) and mean absolute percentage error (MAPE) of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.
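The reported skill scores are straightforward to compute; a minimal helper with placeholder flow values shows RMSE, MAPE, and the percentage improvement of one forecast over another:

```python
# RMSE, MAPE, and relative improvement between two forecasts; flows invented.
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

obs = np.array([120.0, 85.0, 240.0, 60.0, 310.0])     # observed flows (m3/s)
sim_mlr = np.array([100.0, 95.0, 200.0, 70.0, 260.0])  # regression forecast
sim_nf = np.array([115.0, 88.0, 225.0, 63.0, 285.0])   # neuro-fuzzy forecast

gain = 100.0 * (rmse(obs, sim_mlr) - rmse(obs, sim_nf)) / rmse(obs, sim_mlr)
print(rmse(obs, sim_nf), mape(obs, sim_nf), gain)
```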
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.
2017-12-01
National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimal start/end of a local "phenological year" empirically customized to the vegetation growing at each location. Interannual differences in the timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in the timing of the remaining milestones. Going beyond a simple linear translation, time can also be "rubber-sheeted": compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be rubber-sheeted to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to the rubber sheeting used to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help quantify vegetation impacts from frost, drought, wildfire, insects, and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
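A toy version of the rubber-sheeting idea, assuming invented control-point dates and synthetic NDVI curves, is sketched below; the piecewise-linear warp and the resulting warping signature are the essential pieces:

```python
# "Rubber sheet" one phenology curve onto a reference via a piecewise-linear
# time warp through matched control points (e.g. green-up, peak, senescence).
import numpy as np

days = np.arange(1, 366)
reference = np.exp(-0.5 * ((days - 180) / 40.0) ** 2)   # reference phenology
target = np.exp(-0.5 * ((days - 195) / 55.0) ** 2)      # later, flatter year

# Matched phenological milestones (day of year), target vs reference.
ctrl_target = np.array([1.0, 150.0, 195.0, 250.0, 365.0])
ctrl_reference = np.array([1.0, 140.0, 180.0, 230.0, 365.0])

# Map each target day onto the reference timeline, then resample the curve.
warped_days = np.interp(days, ctrl_target, ctrl_reference)
target_warped = np.interp(days, warped_days, target)

# Warping signature: per-day time shift between target and reference timelines.
signature = days - warped_days
print(signature[179], target_warped[179])
```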
Extrastriatal dopamine D2-receptor availability in social anxiety disorder.
Plavén-Sigray, Pontus; Hedman, Erik; Victorsson, Pauliina; Matheson, Granville J; Forsberg, Anton; Djurfeldt, Diana R; Rück, Christian; Halldin, Christer; Lindefors, Nils; Cervenka, Simon
2017-05-01
Alterations in the dopamine system are hypothesized to influence the expression of social anxiety disorder (SAD) symptoms. However, molecular imaging studies comparing dopamine function between patients and control subjects have yielded conflicting results. Importantly, while all previous investigations focused on the striatum, findings from activation and blood flow studies indicate that prefrontal and limbic brain regions have a central role in the pathophysiology. The objective of this study was to investigate extrastriatal dopamine D2-receptor (D2-R) availability in SAD. We examined 12 SAD patients and 16 healthy controls using positron emission tomography and the high-affinity D2-R radioligand [11C]FLB457. Parametric images of D2-R binding potential were derived using the Logan graphical method with the cerebellum as reference region. Two-tailed one-way independent ANCOVAs, with age as covariate, were used to examine differences in D2-R availability between groups using both region-based and voxel-wise analyses. The region-based analysis showed a medium effect size of higher D2-R levels in the orbitofrontal cortex (OFC) in patients, although this result did not remain significant after correction for multiple comparisons. The voxel-wise comparison revealed elevated D2-R availability in patients within the OFC and right dorsolateral prefrontal cortex after correction for multiple comparisons. These preliminary results suggest that an aberrant extrastriatal dopamine system may be part of the disease mechanism in SAD.
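The region-based test is a standard one-way ANCOVA; a minimal sketch with simulated binding-potential values and illustrative column names (not the study's actual data):

```python
# ANCOVA of D2-R binding potential on group with age as covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "bp_ofc": np.concatenate([rng.normal(0.45, 0.10, 12),    # patients
                              rng.normal(0.38, 0.10, 16)]),  # controls
    "group": ["SAD"] * 12 + ["HC"] * 16,
    "age": rng.uniform(20, 45, 28),
})

model = smf.ols("bp_ofc ~ C(group) + age", data=df).fit()
print(model.summary().tables[1])   # group term tests the D2-R difference
```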
Improvements in simulation of multiple scattering effects in ATLAS fast simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basalaev, A. E., E-mail: artem.basalaev@cern.ch
The Fast ATLAS Tracking Simulation (Fatras) package was verified on a single-layer geometry with respect to full simulation with GEANT4. Fatras hadronic interactions and multiple scattering simulation were studied in comparison with GEANT4, and disagreement was found in the multiple scattering distributions of primary charged particles (μ, π, e). A new model for multiple scattering simulation, based on R. Frühwirth's mixture models, was implemented in Fatras. The new model was tested on the single-layer geometry and good agreement with GEANT4 was achieved. A comparison of reconstructed track parameters was also performed for the Inner Detector geometry, and Fatras with the new multiple scattering model proved to have better agreement with GEANT4. The new multiple scattering model was added to the Fatras package in the development release of the ATLAS software, ATHENA.
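A toy illustration of the mixture-model idea: sample scattering angles from a two-component Gaussian mixture (narrow core plus wide tail) around a Highland-formula core width. The mixture weight and tail scale are illustrative numbers, not Fatras parameters:

```python
# Gaussian-mixture multiple scattering toy model: core + tail components.
import numpy as np

def highland_theta0(p_mev, beta, x_over_x0, charge=1.0):
    # Highland formula for the core scattering angle (radians).
    return (13.6 / (beta * p_mev)) * charge * np.sqrt(x_over_x0) * (
        1.0 + 0.038 * np.log(x_over_x0))

def sample_scattering(n, p_mev=1000.0, beta=1.0, x_over_x0=0.02,
                      tail_fraction=0.1, tail_scale=3.0, rng=None):
    rng = rng or np.random.default_rng()
    theta0 = highland_theta0(p_mev, beta, x_over_x0)
    is_tail = rng.random(n) < tail_fraction          # tail component draws
    sigma = np.where(is_tail, tail_scale * theta0, theta0)
    return rng.normal(0.0, sigma)

angles = sample_scattering(100000)
print(np.std(angles))
```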
An adaptive two-stage dose-response design method for establishing proof of concept.
Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R
2013-01-01
We propose an adaptive two-stage dose-response design in which a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach to a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship, or proof of concept (PoC), via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power for our design method compared with conventional, fixed designs.
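One standard way to combine stage-wise PoC tests, closely related to the conditional error function approach, is a weighted inverse-normal combination; a minimal sketch with illustrative equal weights:

```python
# Weighted inverse-normal combination of two stage-wise one-sided p-values.
import numpy as np
from scipy.stats import norm

def combine_stages(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    # Weights must satisfy w1**2 + w2**2 = 1 so z is N(0,1) under the null.
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)                        # combined one-sided p-value

print(combine_stages(0.04, 0.08))            # "global" PoC p-value
```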
Comparison of three optical methods to study erythrocyte aggregation.
Zhao, H; Wang, X; Stoltz, J F
1999-01-01
The aim of this work was to evaluate three optical methods designed to determine erythrocyte aggregation: the Erythroaggregometer (EA; Regulest, France), the Laser-assisted Optical Rotational Cell Analyzer (LORCA; Mechatronics, Netherlands), and the Fully Automatic Erythrocyte Aggregometer (FAEA; Myrenne GmbH, Germany). Blood samples were taken from fifty donors (26 males and 24 females). The aggregation of normal red blood cells (RBCs) and of RBCs suspended in three normo- and hyperaggregating media was studied. The results revealed some significant correlations between parameters measured by these instruments, in particular between the aggregation indexes of the EA and the LORCA. Further, RBC aggregation in multiple myeloma patients was also studied, and a state of erythrocyte hyperaggregation was detected by both the EA and the LORCA.
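The agreement analysis reduces to pairwise correlations of aggregation indexes across devices; a small sketch with invented index values:

```python
# Pairwise Pearson correlations between instrument aggregation indexes.
import numpy as np
from scipy.stats import pearsonr

ea = np.array([42.0, 55.0, 38.0, 61.0, 47.0])     # EA aggregation index
lorca = np.array([40.0, 58.0, 35.0, 63.0, 45.0])  # LORCA index
faea = np.array([12.0, 18.0, 10.0, 20.0, 14.0])   # FAEA index (other scale)

for name, other in [("LORCA", lorca), ("FAEA", faea)]:
    r, p = pearsonr(ea, other)
    print(name, round(r, 3), round(p, 3))
```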
Comparison of methods for developing the dynamics of rigid-body systems
NASA Technical Reports Server (NTRS)
Ju, M. S.; Mansour, J. M.
1989-01-01
Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.
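Computer algebra makes this kind of operation counting direct; a sketch using SymPy's count_ops on the Lagrangian equation of motion of a simple pendulum, standing in for the three-degree-of-freedom PUMA derivations:

```python
# Derive an equation of motion symbolically and count its operations.
import sympy as sp

t = sp.symbols("t")
m, g, l = sp.symbols("m g l", positive=True)
theta = sp.Function("theta")(t)

# Lagrange's method: L = T - V for a simple pendulum.
T = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t) ** 2
V = -m * g * l * sp.cos(theta)
L = T - V
eom = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta)
eom = sp.simplify(eom)

print(eom)                 # m*l**2*theta'' + m*g*l*sin(theta), possibly factored
print(sp.count_ops(eom))   # operation count for this formulation
```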
2010-01-01
Background: The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment could not be directly applied to whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has successfully been applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results: In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions: The present method provides a new way of recovering the phylogeny of large dsDNA viruses and parvoviruses, and also some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used for recovering the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses, which have a much smaller genome size. PMID: 20565983
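As a generic illustration of alignment-free comparison (not the DL method itself), the sketch below builds k-mer frequency vectors for two synthetic sequences and computes a correlation-based distance:

```python
# Alignment-free distance via k-mer composition vectors; sequences invented.
from itertools import product
import numpy as np

def kmer_vector(seq, k=3, alphabet="ACGT"):
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:
            v[index[kmer]] += 1
    return v / max(v.sum(), 1.0)     # normalized k-mer frequencies

def corr_distance(a, b):
    r = np.corrcoef(a, b)[0, 1]
    return (1.0 - r) / 2.0           # 0 = identical composition

s1 = "ATGGCGTACGTTAGCATCGATCGATCGGCTA" * 10
s2 = "ATGGCGTTCGTTAGCATCGATGGATCGGCTA" * 10
print(corr_distance(kmer_vector(s1), kmer_vector(s2)))
```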
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., which approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., are there sufficient data to force and evaluate models). Consequently, the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but they have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components, because there are often too many structural and implementation differences among the models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template and include a wide variety of process parameterizations and spatial configurations used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions made during model development and to evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented from applying multiple-hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
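The core mechanics of such a framework can be illustrated by a registry of interchangeable process parameterizations that is enumerated combinatorially; all functions below are schematic stand-ins, not actual snow-model physics:

```python
# Each model decision is one entry in a registry of interchangeable
# parameterizations, so decisions can be swapped and evaluated in isolation.
from itertools import product
import math

def albedo_linear_decay(age_days):
    return max(0.85 - 0.01 * age_days, 0.5)

def albedo_exponential_decay(age_days):
    return 0.5 + 0.35 * math.exp(-age_days / 10.0)

def melt_degree_day(temp_c, factor=3.0):
    return max(factor * temp_c, 0.0)

def melt_energy_balance(temp_c, net_radiation=100.0):
    return max(0.01 * net_radiation + 2.0 * temp_c, 0.0)

OPTIONS = {
    "albedo": [albedo_linear_decay, albedo_exponential_decay],
    "melt": [melt_degree_day, melt_energy_balance],
}

# Enumerate every combination of decisions: each tuple is one "model".
for albedo_fn, melt_fn in product(*OPTIONS.values()):
    swe_loss = melt_fn(2.0) * (1.0 - albedo_fn(5.0))
    print(albedo_fn.__name__, melt_fn.__name__, round(swe_loss, 3))
```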
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image matching search strategies, feature types in image matching, and optimal window size in image matching. For comparison, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. To create a stereo model from them, interior orientation, single-photograph rectification, and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare-window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used to check the results of image matching. Comparison between these four types of ground features shows that the methods developed here improve the speed and precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing; the results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store information about the stereo model and image matching. A comparison is also made between cross-correlation image matching (CCIM), least-difference image matching (LDIM), and least-squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm developed in this research is used instead of the whole-range searching algorithm.
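The CCIM step can be summarized in a few lines: slide the template over the search area and keep the offset with the highest normalized cross-correlation. A self-contained sketch on synthetic patches:

```python
# Cross-correlation image matching: exhaustive search for the best offset.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(3)
search = rng.random((64, 64))
true_row, true_col = 20, 33
template = search[true_row:true_row + 11, true_col:true_col + 11].copy()

best, best_pos = -1.0, None
h, w = template.shape
for r in range(search.shape[0] - h):
    for c in range(search.shape[1] - w):
        score = ncc(template, search[r:r + h, c:c + w])
        if score > best:
            best, best_pos = score, (r, c)
print(best_pos, round(best, 3))   # expected: (20, 33)
```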
Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C
2013-12-21
As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method for a given application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies, and (3) HS(r): DE genes with non-zero effect sizes in a majority of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability, and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and the data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into the three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation on real data, together with results from the MDS and entropy analyses, provided an insightful and practical guideline for choosing the most suitable method in a given application. All source files for the simulation and real data are available on the author's publication website.
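As a one-gene illustration of the hypothesis settings, the sketch below contrasts Fisher's method (sensitive to an effect in at least one study, an HS(B)-type rule) with the maximum-p statistic (requires an effect in all studies, an HS(A)-type rule):

```python
# Two classical p-value combination rules mapped to HS(B) and HS(A) settings.
import numpy as np
from scipy.stats import combine_pvalues, beta

p_studies = np.array([0.01, 0.20, 0.03])   # one gene's p-values in 3 studies

stat, p_fisher = combine_pvalues(p_studies, method="fisher")
# maxP: under the null, the max of k uniform p-values follows Beta(k, 1).
p_maxp = beta.cdf(p_studies.max(), len(p_studies), 1)
print(round(p_fisher, 4), round(p_maxp, 4))
```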
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism, either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman, or SciDB. Since storing an extra copy of the data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison of cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets, which typically store data with one or just a few time steps per file. This presentation will give side-by-side cost and performance results for several common Earth observation analysis operations.
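The data reorganization issue can be made concrete with a small rechunking example, here using dask purely as an illustrative stand-in for the cloud stores mentioned above: going from one-time-step-per-chunk layout (mirroring one-file-per-time-step archives) to full-series-per-spatial-tile chunks makes long-time-series analyses embarrassingly parallel:

```python
# Rechunk a (time, y, x) array so each worker holds a full time series
# for a block of pixels; array contents are synthetic.
import dask.array as da
import numpy as np

data = da.from_array(np.random.rand(365, 180, 360),
                     chunks=(1, 180, 360))        # one chunk per time step
rechunked = data.rechunk((365, 30, 30))           # full series per spatial tile

# A per-pixel statistic over the whole series now touches only one chunk.
trend = rechunked.mean(axis=0)
print(trend.shape)
```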