A Simple Illustration for the Need of Multiple Comparison Procedures
ERIC Educational Resources Information Center
Carter, Rickey E.
2010-01-01
Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
Why We (Usually) Don't Have to Worry about Multiple Comparisons
ERIC Educational Resources Information Center
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao
2012-01-01
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Multiple alignment-free sequence comparison
Ren, Jie; Song, Kai; Sun, Fengzhu; Deng, Minghua; Reinert, Gesine
2013-01-01
Motivation: Recently, a range of new statistics have become available for the alignment-free comparison of two sequences based on k-tuple word content. Here, we extend these statistics to the simultaneous comparison of more than two sequences. Our suite of statistics contains, first, extensions of statistics for pairwise comparison to the joint k-tuple content of all the sequences and, second, averages of sums of pairwise comparison statistics. The two tasks we consider are, first, to identify sequences that are similar to a set of target sequences and, second, to measure the similarity within a set of sequences. Results: Our investigation uses both simulated data and cis-regulatory module data, where the task is to identify cis-regulatory modules with similar transcription factor binding sites. We find that although all of our statistics show a similar performance on real data, on simulated data the Shepp-type statistics are in some instances outperformed by star-type statistics. The multiple alignment-free statistics are more sensitive to contamination in the data than the pairwise average statistics. Availability: Our implementation of the five statistics is available as an R package named ‘multiAlignFree’ at http://www-rcf.usc.edu/∼fsun/Programs/multiAlignFree/multiAlignFreemain.html. Contact: reinert@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23990418
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James
2011-01-01
In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity matrix. The relationship between the correlation matrix R of the original dependent variables and the correlation matrix of the associated t-statistics is also studied. In general, the latter depends not only on R, but also on the sample size and the signed effect sizes for the multiple comparisons.
Watanabe, Hiroshi
2012-01-01
Procedures of statistical analysis are reviewed to provide an overview of applications of statistics for general use. Topics dealt with are inference on a population, comparison of two populations with respect to means and probabilities, and multiple comparisons. This study is the second part of a series in which we survey medical statistics. Arguments related to statistical associations and regressions will be made in subsequent papers.
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment needed to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons with all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
Robust Lee local statistic filter for removal of mixed multiplicative and impulse noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Astola, Jaakko T.
2004-05-01
A robust version of the Lee local statistic filter that can effectively suppress mixed multiplicative and impulse noise in images is proposed. The performance of the proposed modification is studied for a set of test images, several values of multiplicative noise variance, Gaussian and Rayleigh probability density functions of speckle, and different characteristics of impulse noise. The advantages of the designed filter in comparison to the conventional Lee local statistic filter and some other filters able to cope with mixed multiplicative+impulse noise are demonstrated.
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
Multiple comparison analysis testing in ANOVA.
McHugh, Mary L
2011-01-01
The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. The class of post hoc tests that provides this type of detailed information for ANOVA results is called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett tests. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type I errors due to alpha inflation.
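The post hoc logic described in this abstract can be illustrated with a small sketch. The snippet below (illustrative data and names, not taken from the article) runs an omnibus one-way ANOVA and then pairwise t-tests with a Bonferroni adjustment, the simplest of the corrections named here:

```python
from itertools import combinations
from scipy import stats

# Illustrative data: one control group and two hypothetical treatment groups.
groups = {
    "control": [4.1, 3.9, 4.3, 4.0, 4.2],
    "treat_a": [5.0, 5.4, 4.8, 5.2, 5.1],
    "treat_b": [4.4, 4.6, 4.2, 4.5, 4.3],
}

# Omnibus one-way ANOVA across all groups.
f_stat, anova_p = stats.f_oneway(*groups.values())

# Post hoc pairwise t-tests with a Bonferroni adjustment:
# multiply each raw p-value by the number of comparisons (capped at 1).
pairs = list(combinations(groups, 2))
m = len(pairs)
adjusted = {}
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    adjusted[(a, b)] = min(1.0, p * m)

for pair, p_adj in adjusted.items():
    print(pair, round(p_adj, 4))
```

Tukey's HSD or Dunnett's test would replace the adjustment step with a comparison-specific reference distribution; Bonferroni is shown only because it is the easiest to state.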
ERIC Educational Resources Information Center
Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.
2012-01-01
Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…
Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
Published GMO studies find no evidence of harm when corrected for multiple comparisons.
Panchin, Alexander Y; Tuzhikov, Alexander I
2017-03-01
A number of widely debated research articles claiming possible technology-related health concerns have influenced public opinion on genetically modified food safety. We performed a statistical reanalysis and review of experimental data presented in some of these studies and found that, quite often in contradiction with the authors' conclusions, the data actually provide weak evidence of harm that cannot be differentiated from chance. In our opinion, the problem of statistically unaccounted-for multiple comparisons has led to some of the most cited anti-genetically modified organism health claims in history. We hope this analysis puts the original results of these studies into proper context.
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
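The decision rule this abstract describes (reject when F exceeds the critical F at the appropriate degrees of freedom) can be sketched outside a spreadsheet as well; the data below are hypothetical:

```python
from scipy import stats

# Hypothetical samples whose means are to be compared (three groups).
groups = [
    [23.1, 22.8, 23.5, 23.0, 22.9],
    [24.0, 24.4, 23.8, 24.1, 24.2],
    [22.5, 22.9, 22.4, 22.7, 22.6],
]

k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total observations

f_stat, p_value = stats.f_oneway(*groups)

# Critical F at alpha = 0.05 with (k-1, n-k) degrees of freedom.
alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, k - 1, n - k)

# Convention: reject H0 when F exceeds the critical value.
reject = bool(f_stat > f_crit)
print(f_stat, f_crit, reject)
```

Comparing F to the critical value is equivalent to comparing the p-value to alpha; the sketch shows both so the equivalence can be checked.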
Evaluating Neurotoxicity of a Mixture of Five OP Pesticides Using a Composite Score
The evaluation of the cumulative effects of neurotoxic pesticides often involves the analysis of both neurochemical and behavioral endpoints. Multiple statistical tests on many endpoints can greatly inflate Type I error rates. Multiple comparison adjustments are often overly con...
A novel statistical method for quantitative comparison of multiple ChIP-seq datasets.
Chen, Li; Wang, Chi; Qin, Zhaohui S; Wu, Hao
2015-06-15
ChIP-seq is a powerful technology for measuring protein binding or histone modification strength on a whole-genome scale. Although a number of methods are available for the analysis of a single ChIP-seq dataset (e.g. 'peak detection'), rigorous statistical methods for the quantitative comparison of multiple ChIP-seq datasets that account for data from control experiments, signal-to-noise ratios, biological variation and multiple-factor experimental designs are under-developed. In this work, we develop a statistical method to perform quantitative comparison of multiple ChIP-seq datasets and detect genomic regions showing differential protein binding or histone modification. We first detect peaks from all datasets and then take their union to form a single set of candidate regions. The read counts from the IP experiment at the candidate regions are assumed to follow a Poisson distribution. The underlying Poisson rates are modeled as an experiment-specific function of artifacts and biological signals. We then obtain the estimated biological signals and compare them through a hypothesis testing procedure in a linear model framework. Simulations and real data analyses demonstrate that the proposed method provides more accurate and robust results compared with existing ones. An R software package, ChIPComp, is freely available at http://web1.sph.emory.edu/users/hwu30/software/ChIPComp.html. © The Author 2015. Published by Oxford University Press. All rights reserved.
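As a toy illustration of the count-comparison idea (not ChIPComp's actual linear-model framework), a score test for equality of two Poisson counts with equal exposure can be written in a few lines; the counts below are hypothetical:

```python
import math

from scipy import stats

def poisson_rate_test(k1, k2):
    """Score test for equal rates of two Poisson counts with equal exposure.

    Under H0 (equal rates), k1 - k2 has mean 0 and estimated variance
    k1 + k2, so z = (k1 - k2) / sqrt(k1 + k2) is approximately N(0, 1).
    """
    z = (k1 - k2) / math.sqrt(k1 + k2)
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Hypothetical read counts at one candidate region in two conditions.
z, p = poisson_rate_test(180, 120)
print(z, p)
```

A full differential-binding analysis would model many regions jointly, with covariates for artifacts; this sketch only shows the Poisson comparison at a single region.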
ERIC Educational Resources Information Center
Everson, Howard T.; And Others
This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANN and AIM methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…
In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...
A comparison of forest canopy transmittance estimators
E.R. Smith; Kurt H. Riitters
1994-01-01
Multiple sensors, and alternate statistical estimators, were tested for measuring canopy transmittance in four stands under a variety of sky conditions. On a given day, stand average transmittance estimates were insensitive to degree of synchronization of the sensors used to measure under-canopy and incoming radiation. In comparisons to periodic measurement of incoming...
A Comparison of Latent Growth Models for Constructs Measured by Multiple Items
ERIC Educational Resources Information Center
Leite, Walter L.
2007-01-01
Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared…
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, carried out at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
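Of the three methods compared, the bootstrap is the easiest to sketch generically. The snippet below (hypothetical cycle counts, a stand-in for real NMC cycle data, and not the mass estimator itself) resamples the data and takes the spread of the resampled statistic as its uncertainty:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical per-cycle neutron counts (stand-in for NMC cycle data).
cycles = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 17, 12, 14, 15, 13]

def bootstrap_se(data, stat=statistics.mean, n_boot=2000):
    """Standard error of `stat` via nonparametric bootstrap resampling."""
    replicates = [
        stat(random.choices(data, k=len(data))) for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

se_boot = bootstrap_se(cycles)
# Analytic standard error of the mean, for comparison.
se_analytic = statistics.stdev(cycles) / math.sqrt(len(cycles))
print(se_boot, se_analytic)
```

For the mean, the bootstrap and analytic standard errors agree closely; the advantage of the bootstrap is that `stat` can be replaced by any complicated estimator (such as a mass derived from factorial moments) for which no closed-form variance is available.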
SPICE: exploration and analysis of post-cytometric complex multivariate datasets.
Roederer, Mario; Nozzi, Joshua L; Nason, Martha C
2011-02-01
Polychromatic flow cytometry results in complex, multivariate datasets. To date, tools for the aggregate analysis of these datasets across multiple specimens grouped by different categorical variables, such as demographic information, have not been optimized. Often, the exploration of such datasets is accomplished by visualization of patterns with pie charts or bar charts, without easy access to statistical comparisons of measurements that comprise multiple components. Here we report on algorithms and a graphical interface we developed for these purposes. In particular, we discuss thresholding necessary for accurate representation of data in pie charts, the implications for display and comparison of normalized versus unnormalized data, and the effects of averaging when samples with significant background noise are present. Finally, we define a statistic for the nonparametric comparison of complex distributions to test for difference between groups of samples based on multi-component measurements. While originally developed to support the analysis of T cell functional profiles, these techniques are amenable to a broad range of datatypes. Published 2011 Wiley-Liss, Inc.
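The nonparametric comparison of groups of samples mentioned above is commonly built on permutation of group labels. A minimal sketch with hypothetical per-subject response frequencies (not SPICE's actual multi-component statistic):

```python
import random
import statistics

random.seed(42)

def permutation_test(x, y, n_perm=5000):
    """Two-sided permutation test for a difference in group means."""
    observed = abs(statistics.mean(x) - statistics.mean(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(x)]) -
                   statistics.mean(pooled[len(x):]))
        if diff >= observed:
            count += 1
    # Add-one correction keeps the p-value away from exactly zero.
    return (count + 1) / (n_perm + 1)

# Hypothetical per-subject response frequencies in two donor groups.
group_a = [0.61, 0.55, 0.68, 0.59, 0.63, 0.57]
group_b = [0.42, 0.47, 0.39, 0.45, 0.44, 0.41]
p_val = permutation_test(group_a, group_b)
print(p_val)
```

The same scheme generalizes to vector-valued measurements by replacing the difference in means with any distance between group summaries.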
ERIC Educational Resources Information Center
Barrows, Russell D.
2007-01-01
A one-way ANOVA experiment is performed to determine whether or not the three standardization methods are statistically different in determining the concentration of the three paraffin analytes. The laboratory exercise asks students to combine the three methods in a single analytical procedure of their own design to determine the concentration of…
Galaxy mergers and gravitational lens statistics
NASA Technical Reports Server (NTRS)
Rix, Hans-Walter; Maoz, Dan; Turner, Edwin L.; Fukugita, Masataka
1994-01-01
We investigate the impact of hierarchical galaxy merging on the statistics of gravitational lensing of distant sources. Since no definite theoretical predictions for the merging history of luminous galaxies exist, we adopt a parameterized prescription, which allows us to adjust the expected number of pieces comprising a typical present galaxy at z approximately 0.65. The existence of global parameter relations for elliptical galaxies and constraints on the evolution of the phase space density in dissipationless mergers allow us to limit the possible evolution of galaxy lens properties under merging. We draw two lessons from implementing this lens evolution into statistical lens calculations: (1) the total optical depth to multiple imaging (e.g., of quasars) is quite insensitive to merging; (2) merging leads to a smaller mean separation of observed multiple images. Because merging does not reduce drastically the expected lensing frequency, it cannot make lambda-dominated cosmologies compatible with the existing lensing observations. A comparison with the data from the Hubble Space Telescope (HST) Snapshot Survey shows that models with little or no evolution of the lens population are statistically favored over strong merging scenarios. A specific merging scenario proposed by Toomre can be rejected (95% level) by such a comparison. Some versions of the scenario proposed by Broadhurst, Ellis, & Glazebrook are statistically acceptable.
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most are suitable mainly for detecting common variants associated with multiple traits. However, because of the low minor allele frequency of rare variants, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. We then take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to differing directions of effects of causal variants.
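The idea of a weighted combination of P values can be illustrated with a Fisher-style statistic (an illustrative stand-in; the ADA method's actual weighting and adaptive truncation differ):

```python
import math

from scipy import stats

def weighted_fisher(p_values, weights):
    """Weighted Fisher-style combination: T = sum_i w_i * (-2 log p_i).

    With equal weights this reduces to Fisher's method, whose statistic
    follows a chi-square distribution with 2m degrees of freedom under H0.
    """
    return sum(w * (-2.0 * math.log(p)) for p, w in zip(p_values, weights))

# Hypothetical single-variant p-values in one region, equal weights.
p_values = [0.01, 0.20, 0.03, 0.50]
weights = [1.0, 1.0, 1.0, 1.0]   # equal weights -> plain Fisher's method

t = weighted_fisher(p_values, weights)
# Chi-square reference distribution, valid for the equal-weight case only.
p_combined = stats.chi2.sf(t, df=2 * len(p_values))
print(t, p_combined)
```

With unequal weights the null distribution is no longer chi-square and must be obtained by permutation or by a moment-matching approximation.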
Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.
Groppe, David M; Urbach, Thomas P; Kutas, Marta
2011-12-01
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
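Of the four corrections reviewed, false discovery rate control via the Benjamini-Hochberg step-up procedure is compact enough to sketch directly:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure for FDR control.

    Find the largest k with p_(k) <= k*q/m, then reject the k hypotheses
    with the smallest p-values. Returns the indices of rejections.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

# The 15 p-values from the worked example in Benjamini & Hochberg (1995).
pvals = [0.0001, 0.0004, 0.0019, 0.0095, 0.0201, 0.0278, 0.0298, 0.0344,
         0.0459, 0.3240, 0.4262, 0.5719, 0.6528, 0.7590, 1.0000]
rejected = benjamini_hochberg(pvals, q=0.05)
print(rejected)  # -> [0, 1, 2, 3]: the four smallest p-values survive
```

In a mass univariate ERP/ERF analysis the input would be the thousands of per-channel, per-timepoint p-values, and the returned indices mark the (channel, time) tests declared significant.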
Implementation of false discovery rate for exploring novel paradigms and trait dimensions with ERPs.
Crowley, Michael J; Wu, Jia; McCreary, Scott; Miller, Kelly; Mayes, Linda C
2012-01-01
False discovery rate (FDR) is a multiple comparison procedure that targets the expected proportion of false discoveries among the discoveries. Employing FDR methods in event-related potential (ERP) research provides an approach to explore new ERP paradigms and ERP-psychological trait/behavior relations. In Study 1, we examined neural responses to escape behavior from an aversive noise. In Study 2, we correlated a relatively unexplored trait dimension, ostracism, with neural response. In both situations we focused on the frontal cortical region, applying channel-by-time plots to display statistically significant uncorrected data and FDR-corrected data, controlling for multiple comparisons.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between minimizing false positives (Type I error) and not being so stringent that true effects are omitted (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections, which allow for more sensitivity, may be beneficial but also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., neither omitting relevant regions nor encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
Shifflett, Benjamin; Huang, Rong; Edland, Steven D
2017-01-01
Genotypic association studies are prone to inflated type I error rates if multiple hypothesis testing is performed, e.g., sequentially testing for recessive, multiplicative, and dominant risk. Alternatives to multiple hypothesis testing include the model independent genotypic χ2 test, the efficiency robust MAX statistic, which corrects for multiple comparisons but with some loss of power, or a single Armitage test for multiplicative trend, which has optimal power when the multiplicative model holds but with some loss of power when dominant or recessive models underlie the genetic association. We used Monte Carlo simulations to describe the relative performance of these three approaches under a range of scenarios. All three approaches maintained their nominal type I error rates. The genotypic χ2 and MAX statistics were more powerful when testing a strictly recessive genetic effect or when testing a dominant effect when the allele frequency was high. The Armitage test for multiplicative trend was most powerful for the broad range of scenarios where heterozygote risk is intermediate between recessive and dominant risk. Moreover, all tests had limited power to detect recessive genetic risk unless the sample size was large, and conversely all tests were relatively well powered to detect dominant risk. Taken together, these results suggest the general utility of the multiplicative trend test when the underlying genetic model is unknown.
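The Armitage test for multiplicative trend has a short closed form. A sketch with hypothetical genotype counts, using scores (0, 1, 2) for the multiplicative coding:

```python
import math

from scipy import stats

def armitage_trend(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage score test for a linear trend in case proportion.

    `cases` and `controls` are counts per genotype category.
    Returns the z statistic and a two-sided p-value.
    """
    n = [c + d for c, d in zip(cases, controls)]
    big_n = sum(n)
    big_r = sum(cases)
    prop = big_r / big_n
    # Score-test numerator and its variance under H0.
    b = sum(w * (r - ni * prop) for w, r, ni in zip(scores, cases, n))
    var = prop * (1 - prop) * (
        sum(w * w * ni for w, ni in zip(scores, n))
        - sum(w * ni for w, ni in zip(scores, n)) ** 2 / big_n
    )
    z = b / math.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

# Hypothetical genotype counts showing a dose-response pattern in cases.
z, p = armitage_trend(cases=[20, 40, 40], controls=[40, 40, 20])
print(z, p)

# A table with constant case proportion across genotypes yields z = 0.
z0, _ = armitage_trend(cases=[10, 20, 10], controls=[10, 20, 10])
```

Changing the scores to (0, 0, 1) or (0, 1, 1) gives the recessive and dominant codings whose repeated testing motivates the corrections discussed in the abstract.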
A probabilistic analysis of electrical equipment vulnerability to carbon fibers
NASA Technical Reports Server (NTRS)
Elber, W.
1980-01-01
The statistical problems of airborne carbon fibers falling onto electrical circuits were idealized and analyzed. The probability of making contact between randomly oriented finite-length fibers and sets of parallel conductors with various spacings and lengths was developed theoretically. The probability of multiple fibers joining to bridge a single gap between conductors, or forming continuous networks, is included. From these theoretical considerations, practical statistical analyses to assess the likelihood of causing electrical malfunctions were produced. The statistics obtained were confirmed by comparison with results of controlled experiments.
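The flavor of the geometric probability developed in this report can be reproduced with the classical short-needle result: a randomly dropped fiber of length L crosses one of a family of parallel conductors spaced d apart (L ≤ d) with probability 2L/(πd). A Monte Carlo sketch (illustrative only, not the report's model):

```python
import math
import random

random.seed(1)

def crossing_probability(length, spacing, trials=200_000):
    """Monte Carlo estimate of the chance a randomly dropped fiber
    crosses one of a family of parallel lines `spacing` apart."""
    hits = 0
    for _ in range(trials):
        center = random.uniform(0, spacing / 2)   # distance to nearest line
        theta = random.uniform(0, math.pi / 2)    # fiber orientation
        if center <= (length / 2) * math.sin(theta):
            hits += 1
    return hits / trials

est = crossing_probability(length=0.5, spacing=1.0)
exact = 2 * 0.5 / (math.pi * 1.0)   # Buffon: 2L/(pi*d) for L <= d
print(est, exact)
```

The report's analysis goes further (finite conductor lengths, multiple fibers bridging a gap, network formation), but each of those probabilities can be estimated by the same simulate-and-count approach.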
Gram-Negative Bacterial Wound Infections
2015-05-01
…not statistically different from that of the control group. The levels (CFU/g) of bacteria in lung tissue correlated with the survival curves. The…median levels in the control and 2.5 mg/kg-treated groups were almost identical, at 9.04 and 9.07 log CFU/g, respectively. Figure 6B shows a decrease…Dunn's multiple comparison test found a statistically significant difference in bacterial burden when the control group was compared to animals
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
To compare the rapid-detection enzyme substrate technique with the multiple-tube fermentation technique for the detection of coliform bacteria in water, inoculated and real water samples were used to assess the equivalence and false-positive rates of the two methods. Results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. It is suggested that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Environmental Health Practice: Statistically Based Performance Measurement
Enander, Richard T.; Gagnon, Ronald N.; Hanumara, R. Choudary; Park, Eugene; Armstrong, Thomas; Gute, David M.
2007-01-01
Objectives. State environmental and health protection agencies have traditionally relied on a facility-by-facility inspection-enforcement paradigm to achieve compliance with government regulations. We evaluated the effectiveness of a new approach that uses a self-certification random sampling design. Methods. Comprehensive environmental and occupational health data from a 3-year statewide industry self-certification initiative were collected from representative automotive refinishing facilities located in Rhode Island. Statistical comparisons between baseline and postintervention data facilitated a quantitative evaluation of statewide performance. Results. The analysis of field data collected from 82 randomly selected automotive refinishing facilities showed statistically significant improvements (P<.05, Fisher exact test) in 4 major performance categories: occupational health and safety, air pollution control, hazardous waste management, and wastewater discharge. Statistical significance was also shown when a modified Bonferroni adjustment for multiple comparisons was performed. Conclusions. Our findings suggest that the new self-certification approach to environmental and worker protection is effective and can be used as an adjunct to further enhance state and federal enforcement programs. PMID:17267709
Network meta-analysis: a technique to gather evidence from direct and indirect comparisons
2017-01-01
Systematic reviews and pairwise meta-analyses of randomized controlled trials, at the intersection of clinical medicine, epidemiology and statistics, are positioned at the top of the evidence-based practice hierarchy. They are important tools for drug approval, for formulating clinical protocols and guidelines, and for decision-making. However, this traditional technique yields only part of the information that clinicians, patients and policy-makers need to make informed decisions, since it usually compares only two interventions at a time. For most clinical conditions many interventions are available, and few of them have been studied in head-to-head trials. This scenario precludes drawing conclusions about the full profile (e.g. efficacy and safety) of all interventions. The recent development of a new technique, usually referred to as network meta-analysis, indirect meta-analysis, or multiple or mixed treatment comparisons, has allowed the estimation of effect metrics for all possible comparisons in the same model, simultaneously gathering direct and indirect evidence. Over recent years this statistical tool has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both frequentist and Bayesian frameworks, and implemented in different software packages. However, the conduct, reporting and interpretation of network meta-analysis still pose multiple challenges that should be carefully considered, especially because this technique inherits all the assumptions of pairwise meta-analysis but with increased complexity. Thus, we aim to provide a basic explanation of how a network meta-analysis is conducted, highlighting its risks and benefits for evidence-based practice, including information on the evolution of the statistical methods, the assumptions involved, and the steps for performing the analysis. PMID:28503228
Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd
2005-09-01
We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis and when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival by the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods had better Hosmer-Lemeshow goodness-of-fit statistic fits than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs that might occur with multiplication. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
Implications of clinical trial design on sample size requirements.
Leon, Andrew C
2008-07-01
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
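The claim that multiple outcomes inflate type I error can be made concrete with the standard textbook calculation (not a result of this paper): with k independent outcome tests each at alpha = 0.05, the family-wise error rate is 1 - (1 - alpha)^k.

```python
# Family-wise type I error when testing k independent outcomes at
# a per-test alpha of 0.05: FWER = 1 - (1 - alpha) ** k.
alpha = 0.05
for k in (1, 2, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} outcomes -> family-wise error rate {fwer:.3f}")
```

Already at ten outcomes the chance of at least one false positive exceeds 40%, which is why trials with multiple endpoints need a prespecified primary outcome or a multiplicity adjustment.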
easyGWAS: A Cloud-Based Platform for Comparing the Results of Genome-Wide Association Studies.
Grimm, Dominik G; Roqueiro, Damian; Salomé, Patrice A; Kleeberger, Stefan; Greshake, Bastian; Zhu, Wangsheng; Liu, Chang; Lippert, Christoph; Stegle, Oliver; Schölkopf, Bernhard; Weigel, Detlef; Borgwardt, Karsten M
2017-01-01
The ever-growing availability of high-quality genotypes for a multitude of species has enabled researchers to explore the underlying genetic architecture of complex phenotypes at an unprecedented level of detail using genome-wide association studies (GWAS). The systematic comparison of results obtained from GWAS of different traits opens up new possibilities, including the analysis of pleiotropic effects. Other advantages that result from the integration of multiple GWAS are the ability to replicate GWAS signals and to increase statistical power to detect such signals through meta-analyses. In order to facilitate the simple comparison of GWAS results, we present easyGWAS, a powerful, species-independent online resource for computing, storing, sharing, annotating, and comparing GWAS. The easyGWAS tool supports multiple species, the uploading of private genotype data and summary statistics of existing GWAS, as well as advanced methods for comparing GWAS results across different experiments and data sets in an interactive and user-friendly interface. easyGWAS is also a public data repository for GWAS data and summary statistics and already includes published data and results from several major GWAS. We demonstrate the potential of easyGWAS with a case study of the model organism Arabidopsis thaliana, using flowering and growth-related traits. © 2016 American Society of Plant Biologists. All rights reserved.
Multiple comparisons permutation test for image based data mining in radiotherapy.
Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel
2013-12-23
Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
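The max-statistic permutation idea described above can be sketched on synthetic data. Everything below is hypothetical (random "images" of 100 voxels, not clinical dose distributions), and the voxelwise statistic is a plain two-sample t, one reasonable choice for the per-voxel difference the abstract summarizes via Tmax:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for dose "images": 40 patients with
# 100 voxels each; the first 5 voxels carry a real group difference.
n, p = 40, 100
labels = np.array([0] * 20 + [1] * 20)
images = rng.normal(size=(n, p))
images[labels == 1, :5] += 1.5

def tmax(images, labels):
    """Summarize voxelwise two-sample t statistics by their maximum |t|."""
    a, b = images[labels == 0], images[labels == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.max(np.abs((a.mean(axis=0) - b.mean(axis=0)) / se))

# Permuting outcome labels gives the null distribution of Tmax, and the
# resulting p-value is already adjusted for the multiple voxels.
observed = tmax(images, labels)
perm = np.array([tmax(images, rng.permutation(labels)) for _ in range(999)])
p_adj = (1 + np.sum(perm >= observed)) / (1 + len(perm))
print(f"Tmax = {observed:.2f}, adjusted p = {p_adj:.3f}")
```

Because the maximum over all voxels is recomputed in every permutation, a single comparison of `observed` against this null distribution controls the family-wise error rate across voxels.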
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which turns the comparison of two CADe systems into a multiple-comparisons problem. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
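The uncorrected step-up procedure the abstract compares against Bonferroni is, in its classical form, Hochberg's method; a minimal sketch follows with hypothetical sensitivity-difference p-values (the paper's correlation-adjusted variant is not reproduced here):

```python
def hochberg_stepup(pvals, alpha=0.05):
    """Hochberg step-up: starting from the largest p-value, reject the k
    smallest hypotheses for the largest k with p_(k) <= alpha / (m - k + 1).
    Returns one reject flag per input p-value."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank in range(m - 1, -1, -1):          # scan from largest p downwards
        if pvals[order[rank]] <= alpha / (m - rank):
            for j in order[:rank + 1]:
                reject[j] = True
            break
    return reject

# Hypothetical p-values for sensitivity differences at four operating points.
print(hochberg_stepup([0.010, 0.030, 0.040, 0.045]))
```

With these inputs the step-up rule rejects all four hypotheses, whereas plain Bonferroni at 0.05/4 = 0.0125 would reject only the first, illustrating the power gap between the two families of corrections the study quantifies.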
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Nengcheng; Zhang, Xiang
2018-02-01
Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparation. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals using support vector regression (SVR), the dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models, and the hybrid part combines the statistical and dynamic components by assigning weights based on their historical performance. The results indicate that the statistical and hybrid models give better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles, such as multiple statistical approaches, multiple dynamic models or multiple hybrid models, are highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warning in China.
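One simple way to "assign weights based on historical performance", as the hybrid model above does, is inverse-squared-error weighting of the component forecasts. The scheme and all numbers below are hypothetical; the paper's exact weighting formula may differ:

```python
# Hypothetical skill-based blending of a statistical and a dynamic rainfall
# forecast: weights proportional to inverse squared historical error (e.g.
# hindcast RMSE in mm). Illustrative numbers only, not from the study.
hist_rmse = {"statistical": 12.0, "dynamic": 20.0}
w = {k: 1.0 / v ** 2 for k, v in hist_rmse.items()}
total = sum(w.values())
w = {k: v / total for k, v in w.items()}          # normalize to sum to 1

forecast = {"statistical": 85.0, "dynamic": 60.0}  # mm, hypothetical
hybrid = sum(w[k] * forecast[k] for k in forecast)
print({k: round(v, 3) for k, v in w.items()}, round(hybrid, 1))
```

The component with the smaller historical error dominates the blend, so the hybrid forecast sits closer to the statistical model's value here.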
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
Hashim, Muhammad Jawad
2010-09-01
Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.
Rule-based statistical data mining agents for an e-commerce application
NASA Astrophysics Data System (ADS)
Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar
2003-03-01
Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.
Effects of preprocessing Landsat MSS data on derived features
NASA Technical Reports Server (NTRS)
Parris, T. M.; Cicone, R. C.
1983-01-01
Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions, while extracting physically meaningful features from the data. In general, the approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and Normalized Difference.
Felix, Leonardo Bonato; Miranda de Sá, Antonio Mauricio Ferreira Leite; Infantosi, Antonio Fernando Catelli; Yehia, Hani Camille
2007-03-01
The presence of cerebral evoked responses can be tested by using objective response detectors. They are statistical tests that provide a threshold above which responses can be assumed to have occurred. The detection power depends on the signal-to-noise ratio (SNR) of the response and the amount of data available. However, the correlation within the background noise could also affect the power of such detectors. For a fixed SNR, the detection can only be improved at the expense of using a longer stretch of signal. This can constitute a limitation, for instance, in monitored surgeries. Alternatively, multivariate objective response detection (MORD) could be used. This work applies two MORD techniques (multiple coherence and multiple component synchrony measure) to EEG data collected during intermittent photic stimulation. They were evaluated through Monte Carlo simulations, which also allowed verifying that correlation in the background reduces the detection rate. Considering the N EEG derivations as close as possible to the primary visual cortex, if N = 4, 6 or 8, multiple coherence leads to a statistically significant higher detection rate in comparison with multiple component synchrony measure. With the former, the best performance was obtained with six signals (O1, O2, T5, T6, P3 and P4).
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
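The abstract's core point, that the same raw network statistic means different things at different network sizes, can be shown by z-scoring against a density-matched Bernoulli G(n, p) reference, the simple mixture component the authors build on. The networks and numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def transitivity(adj):
    """Global clustering coefficient: 3 * triangles / connected triples."""
    a = adj.astype(float)
    triangles = np.trace(a @ a @ a) / 6.0          # each triangle counted 6x
    deg = a.sum(axis=0)
    triples = np.sum(deg * (deg - 1)) / 2.0        # paths of length 2
    return 3.0 * triangles / triples if triples else 0.0

def bernoulli_reference(n, density, reps=200):
    """Mean and sd of the statistic under G(n, p) with matched density."""
    vals = []
    for _ in range(reps):
        upper = np.triu(rng.random((n, n)) < density, 1)
        vals.append(transitivity(upper + upper.T))
    return np.mean(vals), np.std(vals)

# Two hypothetical observed networks of different sizes with the SAME raw
# clustering value; the size-matched reference gives them different z-scores.
zs = {}
for n, observed in [(30, 0.35), (120, 0.35)]:
    mu, sd = bernoulli_reference(n, density=0.2)
    zs[n] = (observed - mu) / sd
    print(f"n={n}: z = {zs[n]:.1f}")
```

An identical transitivity of 0.35 is far more surprising in the larger network, which is exactly why direct comparison of raw statistics across sizes is inappropriate.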
A Comparison of Statistical Models for Calculating Reliability of the Hoffmann Reflex
ERIC Educational Resources Information Center
Christie, A.; Kamen, G.; Boucher, Jean P.; Inglis, J. Greig; Gabriel, David A.
2010-01-01
The Hoffmann reflex is obtained through surface electromyographic recordings, and it is one of the most common neurophysiological techniques in exercise science. Measurement and evaluation of the peak-to-peak amplitude of the Hoffmann reflex has been guided by the observation that it is a variable response that requires multiple trials to obtain a…
Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.
Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey
2011-07-14
DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for the reconstruction and comparison of biological networks.
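A standard test for a "differentially associated pair", and one plausible choice among the several association metrics and tests the plug-ins offer, is Fisher's z comparison of two correlation coefficients. The correlations and sample sizes below are made up:

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilizing transform of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def dap_test(r1, n1, r2, n2):
    """Two-sample z test for a difference in gene-gene correlation
    between two phenotypes; returns (z, two-sided normal p-value)."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical pair: strongly co-expressed in one phenotype, not the other.
z, p = dap_test(0.8, 60, 0.1, 60)
print(f"z = {z:.2f}, p = {p:.2g}")
```

Applied genome-wide, the resulting p-values would then be fed through one of the multiple-comparison adjustments the abstract mentions.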
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James
2011-01-01
As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.
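The phenomenon described, a nominal FDR with a heavy-tailed FDP distribution under correlation, can be reproduced in a small simulation. The setup below (50 equicorrelated z tests, 10 true effects, Benjamini-Hochberg at q = 0.10) is hypothetical and much simpler than the paper's MANOVA setting:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def bh_reject(pvals, q=0.10):
    """Benjamini-Hochberg step-up procedure at FDR level q."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical setting: 50 equicorrelated z statistics, 10 true effects.
m, m1, rho, reps = 50, 10, 0.5, 2000
fdps = []
for _ in range(reps):
    z = np.sqrt(rho) * rng.normal() + np.sqrt(1 - rho) * rng.normal(size=m)
    z[:m1] += 3.0                                      # true signals
    pvals = np.array([math.erfc(abs(v) / math.sqrt(2)) for v in z])
    rej = bh_reject(pvals)
    fdps.append(rej[m1:].sum() / max(rej.sum(), 1))    # false / total discoveries
fdps = np.array(fdps)
print(f"mean FDP (the FDR) = {fdps.mean():.3f}; "
      f"P(FDP >= 0.3) = {(fdps >= 0.3).mean():.3f}")
```

Even with the mean FDP held near its nominal level, the shared noise component makes individual experiments occasionally suffer a much higher false discovery proportion, the paper's central warning.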
Pare, Guillaume; Mao, Shihong; Deng, Wei Q
2016-06-08
Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.
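The quantity LARGV targets is, per the abstract, asymptotically the variance explained by a multiple linear regression on all variants in the region. The sketch below computes that regression R^2 directly on simulated individual-level data; it illustrates the target quantity, not the LARGV summary-statistic estimator itself, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical region: 5 correlated "genotypes" (LD induced via a shared
# factor), phenotype driven by two causal variants plus noise.
n, m = 2000, 5
base = rng.normal(size=(n, 1))
snps = 0.6 * base + 0.8 * rng.normal(size=(n, m))
y = 0.2 * snps[:, 0] + 0.15 * snps[:, 2] + rng.normal(size=n)

# Regional variance explained = R^2 of the joint regression on the region.
X = np.column_stack([np.ones(n), snps])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"regional R^2 = {r2:.3f}")
```

Ranking regions by this quantity, or comparing two regions' values, are exactly the applications the abstract lists; LARGV recovers it from summary statistics plus an LD matrix instead of raw genotypes.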
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N
2018-03-01
Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, where only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced using complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset) and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of which 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study in comparison with multiple imputation which included all 6117 patients. The 2 methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
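The selection bias described above is easy to reproduce on synthetic data. Everything below is hypothetical (not NSQIP data): albumin rises slightly with age, younger patients are more likely to have the value missing, and a minimal multiple-imputation loop fills the gaps from a regression on age:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical cohort mimicking the abstract's pattern: younger patients
# are more likely to be missing a preoperative albumin value.
n = 5000
age = rng.normal(65, 10, n)
albumin = 4.0 + 0.01 * (age - 65) + rng.normal(0, 0.4, n)
missing = rng.random(n) < np.clip(0.9 - 0.01 * (age - 40), 0, 1)
obs = ~missing

# Complete case analysis drops every patient with a missing value,
# and the retained patients are systematically older.
print(f"complete cases: {obs.sum()} of {n}")
print(f"mean age kept vs dropped: {age[obs].mean():.1f} vs {age[missing].mean():.1f}")

# Minimal multiple-imputation sketch: draw each missing albumin from a
# regression on age plus residual noise, repeat M times, pool the means
# (real analyses would pool model coefficients via Rubin's rules).
coef = np.polyfit(age[obs], albumin[obs], 1)
sigma = np.std(albumin[obs] - np.polyval(coef, age[obs]))
pooled = []
for _ in range(20):
    imp = albumin.copy()
    imp[missing] = np.polyval(coef, age[missing]) + rng.normal(0, sigma, missing.sum())
    pooled.append(imp.mean())
print(f"complete-case mean {albumin[obs].mean():.3f} vs MI mean {np.mean(pooled):.3f}")
```

The complete-case mean is biased upward because the dropped (younger, lower-albumin) patients never enter it, while the imputation-based estimate recovers the full-cohort quantity.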
Multiple signal classification algorithm for super-resolution fluorescence microscopy
Agarwal, Krishna; Macháň, Radek
2016-01-01
Single-molecule localization techniques are restricted by long acquisition and computational times, or the need of special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique of wide-field fluorescence microscopy we call the multiple signal classification algorithm which has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance in comparison with single-molecule localization techniques and four contemporary statistical super-resolution methods for experiments of in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858
Fast alignment-free sequence comparison using spaced-word frequencies.
Leimeister, Chris-Andre; Boden, Marcus; Horwege, Sebastian; Lindner, Sebastian; Morgenstern, Burkhard
2014-07-15
Alignment-free methods for sequence comparison are increasingly used for genome analysis and phylogeny reconstruction; they circumvent various difficulties of traditional alignment-based approaches. In particular, alignment-free methods are much faster than pairwise or multiple alignments. They are, however, less accurate than methods based on sequence alignment. Most alignment-free approaches work by comparing the word composition of sequences. A well-known problem with these methods is that neighbouring word matches are far from independent. To reduce the statistical dependency between adjacent word matches, we propose to use 'spaced words', defined by patterns of 'match' and 'don't care' positions, for alignment-free sequence comparison. We describe a fast implementation of this approach using recursive hashing and bit operations, and we show that further improvements can be achieved by using multiple patterns instead of single patterns. To evaluate our approach, we use spaced-word frequencies as a basis for fast phylogeny reconstruction. Using real-world and simulated sequence data, we demonstrate that our multiple-pattern approach produces better phylogenies than approaches relying on contiguous words. Our program is freely available at http://spaced.gobics.de/. © The Author 2014. Published by Oxford University Press.
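The spaced-word idea above, extracting words through a pattern of match and don't-care positions, can be sketched in a few lines. The pattern, the toy sequences, and the dissimilarity measure below are all illustrative choices, not the paper's exact statistics:

```python
from collections import Counter
from itertools import compress

# A hypothetical pattern: 1 = match position (kept), 0 = don't care.
PATTERN = "110101"

def spaced_words(seq, pattern=PATTERN):
    """Count spaced words of `seq` under the given binary pattern."""
    keep = [c == "1" for c in pattern]
    span = len(pattern)
    words = ("".join(compress(seq[i:i + span], keep))
             for i in range(len(seq) - span + 1))
    return Counter(words)

a = spaced_words("ACGTACGTACGT")
b = spaced_words("ACGTACCTACGT")

# A toy distance: fraction of spaced-word mass not shared by both profiles.
shared = sum((a & b).values())
dist = 1 - 2 * shared / (sum(a.values()) + sum(b.values()))
print(f"spaced-word distance = {dist:.3f}")
```

Skipping the don't-care positions is what breaks the dependence between adjacent word matches; a production implementation would use recursive hashing and several patterns, as the abstract describes, rather than string joins.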
Sample size and power considerations in network meta-analysis
2012-01-01
Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
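For the simplest network, an indirect comparison of A versus B through a common comparator C, the variance of the indirect estimate is the sum of the two direct variances, which motivates a harmonic-mean-style effective sample size. The formula below follows that variance argument; the paper's full estimators also handle heterogeneity and larger networks, which this sketch omits, and the trial sizes are hypothetical:

```python
# Effective sample size of an indirect comparison A vs B through common
# comparator C: since var(indirect) ~ 1/n_AC + 1/n_BC, the pairwise
# trial that matches this precision has n = n_AC * n_BC / (n_AC + n_BC).
def effective_sample_size(n_ac, n_bc):
    return n_ac * n_bc / (n_ac + n_bc)

# Hypothetical totals: 1200 patients in A-vs-C trials, 400 in B-vs-C trials.
print(effective_sample_size(1200, 400))
```

Note how the smaller arm dominates: 1200 + 400 = 1600 randomized patients yield the evidential strength of only a 300-patient head-to-head trial, which is why power in treatment networks is so easily overestimated.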
FAMILY ANALYSIS OF IMMUNOGLOBULIN CLASSES AND SUBCLASSES IN CHILDREN WITH AUTISTIC DISORDER
Spiroski, Mirko; Trajkovski, Vladimir; Trajkov, Dejan; Petlichkovski, Aleksandar; Efinska-Mladenovska, Olivija; Hristomanova, Slavica; Djulejic, Eli; Paneva, Meri; Bozhikov, Jadranka
2009-01-01
Autistic disorder is a severe neurodevelopmental disorder characterized by a triad of impairments in reciprocal social interaction, verbal and nonverbal communication, and a pattern of repetitive stereotyped activities, behaviours and interests. There are strong lines of evidence to suggest that the immune system plays an important role in the pathogenesis of autistic disorder. The aim of this study was to analyze quantitative plasma concentrations of immunoglobulin classes and subclasses in autistic patients and their families. The investigation was performed retrospectively in 50 persons with autistic disorder in the Republic of Macedonia. Infantile autistic disorder was diagnosed by DSM-IV and ICD-10 criteria. Plasma immunoglobulin classes (IgM, IgA, and IgG) and subclasses (IgG1, IgG2, IgG3, and IgG4) were determined using a Nephelometer Analyzer BN-100. Multiple comparisons for the IgA variable showed statistically significant differences for three pairs: male autistic patients versus their fathers (p = 0.001), female autistic patients versus their mothers (p = 0.008), and healthy sisters versus their fathers (p = 0.011). Statistically significant differences among the three family groups (persons with autistic disorder, their fathers/mothers, and their brothers/sisters), independent of sex, were found for the IgA, IgG2, and IgG3 variables. Multiple comparisons for the IgA variable also showed statistically significant differences between children with autistic disorder and their fathers and mothers (p < 0.001), and between healthy brothers and sisters and their fathers and mothers (p < 0.001). Healthy children and children with autistic disorder from the same family should be compared for immunoglobulin classes and subclasses in order to avoid differences between generations. PMID:20001993
Statistics of multi-look AIRSAR imagery: A comparison of theory with measurements
NASA Technical Reports Server (NTRS)
Lee, J. S.; Hoppel, K. W.; Mango, S. A.
1993-01-01
The intensity and amplitude statistics of SAR images, such as L-band HH for SEASAT and SIR-B, and C-band VV for ERS-1, have been extensively investigated for various terrain, ground cover and ocean surfaces. Less well known are the statistics between multiple channels of polarimetric or interferometric SARs, especially for multi-look processed data. In this paper, we investigate the probability density functions (PDFs) of phase differences, the magnitudes of complex products and the amplitude ratios between polarization channels (i.e. HH, HV, and VV) using 1-look and 4-look AIRSAR polarimetric data. Measured histograms are compared with theoretical PDFs which were recently derived based on a complex Gaussian model.
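The multi-look phase-difference statistic described in this abstract can be computed empirically as follows (a sketch with hypothetical single-look complex channel arrays; the theoretical PDFs themselves are not reproduced here):

```python
import numpy as np

def multilook_phase_difference(z1, z2, n_looks):
    """Multi-look phase difference between two complex SAR channels.

    z1, z2: 1-D arrays of single-look complex samples (e.g. HH and VV).
    Looks are averaged in the complex (Hermitian product) domain before
    taking the argument, as in standard multi-look processing. The
    resulting samples can be histogrammed and compared with a theoretical
    phase-difference PDF.
    """
    prod = (z1 * np.conj(z2)).reshape(-1, n_looks).mean(axis=1)
    return np.angle(prod)
```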
Comment on "Evidence for mesothermy in dinosaurs".
Myhrvold, Nathan P
2015-05-29
Grady et al. (Reports, 13 June 2014, p. 1268) studied dinosaur metabolism by comparison of maximum somatic growth rate allometry with groups of known metabolism. They concluded that dinosaurs exhibited mesothermy, a metabolic rate intermediate between endothermy and ectothermy. Multiple statistical and methodological issues call into question the evidence for dinosaur mesothermy. Copyright © 2015, American Association for the Advancement of Science.
Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen
2016-01-01
[Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled in this study. The participants were divided into two groups: a clinical Pilates group and a control group. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed Up and Go test), tiredness (Modified Fatigue Impact Scale), depression (Beck Depression Inventory), and quality of life (Multiple Sclerosis International Quality of Life Questionnaire) were measured before and after treatment in all participants. [Results] There were statistically significant differences in balance, timed performance, tiredness and Multiple Sclerosis Functional Composite tests between pre- and post-treatment in the clinical Pilates group. We also found significant differences in the timed performance tests, the Timed Up and Go test and the Multiple Sclerosis Functional Composite between pre- and post-treatment in the control group. According to the difference analyses, there were significant differences in Multiple Sclerosis Functional Composite and Multiple Sclerosis International Quality of Life Questionnaire scores between the two groups in favor of the clinical Pilates group. The between-group comparison of measurements also showed statistically significant clinical differences in favor of the clinical Pilates group. Clinical Pilates improved cognitive functions and quality of life compared with traditional exercise. [Conclusion] In multiple sclerosis treatment, clinical Pilates should be used as a holistic approach by physical therapists. PMID:27134355
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, for analysing the functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and estimates a vector of influence strengths from multiple spike trains (called reference trains) to a selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity, an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of the postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (it is a binless method); it is applicable to cases where the sample size is small; it is sensitive enough to estimate weak influences; it supports the simultaneous analysis of multiple influences; and it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by a neural network model of leaky integrate-and-fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing the functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
Sahli, Sanem; Laszig, Roland; Aschendorff, Antje; Kroeger, Stefanie; Wesarg, Thomas; Belgin, Erol
2011-12-01
The aim of the study was to determine the dominant multiple intelligence types and to compare the learning preferences of Turkish cochlear-implanted children aged four to ten in Turkey and Germany according to the theory of multiple intelligences. The study was conducted on a total of 80 children in four groups in Freiburg/Germany and Ankara/Turkey. The assessments were performed at the University of Freiburg Cochlear Implant Center in Germany and at Hacettepe University, ENT Department, Audiology and Speech Pathology Section in Turkey. Data were collected by means of a General Information Form and a Cochlear Implant Information Form completed by parents. To determine the dominant multiple intelligence types of the children, the Teele Inventory of Multiple Intelligences (TIMI), developed by Sue Teele, was used. The results showed no statistically significant difference in dominant intelligence areas or in mean scores of multiple intelligence types between the control groups (p>0.05). Although the dominant intelligence areas differed (except for the first dominant intelligence) between cochlear-implanted children in Turkey and Germany, there was no statistically significant difference in mean scores of dominant multiple intelligence types. Every hearing-impaired child who starts training should be evaluated in terms of multiple intelligence areas, with strengths and weaknesses identified, and multiple intelligence activities should be used in their educational programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.
Luczak, Brian B; James, Benjamin T; Girgis, Hani Z
2017-12-06
Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution time. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding similar sequences to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the highest correlated paired statistics with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K-closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, resulting in reducing the memory requirement and increasing the speed remarkably. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists with identifying efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials. © The Author 2017. Published by Oxford University Press.
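A histogram-based alignment-free statistic of the kind surveyed above can be illustrated with k-mer counting and a simple L1 (Manhattan) distance (the paper evaluates 33 statistics and their combinations; this sketch shows only the general shape of such a comparison):

```python
from collections import Counter

def kmer_histogram(seq, k):
    """Count k-mers (words of length k) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def manhattan_distance(h1, h2):
    """Manhattan (L1) distance between two k-mer histograms, one of the
    simple histogram statistics of the kind the survey evaluates."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(w, 0) - h2.get(w, 0)) for w in keys)
```

Because k-mer counting and the distance are both linear in sequence length, such statistics can filter out dissimilar sequences far faster than quadratic alignment.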
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely used T-field, has been implemented in the correlation analysis for more accurate results. An example with in vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
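The core BPM idea, using one imaging modality as a voxel-wise regressor for another, can be sketched in NumPy (a toy analogue under assumed array layouts, not the MATLAB toolbox's implementation):

```python
import numpy as np

def voxelwise_multimodal_glm(y_images, x_images, covariates=None):
    """Voxel-wise GLM in the spirit of BPM: at each voxel, regress the
    primary modality on the secondary modality (plus optional subject-level
    covariates).

    y_images, x_images: (n_subjects, n_voxels) arrays, one row per subject.
    Returns the fitted slope for the secondary modality at each voxel.
    """
    n, v = y_images.shape
    slopes = np.empty(v)
    for j in range(v):
        cols = [np.ones(n), x_images[:, j]]   # intercept + modality regressor
        if covariates is not None:
            cols.extend(covariates.T)
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y_images[:, j], rcond=None)
        slopes[j] = beta[1]
    return slopes
```

The slope map can then be thresholded with an appropriate random-field or permutation correction, which is where the toolbox's integration with SPM comes in.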
Holmes, Susan; Alekseyenko, Alexander; Timme, Alden; Nelson, Tyrrell; Pasricha, Pankaj Jay; Spormann, Alfred
2011-01-01
This article explains the statistical and computational methodology used to analyze species abundances collected using the LBNL PhyloChip in a study of Irritable Bowel Syndrome (IBS) in rats. Some tools already available for the analysis of ordinary microarray data are useful in this type of statistical analysis. For instance, in correcting for multiple testing we use family-wise error rate control and step-down tests (available in the multtest package). Once the most significant species are chosen, we use the hypergeometric tests familiar from testing GO categories to test specific phyla and families. We provide examples of normalization, multivariate projections, batch effect detection and integration of phylogenetic covariation, as well as tree equalization and robustification methods.
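The family-wise error rate step-down correction mentioned above can be illustrated with the Holm procedure (a Python sketch of one standard step-down method; the R multtest package offers several, and this is not its code):

```python
def holm_stepdown(pvals):
    """Holm step-down procedure controlling the family-wise error rate.

    Returns adjusted p-values in the original order: p-values are sorted,
    the i-th smallest is multiplied by (m - i), and a running maximum
    enforces monotonicity of the adjusted values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adj[i] = min(1.0, running_max)
    return adj
```

An adjusted p-value below the nominal level then rejects the corresponding hypothesis while keeping the probability of any false rejection at that level.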
Kling, Teresia; Johansson, Patrik; Sanchez, José; Marinescu, Voichita D.; Jörnsten, Rebecka; Nelander, Sven
2015-01-01
Statistical network modeling techniques are increasingly important tools to analyze cancer genomics data. However, current tools and resources are not designed to work across multiple diagnoses and technical platforms, thus limiting their applicability to comprehensive pan-cancer datasets such as The Cancer Genome Atlas (TCGA). To address this, we describe a new data driven modeling method, based on generalized Sparse Inverse Covariance Selection (SICS). The method integrates genetic, epigenetic and transcriptional data from multiple cancers, to define links that are present in multiple cancers, a subset of cancers, or a single cancer. It is shown to be statistically robust and effective at detecting direct pathway links in data from TCGA. To facilitate interpretation of the results, we introduce a publicly accessible tool (cancerlandscapes.org), in which the derived networks are explored as interactive web content, linked to several pathway and pharmacological databases. To evaluate the performance of the method, we constructed a model for eight TCGA cancers, using data from 3900 patients. The model rediscovered known mechanisms and contained interesting predictions. Possible applications include prediction of regulatory relationships, comparison of network modules across multiple forms of cancer and identification of drug targets. PMID:25953855
The impact of prison reentry services on short-term outcomes: evidence from a multisite evaluation.
Lattimore, Pamela K; Visher, Christy A
2013-01-01
Renewed interest in prisoner rehabilitation to improve postrelease outcomes occurred in the 1990s, as policy makers reacted to burgeoning prison populations with calls to facilitate community reintegration and reduce recidivism. In 2003, the Federal government funded grants to implement locally designed reentry programs. Adult programs in 12 states were studied to determine the effects of the reentry programs on multiple outcomes. A two-stage matching procedure was used to examine the effectiveness of 12 reentry programs for adult males. In the first stage, "intact group matching" was used to identify comparison populations that were similar to program participants. In the second stage, propensity score matching was used to adjust for remaining differences between groups. Propensity score weighted logistic regression was used to examine the impact of reentry program participation on multiple outcomes measured 3 months after release. The study population was 1,697 adult males released from prisons in 2004-2005. Data consisted of interview data gathered 30 days prior to release and approximately 3 months following release, supplemented by administrative data from state departments of correction and the National Crime Information Center. Results suggest programs increased in-prison service receipt and produced modest positive outcomes across multiple domains (employment, housing, and substance use) 3 months after release. Although program participants reported fewer crimes, differences in postrelease arrest and reincarceration were not statistically significant. Incomplete implementation and service receipt by comparison group members may have resulted in insufficient statistical power to identify stronger treatment effects.
Park, Jae Hyon; Kim, Joo Hi; Jo, Kye Eun; Na, Se Whan; Eisenhut, Michael; Kronbichler, Andreas; Lee, Keum Hwa; Shin, Jae Il
2018-07-01
To provide an up-to-date summary of multiple sclerosis-susceptible gene variants and assess their noteworthiness in hopes of finding true associations, we investigated the results of 44 meta-analyses on gene variants and multiple sclerosis published through December 2016. Out of 70 statistically significant genotype associations, roughly a fifth (21%) of the comparisons showed a noteworthy false-positive report probability (FPRP) at a statistical power to detect an OR of 1.5 and at a prior probability of 10^-6 assumed for a random single nucleotide polymorphism. These associations (IRF8/rs17445836, STAT3/rs744166, HLA/rs4959093, HLA/rs2647046, HLA/rs7382297, HLA/rs17421624, HLA/rs2517646, HLA/rs9261491, HLA/rs2857439, HLA/rs16896944, HLA/rs3132671, HLA/rs2857435, HLA/rs9261471, HLA/rs2523393, HLA-DRB1/rs3135388, RGS1/rs2760524, PTGER4/rs9292777) also showed a noteworthy Bayesian false discovery probability (BFDP), and one additional association (CD24 rs8734/rs52812045) was also noteworthy via BFDP computation. Herein, we have identified several noteworthy biomarkers of multiple sclerosis susceptibility. We hope these data are used to study multiple sclerosis genetics and inform future screening programs.
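The FPRP criterion used above follows a simple formula (sketched here in the style of Wacholder et al.'s false-positive report probability; the paper's exact computation may include additional details):

```python
def fprp(p_value, power, prior):
    """False-positive report probability for an observed association.

    fprp = p * (1 - prior) / (p * (1 - prior) + power * prior)

    p_value: observed significance level of the association.
    power:   statistical power to detect the assumed effect size (e.g. OR 1.5).
    prior:   prior probability that the variant is truly associated
             (e.g. 1e-6 for a random SNP).
    """
    return p_value * (1 - prior) / (p_value * (1 - prior) + power * prior)
```

With a prior as small as 10^-6, only very small p-values keep the FPRP below a noteworthiness threshold such as 0.2, which is why most nominally significant associations fail the criterion.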
Tian, Lili; Yu, Tingting; Huebner, E. Scott
2017-01-01
The purpose of this study was to examine the multiple mediational roles of academic social comparison directions (upward academic social comparison and downward academic social comparison) on the relationships between achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and subjective well-being (SWB) in school (school satisfaction, school affect) in adolescent students in China. A total of 883 Chinese adolescent students (430 males; Mean age = 12.99) completed a multi-measure questionnaire. Structural equation modeling was used to examine the hypotheses. Results indicated that (1) mastery goal orientations and performance-approach goal orientations both showed a statistically significant, positive correlation with SWB in school whereas performance-avoidance goal orientations showed a statistically significant, negative correlation with SWB in school among adolescents; (2) upward academic social comparisons mediated the relation between the three types of achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and SWB in school; (3) downward academic social comparisons mediated the relation between mastery goal orientations and SWB in school as well as the relation between performance-avoidance goal orientations and SWB in school. The findings suggest possible important cultural differences in the antecedents of SWB in school in adolescent students in China compared to adolescent students in Western nations. PMID:28197109
eShadow: A tool for comparing closely related sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovcharenko, Ivan; Boffelli, Dario; Loots, Gabriela G.
2004-01-15
Primate sequence comparisons are difficult to interpret due to the high degree of sequence similarity shared between such closely related species. Recently, a novel method, phylogenetic shadowing, has been pioneered for predicting functional elements in the human genome through the analysis of multiple primate sequence alignments. We have expanded this theoretical approach to create a computational tool, eShadow, for the identification of elements under selective pressure in multiple sequence alignments of closely related genomes, such as in comparisons of human to primate or mouse to rat DNA. This tool integrates two different statistical methods and allows for the dynamic visualization of the resulting conservation profile. eShadow also includes a versatile optimization module capable of training the underlying Hidden Markov Model to differentially predict functional sequences. This module grants the tool high flexibility in the analysis of multiple sequence alignments and in comparing sequences with different divergence rates. Here, we describe the eShadow comparative tool and its potential uses for analyzing both multiple nucleotide and protein alignments to predict putative functional elements. The eShadow tool is publicly available at http://eshadow.dcode.org/
Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.
Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly
2015-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases data output per single blotting cycle up to tenfold; allows concurrent measurement of the expression of up to nine different total and/or posttranslationally modified proteins from the same sample loading; and substantially improves data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data and is therefore advantageous in biomedical diagnostics, systems biology, and cell signaling research.
Jiang, Yueyang; Kim, John B.; Still, Christopher J.; Kerns, Becky K.; Kline, Jeffrey D.; Cunningham, Patrick G.
2018-01-01
Statistically downscaled climate data have been widely used to explore possible impacts of climate change in various fields of study. Although many studies have focused on characterizing differences in the downscaling methods, few studies have evaluated actual downscaled datasets being distributed publicly. Spatially focusing on the Pacific Northwest, we compare five statistically downscaled climate datasets distributed publicly in the US: ClimateNA, NASA NEX-DCP30, MACAv2-METDATA, MACAv2-LIVNEH and WorldClim. We compare the downscaled projections of climate change, and the associated observational data used as training data for downscaling. We map and quantify the variability among the datasets and characterize the spatio-temporal patterns of agreement and disagreement among the datasets. Pair-wise comparisons of datasets identify the coast and high-elevation areas as areas of disagreement for temperature. For precipitation, high-elevation areas, rainshadows and the dry, eastern portion of the study area have high dissimilarity among the datasets. By spatially aggregating the variability measures into watersheds, we develop guidance for selecting datasets within the Pacific Northwest climate change impact studies. PMID:29461513
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs poorly compared with exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip.
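The asymptotic normal approximation that the exact method is compared against can be sketched as follows (an illustration of the approximate pairwise test, not the authors' exact combinatorial algorithm):

```python
import math
from itertools import combinations

def friedman_pairwise_z(ranks_matrix):
    """Pairwise comparison of Friedman rank sums via the normal approximation.

    ranks_matrix: n datasets x k classifiers; each row holds ranks 1..k.
    Under H0, a rank-sum difference has variance n*k*(k+1)/6, so the
    standardized difference is compared against a standard normal.
    Returns {(i, j): two-sided approximate p-value}.
    """
    n = len(ranks_matrix)
    k = len(ranks_matrix[0])
    sums = [sum(row[j] for row in ranks_matrix) for j in range(k)]
    var = n * k * (k + 1) / 6.0
    out = {}
    for i, j in combinations(range(k), 2):
        z = abs(sums[i] - sums[j]) / math.sqrt(var)
        out[(i, j)] = math.erfc(z / math.sqrt(2))  # two-sided normal tail
    return out
```

As the abstract notes, this approximation is adequate in the bulk of the distribution but degrades in the extreme tail, which is exactly where multiple-testing-corrected p-values live.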
Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi
2007-10-01
Traditionally, the multiple linear regression technique has been one of the most widely used models in simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression will fail to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for the purpose of predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, to quantify the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, a multiple linear regression analysis that was being used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flow while the low and medium flow magnitudes were estimated closer to the observed data. The comparison of the prediction accuracy of the neuro-fuzzy and linear regression methods indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model improved the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.
NASA Astrophysics Data System (ADS)
Yu, Fu-Yun; Liu, Yu-Hsin
2005-09-01
The potential value of a multiple-choice question-construction instructional strategy for supporting students’ learning of physics experiments was examined in this study. Forty-two university freshmen participated for a whole semester. A constant comparison method, adopted to categorize students’ qualitative data, indicated that the influences of multiple-choice question construction were evident in several significant ways (promoting constructive and productive study habits; reflecting on and previewing course-related materials; increasing in-group communication and interaction; breaking passive learning styles and habits, etc.), which, taken together, not only enhanced students’ comprehension and retention of the acquired knowledge, but also helped instill a sense of empowerment and learning community within the participants. Analysis with one-group t-tests, using 3 as the expected mean, on the quantitative data further found that students’ satisfaction with past learning experience and perceptions of this strategy’s potential for promoting learning were statistically significant at the 0.0005 level, while learning anxiety was not statistically significant. Suggestions for incorporating question-generation activities within the classroom and topics for future studies are offered.
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depressive disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
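G*Power itself is a GUI program, but the kind of calculation it performs for a test of a bivariate correlation can be approximated with the Fisher z transformation. The effect size and sample size below are arbitrary illustration values, not figures from the paper:

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0,
    using the Fisher z transformation (variance 1 / (n - 3))."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = np.arctanh(r) * np.sqrt(n - 3)   # noncentrality on the z scale
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Power to detect r = 0.3 with n = 84 at alpha = .05 (two-tailed),
# which lands near the conventional 80% target.
power = correlation_power(r=0.3, n=84)
```

The same function can be inverted numerically to solve for the sample size needed for a target power, which is the a priori analysis mode G*Power offers.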
Amen, Daniel G; Hanks, Chris; Prunella, Jill R; Green, Aisa
2007-01-01
The authors explored differences in regional cerebral blood flow (rCBF) in 11 impulsive murderers and 11 healthy comparison subjects using single photon emission computed tomography. The authors assessed subjects at rest and during a computerized go/no-go concentration task. Using statistical parametric mapping software, the authors performed voxel-by-voxel t tests to assess significant differences, making family-wise error corrections for multiple comparisons. Murderers were found to have significantly lower relative rCBF during concentration, particularly in areas associated with concentration and impulse control. These results indicate that nonemotionally laden stimuli may result in frontotemporal dysregulation in people predisposed to impulsive violence.
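The mass-univariate pattern described here, one t test per voxel followed by a family-wise correction, can be sketched with simulated data. Bonferroni stands in for the mapping software's correction, and the group sizes, voxel count, and planted effect are all invented:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_voxels, n_per_group = 1000, 11

# Simulated rCBF values: 11 subjects per group, 1000 voxels each.
patients = rng.normal(0.0, 1.0, size=(n_per_group, n_voxels))
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_voxels))
patients[:, :20] -= 3.0   # plant a strong deficit in the first 20 voxels

# One two-sample t test per voxel (the mass-univariate approach).
t, p = ttest_ind(patients, controls, axis=0)

# Family-wise error control via Bonferroni: reject only if p < alpha / m.
alpha = 0.05
significant = p < alpha / n_voxels
```

With 1,000 tests the uncorrected threshold of 0.05 would yield roughly 50 false positives under the null; the corrected threshold keeps the chance of even one false positive at about 5%.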
NASA Astrophysics Data System (ADS)
Brankov, Elvira
This thesis presents a methodology for examining the relationship between synoptic-scale atmospheric transport patterns and observed pollutant concentration levels. It involves calculating a large number of back-trajectories from the observational site and subjecting them to cluster analysis. The pollutant concentration data observed at that site are then segregated according to the back-trajectory clusters. If the pollutant observations extend over several seasons, it is important to filter out seasonal and long-term components from the time series data before cluster-segregation of the pollutant data, because only the short-term component of the time series data is related to the synoptic-scale transport. Multiple comparison procedures are used to test for significant differences in the chemical composition of pollutant data associated with each cluster. This procedure is useful in indicating potential pollutant source regions and isolating meteorological regimes associated with pollutant transport from those regions. If many observational sites are available, the spatial and temporal scales of the pollution transport from a given direction can be extracted through time-lagged inter-site correlation analysis of pollutant concentrations. The proposed methodology is applicable to any pollutant at any site if a sufficiently abundant data set is available. This is illustrated through examination of five-year-long time series data of ozone concentrations at several sites in the Northeast. The results provide evidence of ozone transport to these sites, revealing the characteristic spatial and temporal scales involved in the transport and identifying source regions for this pollutant. Problems related to statistical analyses of censored data are addressed in the second half of this thesis.
Although censoring (reporting concentrations in a non-quantitative way) is typical for trace-level measurements, methods for statistical analysis, inference and interpretation of such data are complex and still under development. In this study, multiple comparison of censored data sets was required in order to examine the influence of synoptic-scale circulations on concentration levels of several trace-level toxic pollutants observed in the Northeast (e.g., As, Se, Mn, V, etc.). Since traditional multiple comparison procedures are not readily applicable to such data sets, a Monte Carlo simulation study was performed to assess several nonparametric methods for multiple comparison of censored data sets. Application of an appropriate comparison procedure to clusters of toxic trace elements observed in the Northeast led to the identification of potential source regions and atmospheric patterns associated with the long-range transport of these pollutants. A method for comparison of proportions and elemental ratio calculations were used to confirm or clarify these inferences with a greater degree of confidence.
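One way to assess a nonparametric comparison procedure on censored data, in the spirit of the Monte Carlo study described, is to simulate under the null hypothesis and check the attained Type I error rate. The detection limit, the lognormal concentrations, and the substitution rule below are all hypothetical choices for illustration (substitution is a common but debated convention, not the thesis's method):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Monte Carlo check of the attained Type I error of the Kruskal-Wallis
# test when left-censored values are all set to the detection limit.
DL = 0.5                    # hypothetical detection limit
n_sim, n, k = 500, 30, 3    # simulations, group size, number of groups
alpha = 0.05
rejections = 0
for _ in range(n_sim):
    # Under H0 all k groups share the same lognormal distribution.
    groups = [rng.lognormal(0.0, 1.0, n) for _ in range(k)]
    groups = [np.where(g < DL, DL, g) for g in groups]   # censor at DL
    _, p = kruskal(*groups)  # tie-corrected rank test
    rejections += p < alpha
type1 = rejections / n_sim   # should sit near the nominal alpha
```

A procedure whose simulated Type I error drifts far from the nominal alpha under realistic censoring rates would be rejected in favor of one of the alternatives the thesis evaluates.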
Schwartz, Carolyn E; Patrick, Donald L
2014-07-01
When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war'. We discuss the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.
Busfield, Benjamin T; Kharrazi, F Daniel; Starkey, Chad; Lombardo, Stephen J; Seegmiller, Jeffrey
2009-08-01
The purpose of this study was to determine the rate of return to play and to quantify the effect on the basketball player's performance after surgical reconstruction of the anterior cruciate ligament (ACL). Surgical injuries involving the ACL were queried for a 10-year period (1993-1994 season through 2004-2005 season) from the database maintained by the National Basketball Association (NBA). Standard statistical categories and player efficiency rating (PER), a measure that accounts for positive and negative playing statistics, were calculated to determine the impact of the injury on player performance relative to a matched comparison group. Over the study period, 31 NBA players had 32 ACL reconstructions. Two players were excluded because of multiple ACL injuries, one because he never participated in league play, and another because the injury resulted from nonathletic activity. Of the 27 players in the study group, 6 (22%) did not return to NBA competition. Of the 21 players (78%) who did return to play, 4 (15%) had an increase in the preinjury PER, 5 (19%) remained within 1 point of the preinjury PER, and the PER decreased by more than 1 point after return to play in 12 (44%). Although decreases occurred in most of the statistical categories for players returning from ACL surgery, the number of games played, field goal percentage, and number of turnovers per game were the only categories with a statistically significant decrease. Players in the comparison group had a statistically significant increase in the PER over their careers, whereas the study group had a marked, though not statistically significant, decrease in the PER in the season after reconstruction. After ACL reconstruction in 27 basketball players, 22% did not return to a sanctioned NBA game. For those returning to play, performance decreased by more than 1 PER point in 44% of the patients, although the changes were not statistically significant relative to the comparison group.
Level IV, therapeutic case series.
Gabbe, Belinda J.; Harrison, James E.; Lyons, Ronan A.; Jolley, Damien
2011-01-01
Background Injury is a leading cause of the global burden of disease (GBD). Estimates of non-fatal injury burden have been limited by a paucity of empirical outcomes data. This study aimed to (i) establish the 12-month disability associated with each GBD 2010 injury health state, and (ii) compare approaches to modelling the impact of multiple injury health states on disability as measured by the Glasgow Outcome Scale – Extended (GOS-E). Methods 12-month functional outcomes for 11,337 survivors to hospital discharge were drawn from the Victorian State Trauma Registry and the Victorian Orthopaedic Trauma Outcomes Registry. ICD-10 diagnosis codes were mapped to the GBD 2010 injury health states. Cases with a GOS-E score >6 were defined as “recovered.” A split dataset approach was used. Cases were randomly assigned to development or test datasets. Probability of recovery for each health state was calculated using the development dataset. Three logistic regression models were evaluated: a) additive, multivariable; b) “worst injury;” and c) multiplicative. Models were adjusted for age and comorbidity and investigated for discrimination and calibration. Findings A single injury health state was recorded for 46% of cases (1–16 health states per case). The additive (C-statistic 0.70, 95% CI: 0.69, 0.71) and “worst injury” (C-statistic 0.70; 95% CI: 0.68, 0.71) models demonstrated higher discrimination than the multiplicative (C-statistic 0.68; 95% CI: 0.67, 0.70) model. The additive and “worst injury” models demonstrated acceptable calibration. Conclusions The majority of patients survived with persisting disability at 12-months, highlighting the importance of improving estimates of non-fatal injury burden. Additive and “worst” injury models performed similarly. GBD 2010 injury states were moderately predictive of recovery 1-year post-injury. 
Further evaluation using additional measures of health status and functioning and comparison with the GBD 2010 disability weights will be needed to optimise injury states for future GBD studies. PMID:21984951
Usadel, Björn; Nagel, Axel; Steinhauser, Dirk; Gibon, Yves; Bläsing, Oliver E; Redestig, Henning; Sreenivasulu, Nese; Krall, Leonard; Hannah, Matthew A; Poree, Fabien; Fernie, Alisdair R; Stitt, Mark
2006-12-18
Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large scale experiments have become more common, which investigate multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs.PageMan provides a fast overview of single treatments, allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species. 
In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as well as a tutorial, and is freely available at http://mapman.mpimp-golm.mpg.de/pageman/. PageMan allows multiple microarray experiments to be efficiently condensed into a single-page graphical display. The flexible interface allows data to be quickly and easily visualized, facilitating comparisons within experiments and to published experiments, thus enabling researchers to gain a rapid overview of the biological responses in the experiments.
Nilsagård, Ylva E; Forsberg, Anette S; von Koch, Lena
2013-02-01
The use of interactive video games is expanding within rehabilitation. The evidence base is, however, limited. Our aim was to evaluate the effects of a Nintendo Wii Fit® balance exercise programme on balance function and walking ability in people with multiple sclerosis (MS). A multi-centre, randomised, controlled, single-blinded trial with random allocation to exercise or no exercise. The exercise group participated in a programme of 12 supervised 30-min sessions of balance exercises using Wii games, twice a week for 6-7 weeks. Primary outcome was the Timed Up and Go test (TUG). In total, 84 participants were enrolled; four were lost to follow-up. After the intervention, there were no statistically significant differences between groups, but effect sizes for the TUG, TUGcognitive, and the Dynamic Gait Index (DGI) were moderate, and small for all other measures. Statistically significant improvements within the exercise group were present for all measures (large to moderate effect sizes) except walking speed and balance confidence. The non-exercise group showed statistically significant improvements for the Four Square Step Test and the DGI. In comparison with no intervention, a programme of supervised balance exercise using Nintendo Wii Fit® did not yield statistically significant differences, but presented moderate effect sizes for several measures of balance performance.
Comparing physiographic maps with different categorisations
NASA Astrophysics Data System (ADS)
Zawadzka, J.; Mayr, T.; Bellamy, P.; Corstanje, R.
2015-02-01
This paper addresses the need for a robust map comparison method suitable for finding similarities between thematic maps with different forms of categorisation. In our case, the requirement was to establish the information content of newly derived physiographic maps with regard to a set of reference maps for a study area in England and Wales. Physiographic maps were derived from the 90 m resolution SRTM DEM, using a suite of existing and new digital landform mapping methods, with the overarching purpose of enhancing the physiographic unit component of the Soil and Terrain database (SOTER). Reference maps were seven soil and landscape datasets mapped at scales ranging from 1:200,000 to 1:5,000,000. A review of commonly used statistical methods for categorical comparisons was performed and, of these, the Cramér's V statistic was identified as the most appropriate for comparison of maps with different legends. Interpretation of the multiple Cramér's V values resulting from one-by-one comparisons of the physiographic and baseline maps was facilitated by multi-dimensional scaling and calculation of average distances between the maps. The method allowed similarities and dissimilarities amongst the physiographic and baseline maps to be found, and informed the recommendation of the most suitable methodology for terrain analysis in the context of soil mapping.
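Cramér's V for two categorical maps can be computed directly from their cross-tabulation; it is legend-independent, which is why it suits maps with different categorisations. The toy rasters and the `cramers_v` helper below are illustrative, not code from the paper:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(map_a, map_b):
    """Cramér's V between two categorical rasters (flattened arrays):
    V = sqrt(chi2 / (n * (min(r, c) - 1))), so V is in [0, 1]."""
    cats_a, cats_b = np.unique(map_a), np.unique(map_b)
    table = np.array([[np.sum((map_a == a) & (map_b == b)) for b in cats_b]
                      for a in cats_a])
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Toy flattened rasters with three classes each.
a = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2])
b = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2])

v_self = cramers_v(a, a)   # identical categorisations give V = 1
v_ab = cramers_v(a, b)     # partial association gives 0 < V < 1
```

Repeating `cramers_v` for every map pair yields the matrix of one-by-one comparisons that the paper then feeds into multi-dimensional scaling.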
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
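The BIC comparison the authors describe reduces to a one-line penalty on log-likelihood. The fitted log-likelihoods below are hypothetical stand-ins for what a hill-climbing fit of two associative-learning models to the same data might return:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower values indicate the
    preferred model after penalizing free parameters."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical best fits found by parameter optimization:
# model B fits slightly better but uses two extra free parameters.
bic_a = bic(log_likelihood=-120.0, n_params=3, n_obs=200)
bic_b = bic(log_likelihood=-118.5, n_params=5, n_obs=200)
preferred = "A" if bic_a < bic_b else "B"
```

Here the 1.5-unit gain in log-likelihood does not justify two extra parameters at n = 200, so the simpler model is preferred; this is exactly the trade-off the authors argue makes multi-model comparison more informative than simulating a single model.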
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
NF-kB2/p52 Activation and Androgen Receptor Signaling in Prostate Cancer
2010-08-01
Materials and Methods; ref. 38). Statistical analysis. Data are shown as the mean ± SD. Multiple group comparison was performed by one-way ANOVA followed... Moroz, Byron Crawford, Asim Abdel-Mageed, New Orleans, LA INTRODUCTION AND OBJECTIVES: African American men (AA) have twice the incidence and mortality
Energy dependence of strangeness production and event-by-event fluctuations
NASA Astrophysics Data System (ADS)
Rustamov, Anar
2018-02-01
We review the energy dependence of strangeness production in nucleus-nucleus collisions and contrast it with the experimental observations in pp and p-A collisions at LHC energies as a function of the charged particle multiplicities. For the high multiplicity final states the results from pp and p-Pb reactions systematically approach the values obtained from Pb-Pb collisions. In statistical models this implies an approach to the thermodynamic limit, where differences of mean multiplicities between various formalisms, such as Canonical and Grand Canonical Ensembles, vanish. Furthermore, we report on event-by-event net-proton fluctuations as measured by STAR at RHIC/BNL and by ALICE at LHC/CERN and discuss various non-dynamical contributions to these measurements, which should be properly subtracted before comparison to theoretical calculations on dynamical net-baryon fluctuations.
Schuch, Klaus; Logothetis, Nikos K.; Maass, Wolfgang
2011-01-01
A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses were suggested to facilitate computations in the cortex, acquiring a realistic firing regimen in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the “statistical fingerprint” of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way which matches the statistical structure of the recorded data surprisingly well. The deviations between the firing regimen of the model and the in vivo data are on the same level as deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity.
To reach a realistic firing state, it was not only necessary to include both N-methyl-d-aspartate and GABAB synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance of carefully adjusting the effect of inhibition to achieve realistic dynamics in current network models. PMID:21106898
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
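The core idea of nested sampling, compressing prior mass while accumulating likelihood-weighted shells to estimate the evidence Z, can be conveyed with a minimal 1-D toy. This sketch is not the authors' implementation: it uses naive rejection sampling from the prior, which only works for trivial problems, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 0.1

def loglike(theta):
    # Gaussian log-likelihood peaked at 0.5, over a Uniform(0, 1) prior.
    return -0.5 * ((theta - 0.5) / SIGMA) ** 2

# Analytic evidence for reference (edge effects are negligible here).
Z_TRUE = SIGMA * np.sqrt(2 * np.pi)

def nested_sampling(n_live=100, max_iter=1200):
    live = rng.uniform(0, 1, n_live)
    live_logl = loglike(live)
    z, x_prev = 0.0, 1.0
    for i in range(1, max_iter + 1):
        worst = np.argmin(live_logl)
        l_star = live_logl[worst]
        x_i = np.exp(-i / n_live)            # expected prior-mass shrinkage
        z += np.exp(l_star) * (x_prev - x_i) # weight of the discarded shell
        x_prev = x_i
        # Replace the worst point with a prior draw constrained to L > L*.
        while True:
            cand = rng.uniform(0, 1, 1000)
            ok = cand[loglike(cand) > l_star]
            if ok.size:
                live[worst] = ok[0]
                live_logl[worst] = loglike(ok[0])
                break
        if x_i * np.exp(live_logl.max()) < 1e-4 * z:
            break                            # remaining mass is negligible
    z += x_prev * np.exp(live_logl).mean()   # contribution of live points
    return z

z_est = nested_sampling()
```

Real applications, like the Salmonella models here, replace the rejection step with constrained MCMC or ellipsoidal sampling and work in log space to avoid underflow; the posterior samples needed for parameter distributions fall out of the same run as a by-product.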
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). A total of 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
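Power computations of the kind underlying this re-analysis can be reproduced with the noncentral t distribution. The group sizes below echo the review's median total n of about 25 but are otherwise illustrative:

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def two_sample_power(smd, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test to detect a given
    standardized mean difference (SMD), via the noncentral t."""
    df = 2 * n_per_group - 2
    ncp = smd * np.sqrt(n_per_group / 2.0)   # noncentrality parameter
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# A median-sized study of ~25 participants is about 12-13 per group:
power_small = two_sample_power(0.2, 13)   # small effect, SMD = 0.2
power_large = two_sample_power(0.8, 13)   # large effect, SMD = 0.8
```

With 13 per group, power for a small effect is under 10%, which is why almost no study in the review could detect, or plausibly exclude, a small difference.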
Atypical nucleus accumbens morphology in psychopathy: another limbic piece in the puzzle.
Boccardi, Marina; Bocchetta, Martina; Aronen, Hannu J; Repo-Tiihonen, Eila; Vaurio, Olli; Thompson, Paul M; Tiihonen, Jari; Frisoni, Giovanni B
2013-01-01
Psychopathy has been associated with increased putamen and striatum volumes. The nucleus accumbens - a key structure in reversal learning, less effective in psychopathy - has not yet received specific attention. Moreover, basal ganglia morphology has never been explored. We examined the morphology of the caudate, putamen and accumbens, manually segmented from magnetic resonance images of 26 offenders (age: 32.5 ± 8.4) with medium-high psychopathy (mean PCL-R=30 ± 5) and 25 healthy controls (age: 34.6 ± 10.8). Local differences were statistically modeled using a surface-based radial distance mapping method (p<0.05; multiple comparisons correction through permutation tests). In psychopathy, the caudate and putamen had normal global volume, but different morphology, significant after correction for multiple comparisons, for the right dorsal putamen (permutation test: p=0.02). The volume of the nucleus accumbens was 13% smaller in psychopathy (p corrected for multiple comparisons <0.006). The atypical morphology consisted of predominant anterior hypotrophy bilaterally (10-30%). Caudate and putamen local morphology displayed negative correlation with the lifestyle factor of the PCL-R (permutation test: p=0.05 and 0.03). From these data, psychopathy appears to be associated with an atypical striatal morphology, with highly significant global and local differences of the accumbens. This is consistent with the clinical syndrome and with theories of limbic involvement. Copyright © 2013 Elsevier Ltd. All rights reserved.
Goodnight, Jackson A.; D’Onofrio, Brian M.; Cherlin, Andrew J.; Emery, Robert E.; Van Hulle, Carol A.; Lahey, Benjamin B.
2012-01-01
Previous studies of the association between multiple parental relationship transitions (i.e., when a parent begins or terminates an intimate relationship involving cohabitation) and offspring antisocial behavior have varied in their efforts to rule out confounding influences, such as parental antisocial behavior and low income. They also have been limited in the representativeness of their samples. Thus, it remains unclear to what degree parents’ multiple relationship transitions have independent effects on children’s antisocial behavior. Analyses were conducted using data on 8,652 6–9-year-old, 6,911 10–13-year-old, and 6,495 14-17-year-old offspring of a nationally representative sample of U.S. women. Cousin-comparisons were used in combination with statistical covariates to evaluate the associations between maternal relationship transitions and offspring antisocial behavior in childhood and adolescence. Cousin-comparisons suggested that associations between maternal relationship transitions and antisocial behavior in childhood and early adolescence are largely explained by confounding factors. In contrast, the associations between maternal relationship transitions and offspring delinquency in late adolescence were robust to measured and unmeasured confounds. The present findings suggest that interventions aimed at reducing exposure to parental relationship transitions or addressing the psychosocial consequences of exposure to parental relationship transitions could reduce risk for offspring delinquency in late adolescence. PMID:22829173
Effect of Air Pollution on Exacerbations of Bronchiectasis in Badalona, Spain, 2008-2016.
Garcia-Olivé, Ignasi; Stojanovic, Zoran; Radua, Joaquim; Rodriguez-Pons, Laura; Martinez-Rivera, Carlos; Ruiz Manzano, Juan
2018-05-17
Air pollution has been widely associated with respiratory diseases. Nevertheless, the association between air pollution and exacerbations of bronchiectasis has been less studied. To analyze the effect of air pollution on exacerbations of bronchiectasis. This was a retrospective observational study conducted in Badalona. The number of daily hospital admissions and emergency room visits related to exacerbation of bronchiectasis (ICD-9 code 494.1) between 2008 and 2016 was obtained. We used simple Poisson regressions to test the effects of daily mean temperature, SO2, NO2, CO, and PM10 levels on bronchiectasis-related emergencies and hospitalizations on the same day and 1-4 days after. All p values were corrected for multiple comparisons. SO2 was significantly associated with an increase in the number of hospitalizations (lags 0, 1, 2, and 3). None of these associations remained significant after correcting for multiple comparisons. The number of emergency room visits was associated with higher levels of SO2 (lags 0-4). After correcting for multiple comparisons, the association between emergency room visits and SO2 levels was statistically significant for lag 0 (p = 0.043), lag 1 (p = 0.018), and lag 3 (p = 0.050). The number of emergency room visits for exacerbation of bronchiectasis is associated with higher levels of SO2. © 2018 S. Karger AG, Basel.
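A minimal sketch of the modelling approach described above: Poisson regression of daily event counts on a pollutant series, with lags handled by shifting the predictor. The IRLS fitter and the simulated SO2 series are illustrative, not the authors' code:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) fitted by iteratively reweighted
    least squares. X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response for the log link
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# A lag-1 analysis would regress day t's count on day t-1's pollutant level,
# e.g. (hypothetical arrays `counts` and `so2`):
#   X = np.column_stack([np.ones(len(counts) - 1), so2[:-1]])
#   beta = poisson_irls(X, counts[1:])
```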
What does the multiple mini interview have to offer over the panel interview?
Pau, Allan; Chen, Yu Sui; Lee, Verna Kar Mun; Sow, Chew Fei; De Alwis, Ranjit
2016-01-01
This paper compares the panel interview (PI) performance with the multiple mini interview (MMI) performance and indication of behavioural concerns of a sample of medical school applicants. The acceptability of the MMI was also assessed. All applicants shortlisted for a PI were invited to an MMI. Applicants attended a 30-min PI with two faculty interviewers followed by an MMI consisting of ten 8-min stations. Applicants were assessed on their performance at each MMI station by one faculty member, who also indicated whether they perceived the applicant to be a concern. Finally, applicants completed an acceptability questionnaire. From the analysis of 133 (75.1%) completed MMI scoresheets, the MMI scores correlated statistically significantly with the PI scores (r=0.438, p=0.001). Neither was statistically associated with sex, age, race, or pre-university academic ability. Applicants assessed as a concern at two or more stations performed statistically significantly less well at the MMI than those assessed as a concern at one station or none at all. However, there was no association with PI performance. Acceptability scores were generally high, and comparison of mean scores for each of the acceptability questionnaire items did not show statistically significant differences between sex and race categories. Although PI and MMI performances are correlated, the MMI may have the added advantage of more objectively generating multiple impressions of the applicant's interpersonal skill, thoughtfulness, and general demeanour. Results of the present study indicated that the MMI is acceptable in a multicultural context.
NASA Technical Reports Server (NTRS)
Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie
1991-01-01
Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. 
Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limit the extent of whole brain coverage and interpretations of results. PMID:22500925
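As a toy one-dimensional analogue of the multiple-imputation idea (not the spatially informed, neighbor-aware voxel method of the paper), one can draw each missing value from a model of the observed data several times and pool the per-imputation estimates with Rubin's rules:

```python
import numpy as np

def multiply_impute_mean(values, n_imputations=20, seed=0):
    """Toy multiple imputation for a 1-D sample with NaNs: draw missing
    entries from a normal fitted to the observed data, then pool the
    per-imputation means and variances with Rubin's rules."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(values)
    obs = values[~miss]
    n = len(values)
    means, variances = [], []
    for _ in range(n_imputations):
        filled = values.copy()
        filled[miss] = rng.normal(obs.mean(), obs.std(ddof=1), miss.sum())
        means.append(filled.mean())
        variances.append(filled.var(ddof=1) / n)   # within-imputation variance of the mean
    means, variances = np.array(means), np.array(variances)
    within = variances.mean()
    between = means.var(ddof=1)
    # Rubin's rules: total variance combines within- and between-imputation parts
    total_var = within + (1 + 1 / n_imputations) * between
    return means.mean(), total_var
```

The between-imputation term is what distinguishes multiple imputation from single (mean) replacement: it propagates the uncertainty due to the missing data into the pooled variance.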
Considerations in the statistical analysis of clinical trials in periodontitis.
Imrey, P B
1986-05-01
Adult periodontitis has been described as a chronic infectious process exhibiting sporadic, acute exacerbations which cause quantal, localized losses of dental attachment. Many analytic problems of periodontal trials are similar to those of other chronic diseases. However, the episodic, localized, infrequent, and relatively unpredictable behavior of exacerbations, coupled with measurement error difficulties, causes some specific problems. Considerable controversy exists as to the proper selection and treatment of multiple site data from the same patient for group comparisons for epidemiologic or therapeutic evaluative purposes. This paper comments, with varying degrees of emphasis, on several issues pertinent to the analysis of periodontal trials. Considerable attention is given to the ways in which measurement variability may distort analytic results. Statistical treatments of multiple site data for descriptive summaries are distinguished from treatments for formal statistical inference to validate therapeutic effects. Evidence suggesting that sites behave independently is contested. For inferential analyses directed at therapeutic or preventive effects, analytic models based on site independence are deemed unsatisfactory. Methods of summarization that may yield more powerful analyses than all-site mean scores, while retaining appropriate treatment of inter-site associations, are suggested. Brief comments and opinions on an assortment of other issues in clinical trial analysis are proffered.
Robaski, Aliden-Willian; Pamato, Saulo; Tomás-de Oliveira, Marcelo; Pereira, Jefferson-Ricardo
2017-07-01
The enamel condition and the quality of the surface need to be considered for achieving optimal efficiency in treatment with orthodontic brackets. The aim of this study was to assess the immediate bond strength of metallic brackets cemented to dental enamel. Forty human premolars were double-sectioned, placed in PVC matrices and randomly divided into 10 groups (n=8). They received artificial saliva contamination before or after the application of adhesive systems, except for the control groups. The metallic brackets were cemented using two orthodontic cements (Transbond™ Plus Color Change and Transbond™ XT Light, 3M Unitek). The specimens were subjected to mechanical shear bond strength testing and classified according to the fracture pattern. The results were analyzed using a two-way ANOVA and Tukey's test for multiple comparisons (p<0.05). ANOVA showed statistically significant differences between the groups (p=0.01). Tukey's multiple comparison test indicated a statistically significant difference between the G6 and G7 groups (p<0.05). There was a high prevalence of adhesive failure in the groups receiving the hydrophobic adhesive system. Saliva contamination prior to the application of a hydrophobic simplified conventional adhesive system was responsible for decreasing the immediate bond strength of brackets cemented on dental enamel. Key words: Bonding, orthodontic brackets, shear bond strength, saliva, adhesive systems.
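The Tukey multiple-comparison step used above can be sketched with the studentized range distribution. This simplified version assumes equal group sizes and a one-way layout (the study itself used a two-way design), and the simulated groups are illustrative:

```python
import numpy as np
from scipy.stats import studentized_range

def tukey_hsd_pvalues(*groups):
    """Pairwise Tukey HSD p-values for equal-size groups in a one-way
    layout, via the studentized range distribution."""
    k = len(groups)
    n = len(groups[0])                                  # assumes equal group sizes
    means = np.array([np.mean(g) for g in groups])
    df = k * (n - 1)                                    # within-group degrees of freedom
    mse = sum(np.var(g, ddof=1) for g in groups) / k    # pooled within-group variance
    se = np.sqrt(mse / n)
    pvals = {}
    for i in range(k):
        for j in range(i + 1, k):
            q = abs(means[i] - means[j]) / se           # studentized range statistic
            pvals[(i, j)] = studentized_range.sf(q, k, df)
    return pvals
```

Unlike running separate t-tests on each pair, the studentized range accounts for the fact that the largest observed gap among k means is selected after the fact, which is what keeps the family-wise error at the nominal level.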
Rummel, Julia L; Steill, Jeffrey D; Oomens, Jos; Contreras, Cesar S; Pearson, Wright L; Szczepanski, Jan; Powell, David H; Eyler, John R
2011-06-01
Infrared multiple photon dissociation (IRMPD) was used to generate vibrational spectra of ions produced with a direct analysis in real time (DART) ionization source coupled to a 4.7 T Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometer. The location of protonation on the nerve agent simulants diisopropyl methylphosphonate (DIMP) and dimethyl methylphosphonate (DMMP) was studied while solutions of the compounds were introduced for extended periods of time with a syringe pump. Theoretical vibrational spectra were generated with density functional theory calculations. Visual comparison of experimental mid-IR IRMPD spectra and theoretical spectra could not establish definitively if a single structure or a mixture of conformations was present for the protonated parent of each compound. However, theoretical calculations, near-IR IRMPD spectra, and frequency-to-frequency and statistical comparisons indicated that the protonation site for both DIMP and DMMP was predominantly, if not exclusively, the phosphonyl oxygen instead of one of the oxygen atoms with only single bonds.
NASA Astrophysics Data System (ADS)
Mussen, Kimberly S.
This quantitative research study evaluated the effectiveness of employing pedagogy based on the theory of multiple intelligences (MI). Currently, not all students are performing at the rate mandated by the government. When schools do not meet the required state standards, the school is labeled as not achieving adequate yearly progress (AYP), which may lead to the loss of funding. Any school not achieving AYP would be interested in this study. Due to low state standardized test scores in the district for science, student achievement and attitudes towards learning science were evaluated with a pretest, posttest, essay question, and an attitudinal survey. Statistical significance was found for one of the four research questions: using analysis of covariance (ANCOVA) for data analysis, student attitudes towards learning science showed a statistically significant difference favoring the MI (experimental) group. No statistical significance was found in student achievement on the posttest, delayed posttest, or the essay question test. Social change can result from this study because studying the effects of multiple intelligence theory incorporated into classroom instruction can have a significant effect on how children learn, allowing them to compete in a knowledge society.
Alonso, Joan Francesc; Romero, Sergio; Mañanas, Miguel Ángel; Rojas, Mónica; Riba, Jordi; Barbanoj, Manel José
2015-10-01
The identification of the brain regions involved in neuropharmacological action is a potential procedure for drug development. These regions are commonly determined as the voxels showing statistically significant differences when placebo-induced effects are compared with drug-elicited effects. LORETA is an electroencephalography (EEG) source imaging technique frequently used to identify brain structures affected by a drug. The aim of the present study was to evaluate different methods for the correction of multiple comparisons in LORETA maps. These methods, which have been commonly used in neuroimaging and in simulation studies, were applied to a real pharmaco-EEG study in which the effects of increasing benzodiazepine doses on the central nervous system, as measured by LORETA, were investigated. Data consisted of EEG recordings obtained from nine volunteers who received single oral doses of alprazolam 0.25, 0.5, and 1 mg, and placebo in a randomized crossover double-blind design. The identification of active regions was highly dependent on the selected multiple test correction procedure. The combined-criteria approach known as cluster mass was useful to reveal that increasing drug doses led to higher intensity and spread of the pharmacologically induced changes in intracerebral current density.
A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network
NASA Astrophysics Data System (ADS)
Li, Yiming; Bhanu, Bir
Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
Vecchiato, G; De Vico Fallani, F; Astolfi, L; Toppi, J; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F
2010-08-30
This paper presents some considerations about the use of adequate statistical techniques in the framework of neuroelectromagnetic brain mapping. With the use of advanced EEG/MEG recording setups involving hundreds of sensors, the issue of protection against the type I errors that can occur during the execution of hundreds of univariate statistical tests has gained interest. In the present experiment, we investigated the EEG signals from a mannequin acting as an experimental subject. Data were collected while performing a neuromarketing experiment and analyzed with state-of-the-art computational tools adopted in the specialized literature. Results showed that electric data from the mannequin's head presented statistically significant differences in power spectra during the visualization of a commercial advertisement when compared to the power spectra gathered during a documentary, when no adjustments were made to the alpha level of the multiple univariate tests performed. The use of the Bonferroni or Bonferroni-Holm adjustments correctly returned no differences between the signals gathered from the mannequin in the two experimental conditions. A partial sample of recently published literature in different neuroscience journals suggested that at least 30% of the papers do not use statistical protection against type I errors. While the occurrence of type I errors can be easily managed with appropriate statistical techniques, the use of such techniques is still not widely adopted in the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.
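The Bonferroni-Holm adjustment mentioned above is straightforward to implement; a minimal sketch of the step-down procedure, which controls the family-wise error rate like Bonferroni but is uniformly more powerful:

```python
def holm_adjust(pvalues):
    """Holm step-down adjusted p-values: sort ascending, multiply the
    rank-r p-value by (m - r), and enforce monotonicity."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvalues[i])
        running_max = max(running_max, adj)   # adjusted p-values must be non-decreasing
        adjusted[i] = running_max
    return adjusted
```

For example, with raw p-values [0.01, 0.04, 0.03, 0.005], the smallest is multiplied by 4, the next smallest by 3, and so on, so only the tests that survive the harshest factor at each step remain significant.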
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
Singh, Param Priya; Arora, Jatin; Isambert, Hervé
2015-07-01
Whole genome duplications (WGD) have now been firmly established in all major eukaryotic kingdoms. In particular, all vertebrates descend from two rounds of WGDs, that occurred in their jawless ancestor some 500 MY ago. Paralogs retained from WGD, also coined 'ohnologs' after Susumu Ohno, have been shown to be typically associated with development, signaling and gene regulation. Ohnologs, which amount to about 20 to 35% of genes in the human genome, have also been shown to be prone to dominant deleterious mutations and frequently implicated in cancer and genetic diseases. Hence, identifying ohnologs is central to better understand the evolution of vertebrates and their susceptibility to genetic diseases. Early computational analyses to identify vertebrate ohnologs relied on content-based synteny comparisons between the human genome and a single invertebrate outgroup genome or within the human genome itself. These approaches are thus limited by lineage specific rearrangements in individual genomes. We report, in this study, the identification of vertebrate ohnologs based on the quantitative assessment and integration of synteny conservation between six amniote vertebrates and six invertebrate outgroups. Such a synteny comparison across multiple genomes is shown to enhance the statistical power of ohnolog identification in vertebrates compared to earlier approaches, by overcoming lineage specific genome rearrangements. Ohnolog gene families can be browsed and downloaded for three statistical confidence levels or recompiled for specific, user-defined, significance criteria at http://ohnologs.curie.fr/. In the light of the importance of WGD on the genetic makeup of vertebrates, our analysis provides a useful resource for researchers interested in gaining further insights on vertebrate evolution and genetic diseases.
Abnormal hippocampal shape in offenders with psychopathy.
Boccardi, Marina; Ganzola, Rossana; Rossi, Roberta; Sabattoli, Francesca; Laakso, Mikko P; Repo-Tiihonen, Eila; Vaurio, Olli; Könönen, Mervi; Aronen, Hannu J; Thompson, Paul M; Frisoni, Giovanni B; Tiihonen, Jari
2010-03-01
Posterior hippocampal volumes correlate negatively with the severity of psychopathy, but local morphological features are unknown. The aim of this study was to investigate hippocampal morphology in habitually violent offenders having psychopathy. Manual tracings of hippocampi from magnetic resonance images of 26 offenders (age: 32.5 +/- 8.4), with different degrees of psychopathy (12 high, 14 medium psychopathy based on the Psychopathy Checklist Revised), and 25 healthy controls (age: 34.6 +/- 10.8) were used for statistical modelling of local changes with a surface-based radial distance mapping method. Both offenders and controls had similar hippocampal volume and asymmetry ratios. Local analysis showed that the high psychopathy group had a significant depression along the longitudinal hippocampal axis, on both the dorsal and ventral aspects, when compared with the healthy controls and the medium psychopathy group. The opposite comparison revealed abnormal enlargement of the lateral borders in both the right and left hippocampi of both high and medium psychopathy groups versus controls, throughout CA1, CA2-3 and the subicular regions. These enlargement and reduction effects survived statistical correction for multiple comparisons in the main contrast (26 offenders vs. 25 controls) and in most subgroup comparisons. A statistical check excluded a possible confounding effect from amphetamine and polysubstance abuse. These results indicate that habitually violent offenders exhibit a specific abnormal hippocampal morphology, in the absence of total gray matter volume changes, that may relate to different autonomic modulation and abnormal fear-conditioning. 2009 Wiley-Liss, Inc.
Supe, S; Milicić, J; Pavićević, R
1997-06-01
Recent studies on the etiopathogenesis of multiple sclerosis (MS) all point out that there is a polygenetic predisposition for this illness. The so-called "MS trait" determines the reactivity of the immunological system to ecological factors. The development of glyphological science and the study of the characteristics of the digito-palmar dermatoglyphic complex (which has been established to be polygenetically determined) enable better insight into genetic development during early embryogenesis. The aim of this study was to estimate differences in the dermatoglyphics of digito-palmar complexes between a group with multiple sclerosis and comparable, phenotypically healthy groups of both sexes. This study is based on the analysis of 18 quantitative characteristics of the digito-palmar complex in 125 patients with multiple sclerosis (41 males and 84 females) in comparison to a group of 400 phenotypically healthy subjects (200 males and 200 females). The analysis pointed towards a statistically significant decrease in the number of digital and palmar ridges, as well as lower values of atd angles, in the MS patients of both sexes. The main discriminators were the characteristic palmar dermatoglyphics; the discriminant analysis correctly classified over 80% of the examinees, exceeding statistical significance. The results of this study suggest a possible discrimination of patients with MS from the phenotypically healthy population through analysis of dermatoglyphic status, and therefore the possibility that multiple sclerosis is a genetically predisposed disease.
Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data
An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.
2005-01-01
Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165
1993-03-01
statistical mathematics, began in the late 1800s when Sir Francis Galton first attempted to use practical mathematical techniques to investigate the...randomly collected (sampled) many pairs of parent/child height measurements (data), Galton observed that for a given parent-height average, the...ty only Maximum Adjusted R2 will be discussed. However, Maximum Adjusted R2 and Minimum MSE test exactly the same thing. Adjusted R2 is related to R
Zhou, Bing; Li, Ming-Hua; Wang, Wu; Xu, Hao-Wen; Cheng, Yong-De; Wang, Jue
2010-03-01
The authors conducted a study to evaluate the advantages of a 3D volume-rendering technique (VRT) in follow-up digital subtraction (DS) angiography of coil-embolized intracranial aneurysms. One hundred nine patients with 121 intracranial aneurysms underwent endovascular coil embolization and at least 1 follow-up DS angiography session at the authors' institution. Two neuroradiologists independently evaluated the conventional 2D DS angiograms, rotational angiograms, and 3D VRT images obtained at the interventional procedures and DS angiography follow-up. If multiple follow-up sessions were performed, the final follow-up was used. The authors compared the 3 techniques for their ability to detect aneurysm remnants (including aneurysm neck and sac remnants) and parent artery stenosis based on the angiographic follow-up. The Kruskal-Wallis test was used for group comparisons, and the kappa statistic was used to measure interobserver agreement. Statistical analyses were performed using commercially available software. The 3 techniques differed significantly in the detection of aneurysm remnants (χ² = 9.9613, p = 0.0069). Pairwise comparisons disclosed a significant difference between 3D VRT and rotational angiography (χ² = 4.9754, p = 0.0257), a highly significant difference between 3D VRT and 2D DS angiography (χ² = 8.9169, p = 0.0028), and no significant difference between rotational angiography and 2D DS angiography (χ² = 0.5648, p = 0.4523). The 3 techniques did not differ significantly in the detection of parent artery stenosis (χ² = 2.5164, p = 0.2842). One case, in which parent artery stenosis was diagnosed by 2D DS angiography and rotational angiography, was excluded by 3D VRT following observations of multiple views. The kappa statistic showed good agreement between the 2 observers. 
The 3D VRT is more sensitive in detecting aneurysm remnants than 2D DS angiography and rotational angiography and is helpful for identifying parent artery stenosis. The authors recommend this technique for the angiographic follow-up of patients with coil-embolized aneurysms.
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring EC50 estimates to be averaged over a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
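The meta-analysis strategy described here amounts to pooling per-experiment EC50 estimates weighted by their precision. A minimal sketch of the two standard pooling estimators (fixed-effect inverse-variance and DerSimonian-Laird random-effects); the function names and the choice of estimators are illustrative assumptions, not taken from the paper:

```python
import math

def fixed_effect_average(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) average of per-experiment EC50 estimates."""
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    return mean, math.sqrt(1.0 / total)

def random_effects_average(estimates, std_errors):
    """DerSimonian-Laird random-effects average: adds a between-experiment variance tau^2."""
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    fe_mean = sum(w * x for w, x in zip(weights, estimates)) / total
    # Cochran's Q heterogeneity statistic and the DerSimonian-Laird tau^2 estimate
    q = sum(w * (x - fe_mean) ** 2 for w, x in zip(weights, estimates))
    df = len(estimates) - 1
    c = total - sum(w**2 for w in weights) / total
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    re_weights = [1.0 / (se**2 + tau2) for se in std_errors]
    re_total = sum(re_weights)
    mean = sum(w * x for w, x in zip(re_weights, estimates)) / re_total
    return mean, math.sqrt(1.0 / re_total)
```

In practice EC50 values are often pooled on the log scale before back-transforming, since their sampling distribution tends to be closer to normal there.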
Viallon, Vivian; Banerjee, Onureena; Jougla, Eric; Rey, Grégoire; Coste, Joel
2014-03-01
Looking for associations among multiple variables is a topical issue in statistics due to the increasing amount of data encountered in biology, medicine, and many other domains involving statistical applications. Graphical models have recently gained popularity for this purpose in the statistical literature. In the binary case, however, exact inference is generally very slow or even intractable because of the form of the so-called log-partition function. In this paper, we review various approximate methods for structure selection in binary graphical models that have recently been proposed in the literature and compare them through an extensive simulation study. We also propose a modification of one existing method, which is shown to achieve good performance and to be generally very fast. We conclude with an application in which we search for associations among causes of death recorded on French death certificates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Predicting recreational water quality advisories: A comparison of statistical methods
Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.
2016-01-01
Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to "nowcast" the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
Liu, Yuewei; Chen, Weihong
2012-02-01
As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an alternative method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not include the Nemenyi test, which is difficult to calculate by hand. We describe the theory and applications of the Kruskal-Wallis and Nemenyi tests, and present a flexible SAS macro to implement the two tests. The SAS macro is demonstrated with two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test the differences among three or more independent groups using a nonparametric method.
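For readers outside SAS, the Kruskal-Wallis statistic itself is straightforward to compute from ranks. A minimal pure-Python sketch (an illustration under our own naming, not the authors' macro); note that the full Nemenyi procedure additionally compares each mean-rank gap against a studentized-range critical difference, which is omitted here:

```python
from itertools import combinations

def _midranks(values):
    # Assign 1-based ranks, averaging ranks over tied observations.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (without tie correction) for a list of samples."""
    data = [x for g in groups for x in g]
    ranks = _midranks(data)
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += sum(r) ** 2 / len(g)  # n_i * (mean rank)^2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

def pairwise_mean_rank_gaps(groups):
    """Absolute mean-rank differences per pair of groups; under the Nemenyi test
    each gap would be compared against a studentized-range critical difference."""
    data = [x for g in groups for x in g]
    ranks = _midranks(data)
    means, start = [], 0
    for g in groups:
        means.append(sum(ranks[start:start + len(g)]) / len(g))
        start += len(g)
    return {(i, j): abs(means[i] - means[j])
            for i, j in combinations(range(len(groups)), 2)}
```

With groups [1,2,3], [4,5,6], [7,8,9], H comes out at 7.2, which would be referred to a chi-square distribution with 2 degrees of freedom.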
Iterative Self-Dual Reconstruction on Radar Image Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela
2010-05-21
Imaging systems such as ultrasound, sonar, laser, and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine the properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while smoothing homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
Statistical design of quantitative mass spectrometry-based proteomic experiments.
Oberg, Ann L; Vitek, Olga
2009-05-01
We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
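As a toy illustration of the randomization and blocking principles this review emphasizes, one could allocate one sample from each experimental group to every mass-spectrometry run (the block) and randomize the processing order within each run, so that group is not confounded with run. This sketch and its names are hypothetical, not the authors' procedure:

```python
import random

def blocked_randomization(samples_by_group, n_blocks, seed=0):
    """Assign one sample from each group to every block (e.g. an MS run),
    shuffling the within-block processing order. Each group must supply at
    least n_blocks samples."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    blocks = []
    for b in range(n_blocks):
        block = [(group, samples[b]) for group, samples in samples_by_group.items()]
        rng.shuffle(block)  # randomize run order within the block
        blocks.append(block)
    return blocks
```

For example, with two disease and two control specimens and two runs, every run contains one specimen per group, in random order.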
Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
NASA Astrophysics Data System (ADS)
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters, so forecasts are needed to predict rainfall intensity. The main cause of flooding is high rainfall intensity that pushes a river beyond its capacity, flooding the surrounding area. Because rainfall is a dynamic factor, it is an interesting subject of study. Rainfall forecasting can draw on methods ranging from artificial intelligence (AI) to statistics. In this research, we used Adaline as the AI method and multiple linear regression as the statistical method. The method that produces the more accurate forecast is the better choice for rainfall forecasting; through this comparison, we identify which method is best for forecasting rainfall in this setting.
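Adaline is the classic adaptive linear neuron trained with the delta rule, which for a linear model converges toward the least-squares fit. A minimal one-feature sketch (hyperparameters and function names are our own assumptions, not the authors' configuration), with the closed-form regression fit alongside for comparison:

```python
def train_adaline(xs, ys, lr=0.01, epochs=200):
    """Adaline: a single linear unit trained by stochastic gradient descent on
    squared error. Returns (weight, bias) for a one-feature input."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = y - (w * x + b)
            w += lr * err * x  # delta rule update
            b += lr * err
    return w, b

def least_squares(xs, ys):
    """Closed-form simple linear regression, the statistical baseline."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = sxy / sxx
    return w, my - w * mx
```

On exactly linear data such as y = 2x + 1, both methods recover the same slope and intercept, which is a useful sanity check before comparing them on noisy rainfall series.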
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231
Statistical and Machine Learning forecasting methods: Concerns and ways forward
Makridakis, Spyros; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
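The M-competitions typically score forecasting methods with sMAPE and MASE; a sketch of both standard formulas (our own implementation for illustration, not the paper's evaluation code):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    terms = [200.0 * abs(f - a) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast) if abs(a) + abs(f) > 0]
    return sum(terms) / len(terms)

def mase(train, actual, forecast):
    """Mean absolute scaled error: out-of-sample MAE scaled by the MAE of the
    in-sample one-step naive forecast."""
    naive_mae = sum(abs(b - a) for a, b in zip(train, train[1:])) / (len(train) - 1)
    mae = sum(abs(f - a) for a, f in zip(actual, forecast)) / len(actual)
    return mae / naive_mae
```

A MASE below 1 means the method beats the naive forecast on average, which makes results comparable across series of different scales.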
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, O; Giem, J; Young, J
Purpose: To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiotherapy utilizing a single isocenter (SI) versus multiple isocenters (MI) in patients with multiple intracranial metastases. Methods: Seven patients imaged with MRI, including an SPGR sequence, and diagnosed with 2-3 brain metastases were included in this retrospective study. Two sets of stereotactic IMRT treatment plans (MI vs SI) were generated. The hippocampus was contoured on SPGR sequences, and the doses received by the hippocampus and whole brain were calculated. The prescribed dose was 25 Gy in 5 fractions. The two groups were compared using t-test analysis. Results: There were 17 lesions in 7 patients. The median tumor, right hippocampus, left hippocampus, and brain volumes were 3.37 cc, 2.56 cc, 3.28 cc, and 1417 cc, respectively. In comparing the two treatment plans, there was no difference in PTV coverage except in the tail of the DVH curve. All tumors had V95 > 99.5%. The only statistically significant parameter was the V100 (72% vs 45%, p = 0.002, favoring MI). All other evaluated parameters, including the V95 and V98, did not reveal any statistically significant differences. None of the evaluated dosimetric parameters for the hippocampus (V100, V80, V60, V40, V20, V10, D100, D90, D70, D50, D30, D10) revealed any statistically significant differences (all p-values > 0.31) between MI and SI plans. The total brain dose was slightly higher in the SI plans, especially in the lower dose regions, although this difference was not statistically significant. Utilizing brain-sub-PTV volumes did not change these results. Conclusion: The use of SI treatment planning for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain compared to MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.
Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.
Natarajan, Shreedhar; Jakobsson, Eric
2009-06-12
A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure, and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.
Najafi, Mostafa; Akouchekian, Shahla; Ghaderi, Alireza; Mahaki, Behzad; Rezaei, Mariam
2017-01-01
Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate the multiple intelligences profiles of children with ADHD in comparison with children without ADHD. This cross-sectional, descriptive, analytical study was done on 50 children aged 6-13 years, in two groups with and without ADHD. The children with ADHD had been referred to the Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Sciences, in 2014. Samples were selected based on clinical interview (based on the Diagnostic and Statistical Manual of Mental Disorders IV and the parent-teacher strengths and difficulties questionnaire), conducted by a psychiatrist and a psychologist. The Raven intelligence quotient (IQ) test was used, and the findings were compared to the results of a multiple intelligences test. Data analysis was done using multivariate analysis of covariance in SPSS20 software. Comparing the multiple intelligences profiles of the two groups, the control group scored higher than the ADHD group, with the differences most significant for logical-mathematical, interpersonal, and intrapersonal intelligence (P < 0.05). There was no significant difference between the two groups for the other kinds of multiple intelligences (P > 0.05). The mean IQ score in the control group and ADHD group was 102.42 ± 16.26 and 96.72 ± 16.06, respectively, revealing the negative effect of ADHD on mean IQ. The relationship between linguistic and naturalist intelligence was not significant (P > 0.05); however, for the other kinds of multiple intelligences, direct and significant relationships were observed (P < 0.05). Since the levels of IQ (Raven test) and multiple intelligences in the control group were significantly higher than in the ADHD group, ADHD is likely to be associated with the logical-mathematical, interpersonal, and intrapersonal profiles.
Haiman, Guy; Pratt, Hillel; Miller, Ariel
2009-10-01
The purpose of this study was to characterize the brain activity and associated cortical structures involved in pseudobulbar affect (PBA), a condition characterized by uncontrollable episodes of laughing and/or crying in patients with multiple sclerosis before and after treatment with dextromethorphan/quinidine (DM/Q). Behavioral responses and event-related potentials (ERPs) in response to subjectively significant and neutral verbal stimuli were recorded from 2 groups: 6 multiple sclerosis patients with PBA before (PBA-preTx) and after (PBA-DM/Q) treatment with DM/Q and 6 healthy control (HC) subjects. Statistical nonparametric mapping comparisons of ERP source current density distributions between groups were conducted for subjectively significant and neutral stimuli separately before and after treatment with DM/Q. Treatment with DM/Q had a normalizing effect on the behavioral responses of PBA patients. Event-related potential waveform comparisons of PBA-preTx and PBA-DM/Q with HC, for both neutral and subjectively significant stimuli, revealed effects on early ERP components. Comparisons between PBA-preTx and HC, in response to subjectively significant stimuli, revealed both early and late effects. Source analysis comparisons between PBA-preTx and PBA-DM/Q indicated distinct activations in areas involved in emotional processing and high-level and associative visual processing in response to neutral stimuli and in areas involved in emotional, somatosensory, primary, and premotor processing in response to subjectively significant stimuli. In most cases, stimuli evoked higher current density in PBA-DM/Q compared with the other groups. In conclusion, differences in brain activity were observed before and after medication. Also, DM/Q administration resulted in normalization of behavioral and electrophysiological measures.
Nelson, Peter M; Burns, Matthew K; Kanive, Rebecca; Ysseldyke, James E
2013-12-01
The current study used a randomized controlled trial to compare the effects of a practice-based intervention and a mnemonic strategy intervention on the retention and application of single-digit multiplication facts with 90 third- and fourth-grade students with math difficulties. Changes in retention and application were assessed separately using one-way ANCOVAs in which students' pretest scores were included as the covariate. Students in the practice-based intervention group had higher retention scores (expressed as the total number of digits correct per minute) relative to the control group. No statistically significant between-group differences were observed for application scores. Practical and theoretical implications for interventions targeting basic multiplication facts are discussed. © 2013.
Gasparovic, Hrvoje; Borojevic, Marko; Malojcic, Branko; Gasparovic, Kristina; Biocina, Bojan
2013-10-01
Aortic manipulation releases embolic material, thereby enhancing the probability of adverse neurologic outcomes following coronary artery bypass grafting (CABG). We prospectively evaluated 59 patients undergoing CABG. Patients in the single (SC, n = 37) and multiple clamp (MC, n = 22) groups were comparable in relation to age and operative risk (p > 0.05). Neurocognitive evaluation consisted of the Auditory Verbal Learning Test (AVLT), Color Trails Test A, the Grooved Pegboard test and the Mini-Mental State Examination. Data acquisition was performed preoperatively, early postoperatively and at the 4-month follow-up. Intraoperative transcranial Doppler (TCD) monitoring was used to quantify the embolic load in relation to different aortic clamping strategies. Preoperative neurocognitive results were similar between the groups (p > 0.05). The incidence of postoperative delirium was greater in the MC group but this failed to reach statistical significance (23% vs 8%, p = 0.14). SC patients had fewer embolization signals (270 ± 181 vs 465 ± 160, p < 0.0001). Early postoperative neurocognitive results were depressed in comparison to preoperative values in both groups (p < 0.05 for multiple comparisons). The magnitude of this cognitive depression was greater in the MC group (p < 0.05 for multiple comparisons). Preoperative levels of neurocognition were restored at follow-up in the SC group in all tests except the AVLT. A trend towards improvements in neurocognitive performances at follow-up was also observed in the MC group. Residual attention, motor skill and memory deficits were, however, documented with multiple tests. In conclusion, the embolic burden was significantly lower in the SC group. This TCD imaging outcome translated into fewer early cognition deficits and superior late restoration of function.
Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne
2017-03-01
The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and a better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to understand better the influence of contamination characteristics on actinide biokinetics. 
Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.
Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R
2015-08-28
Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the growing number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with a Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y), and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Differences in intra-method and overall agreement were statistically significant for both the X- and Z-axes (all p < 0.01). 
Using pre-specified criteria, based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.
The capacity limitations of orientation summary statistics
Attarha, Mouna; Moore, Cathleen M.
2015-01-01
The simultaneous–sequential method was used to test the processing capacity of establishing mean orientation summaries. Four clusters of oriented Gabor patches were presented in the peripheral visual field. One of the clusters had a mean orientation that was tilted either left or right while the mean orientations of the other three clusters were roughly vertical. All four clusters were presented at the same time in the simultaneous condition whereas the clusters appeared in temporal subsets of two in the sequential condition. Performance was lower when the means of all four clusters had to be processed concurrently than when only two had to be processed in the same amount of time. The advantage for establishing fewer summaries at a given time indicates that the processing of mean orientation engages limited-capacity processes (Experiment 1). This limitation cannot be attributed to crowding, low target-distractor discriminability, or a limited-capacity comparison process (Experiments 2 and 3). In contrast to the limitations of establishing multiple summary representations, establishing a single summary representation unfolds without interference (Experiment 4). When interpreted in the context of recent work on the capacity of summary statistics, these findings encourage reevaluation of the view that early visual perception consists of summary statistic representations that unfold independently across multiple areas of the visual field. PMID:25810160
Data-driven inference for the spatial scan statistic.
Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C
2011-08-02
Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is done, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based in this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
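The modified inference can be sketched as a size-conditional Monte Carlo p-value: the observed log-likelihood ratio is compared only against null replicates whose most likely cluster has the same size k. This minimal illustration, including its function name and the replicate format, is our own assumption about one way to implement the idea, not the authors' code:

```python
def conditional_p_value(observed_llr, observed_size, null_replicates):
    """Monte Carlo p-value for the scan statistic, conditioned on cluster size.

    null_replicates: list of (llr, size) pairs, one per simulated null map,
    where llr is the replicate's maximum log-likelihood ratio and size is the
    number of areas in its most likely cluster.
    """
    same_size = [llr for llr, size in null_replicates if size == observed_size]
    if not same_size:
        return None  # no replicates of that size: conditional inference unavailable
    exceed = sum(1 for llr in same_size if llr >= observed_llr)
    # Standard Monte Carlo p-value, counting the observed map among the replicates.
    return (exceed + 1) / (len(same_size) + 1)
```

In the unconditional test the observed statistic would instead be ranked against all replicates regardless of cluster size, which is exactly the uneven adjustment the paper criticizes.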
NASA Astrophysics Data System (ADS)
Zhuo, Congshan; Zhong, Chengwen
2016-11-01
In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model, which has been shown to predict the turbulent near-wall region accurately, was employed in the present FMLB-LES framework. Fully developed turbulent channel flow simulations were performed at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, Reynolds stress profile and root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model; some discrepancies with respect to the direct numerical simulation (DNS) data of Kim et al. were also observed, owing to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS simulation on a finer grid was also carried out with the present FMLB-D3Q19 model. Detailed comparisons of the computed turbulence statistics with available DNS benchmark data showed quite good agreement.
[Basic concepts for network meta-analysis].
Catalá-López, Ferrán; Tobías, Aurelio; Roqué, Marta
2014-12-01
Systematic reviews and meta-analyses have long been fundamental tools for evidence-based clinical practice. Initially, meta-analyses were proposed as a technique that could improve the accuracy and the statistical power of previous research from individual studies with small sample sizes. However, one of their main limitations has been that no more than two treatments can be compared in a single analysis, even when the clinical research question requires comparing multiple interventions. Network meta-analysis (NMA) uses novel statistical methods that incorporate information from both direct and indirect treatment comparisons in a network of studies examining the effects of various competing treatments, estimating comparisons between many treatments in a single analysis. Despite its potential limitations, NMA applications in clinical epidemiology can be of great value in situations where several treatments have been compared against a common comparator. NMA can also be relevant to a research or clinical question when many treatments must be considered or when there is a mix of both direct and indirect information in the body of evidence. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
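The simplest building block of such indirect comparisons, when treatments A and B have each been compared against a common comparator C, is the Bucher adjusted indirect comparison. A minimal sketch with hypothetical effect estimates (log odds ratios), not drawn from any study in this article:

```python
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via common comparator C:
    d_AB = d_AC - d_BC, with the variances of the two direct estimates adding."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log odds ratios: A vs placebo and B vs placebo.
d_ab, se_ab, ci = indirect_comparison(d_ac=-0.50, se_ac=0.15, d_bc=-0.20, se_bc=0.20)
```

Full network meta-analysis generalizes this two-trial case to a whole network and can mix direct with indirect evidence, but the variance-addition logic above is why indirect estimates are less precise than direct ones.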
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for assessing the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, because the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
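The pitfall described above, treating repeated measures as independent observations, can be illustrated with a toy simulation (not the studies' data): pooling observations across subjects with different baselines can even reverse the sign of the within-subject relationship that a mixed-effects model is designed to recover.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_obs = 10, 20
x_all, y_all, within_r = [], [], []
for i in range(n_subj):
    x = i + rng.normal(0, 0.3, n_obs)                          # subjects differ in mean x
    y = (10 - i) + 0.8 * (x - i) + rng.normal(0, 0.3, n_obs)   # within-subject slope is +0.8
    x_all.append(x)
    y_all.append(y)
    within_r.append(np.corrcoef(x, y)[0, 1])

# Naive pooled Pearson correlation vs average within-subject correlation.
pooled_r = float(np.corrcoef(np.concatenate(x_all), np.concatenate(y_all))[0, 1])
mean_within_r = float(np.mean(within_r))
```

Here `pooled_r` comes out strongly negative while every subject's true relationship is positive; an LME model with a per-subject random intercept would recover the positive within-subject effect.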
Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.
Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B
2015-02-10
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
An introduction to multiplicity issues in clinical trials: the what, why, when and how.
Li, Guowei; Taljaard, Monica; Van den Heuvel, Edwin R; Levine, Mitchell Ah; Cook, Deborah J; Wells, George A; Devereaux, Philip J; Thabane, Lehana
2017-04-01
In clinical trials it is not uncommon to face a multiple testing problem that can have an impact on both type I and type II error rates, leading to inappropriate interpretation of trial results. Multiplicity issues may need to be considered at the design, analysis and interpretation stages of a trial. The proportion of trial reports not adequately correcting for multiple testing remains substantial. The purpose of this article is to provide an introduction to multiple testing issues in clinical trials, and to reduce confusion around the need for multiplicity adjustments. We use a tutorial, question-and-answer approach to address the key issues of why, when and how to consider multiplicity adjustments in trials. We summarize the relevant circumstances under which multiplicity adjustments ought to be considered, as well as options for carrying out multiplicity adjustments in terms of trial design factors including Population, Intervention/Comparison, Outcome, Time frame and Analysis (PICOTA). Results are presented in an easy-to-use table and flow diagrams. Confusion about multiplicity issues can be reduced or avoided by considering the potential impact of multiplicity on type I and II errors and, if necessary, pre-specifying statistical approaches to either avoid or adjust for multiplicity in the trial protocol or analysis plan. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
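Two of the most common adjustment options covered in such tutorials, Bonferroni and Holm step-down, can be sketched in a few lines (hypothetical p-values, purely illustrative):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject hypothesis i if p_i <= alpha / m (single-step adjustment)."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: test p-values in ascending order against alpha/(m - rank);
    stop at the first non-rejection. Uniformly more powerful than Bonferroni."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

p = [0.001, 0.013, 0.04, 0.2]
bonf = bonferroni(p)   # only p = 0.001 survives alpha/4 = 0.0125
hol = holm(p)          # 0.013 also survives its step-down threshold of 0.05/3
```

The example shows why Holm is often preferred: it controls the family-wise error rate at the same level while rejecting at least as many hypotheses.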
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly articulated yet remains controversial, especially with regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
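The core of such a power analysis for brain-behavior correlations can be reproduced with a small Monte Carlo sketch (a generic illustration, not the article's simulation code); the hard-coded critical value 0.396 is the two-tailed .05 cutoff for a Pearson correlation at n = 25 (df = 23):

```python
import numpy as np

def simulated_power(true_r, n=25, n_sims=2000, seed=1):
    """Monte Carlo power of a two-tailed .05 test of a Pearson correlation.
    r_crit = 0.396 is the critical |r| for n = 25; valid for that n only."""
    rng = np.random.default_rng(seed)
    r_crit = 0.396
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
        if abs(np.corrcoef(x, y)[0, 1]) >= r_crit:
            hits += 1
    return hits / n_sims

# A "weak diffuse" effect of r = 0.2 at a common sample size of n = 25.
power = simulated_power(true_r=0.2)
```

The simulated power lands well below the conventional 80% target, which is the quantitative point behind the weak diffuse scenario.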
[Coding Causes of Death with IRIS Software: Impact on Navarre Mortality Statistics].
Floristán Floristán, Yugo; Delfrade Osinaga, Josu; Carrillo Prieto, Jesus; Aguirre Perez, Jesus; Moreno-Iribas, Conchi
2016-08-02
There are few studies that analyze the changes in mortality statistics arising from the use of IRIS software, an automatic system for coding multiple causes of death and selecting the underlying cause of death, compared to manual coding. This study evaluated the impact of the use of IRIS on the Navarre mortality statistics. We double-coded 5,060 death certificates corresponding to residents of Navarre in 2014. We calculated the coincidence between the two encodings for ICD10 chapters and for the list of causes of the Spanish National Statistics Institute (INE-102), and we estimated the change in mortality rates. IRIS automatically coded 90% of death certificates. The coincidence to 4 characters and within the same chapter of ICD10 was 79.1% and 92.0%, respectively. Furthermore, coincidence with the short INE-102 list was 88.3%. Agreement was higher for death certificates of people under 65 years of age. In comparison with manual coding there was an increase in deaths from endocrine diseases (31%), mental disorders (19%) and diseases of the nervous system (9%), while a decrease in genitourinary system diseases was observed (21%). At the level of ICD10 chapters, coding by IRIS coincided with manual coding for 9 out of 10 deaths, similar to what is observed in other studies. The implementation of IRIS has led to an increase in endocrine diseases, especially diabetes and hyperlipidaemia, and in mental disorders, especially dementias.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms for generating random forecast error time series and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
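As a generic illustration of the idea, unrelated to the specific implementations benchmarked in the paper, a stationary AR(1) process can generate an error series that matches a target mean, standard deviation, and lag-1 autocorrelation:

```python
import numpy as np

def ar1_forecast_errors(n, mean, std, rho, seed=0):
    """Generate a forecast-error series with target mean, standard deviation,
    and lag-1 autocorrelation rho, using a stationary AR(1) process.
    The innovation std is scaled so the marginal std equals the target."""
    rng = np.random.default_rng(seed)
    innov_std = std * np.sqrt(1 - rho**2)
    e = np.empty(n)
    e[0] = rng.normal(0, std)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0, innov_std)
    return e + mean

# Hypothetical target statistics for a DA load forecast error series (MW).
errors = ar1_forecast_errors(n=20000, mean=0.0, std=50.0, rho=0.8)
lag1 = float(np.corrcoef(errors[:-1], errors[1:])[0, 1])
```

Matching cross-correlation between wind and load errors, as the paper requires, needs a multivariate extension, but the univariate mechanics are the same.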
A Methodological Inter-Comparison of Gridded Meteorological Products
NASA Astrophysics Data System (ADS)
Newman, A. J.; Clark, M. P.; Longman, R. J.; Giambelluca, T. W.; Arnold, J.
2017-12-01
Here we present a gridded meteorology inter-comparison using the state of Hawai'i as a testbed. This inter-comparison is motivated by two general goals: 1) the broad user community of gridded observation-based meteorological fields should be aware of inter-product differences and the reasons they exist, which allows users to make informed choices on product selection to best meet their specific application(s); 2) we want to demonstrate the utility of inter-comparisons to meet the first goal, yet highlight that they are limited to mostly generic statements regarding attribution of differences that limits our understanding of these complex algorithms and obscures future research directions. Hawai'i is a useful testbed because it is a meteorologically complex region with well-known spatial features that are tied to specific physical processes (e.g. the trade wind inversion). From a practical standpoint, there are now several monthly climatological and daily precipitation and temperature datasets available that are being used for impact modeling. General conclusions that have emerged are: 1) differences in input station data significantly influence product differences; 2) prediction of precipitation occurrence is crucial across multiple metrics; 3) derived temperature statistics (e.g. diurnal temperature range) may have large spatial differences across products; and 4) attribution of differences to methodological choices is difficult and may limit the outcomes of these inter-comparisons, particularly from a development viewpoint. Thus, we want to continue to move the community towards frameworks that allow for multiple options throughout the product generation chain and allow for more systematic testing.
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. The newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
Thompson, Geoffrey A; Luo, Qing; Hefti, Arthur
2013-12-01
Previous studies have shown casting methodology to influence the as-cast properties of dental casting alloys. It is important to consider clinically important mechanical properties so that the influence of casting can be clarified. The purpose of this study was to evaluate how torch/centrifugal and induction/vacuum-pressure casting machines may affect the castability, microhardness, chemical composition, and microstructure of 2 high noble, 1 noble, and 1 base metal dental casting alloys. Two commonly used methods for casting were selected for comparison: torch/centrifugal casting and induction/vacuum-pressure casting. One hundred and twenty castability patterns were fabricated and divided into 8 groups. Four groups were torch/centrifugally cast in Olympia (O), Jelenko O (JO), Genesis II (G), and Liberty (L) alloys. Similarly, 4 groups were cast in O, JO, G, and L with an induction/vacuum-pressure casting machine. Each specimen was evaluated for casting completeness to determine a castability value, while porosity was determined by standard x-ray techniques. Each group was metallographically prepared for further evaluation that included chemical composition, Vickers microhardness, and grain analysis of microstructure. Two-way ANOVA was used to determine significant differences among the main effects. Statistically significant effects were examined further with the Tukey HSD procedure for multiple comparisons. Data obtained from the castability experiments were non-normal and the variances were unequal. They were analyzed statistically with the Kruskal-Wallis rank sum test. Significant results were further investigated statistically with the Steel-Dwass method for multiple comparisons (α=.05). The alloy type had a significant effect on surface microhardness (P<.001). In contrast, the technique used for casting did not affect the microhardness of the test specimen (P=.465). 
Similarly, the interaction between the alloy and casting technique was not significant (P=.119). A high level of castability (98.5% on average) was achieved overall. The frequency of casting failures as a function of alloy type and casting method was determined. Failure was defined as a castability index score of <100%. Three of 28 possible comparisons between alloy and casting combinations were statistically significant. The results suggested that casting technique affects the castability index of alloys. Radiographic analysis detected large porosities in regions near the edge of the castability pattern and infrequently adjacent to noncast segments. All castings acquired traces of elements found in the casting crucibles. The grain size for each dental casting alloy was generally finer for specimens produced by the induction/vacuum-pressure method. The difference was substantial for JO and L. This study demonstrated a relation between casting techniques and some physical properties of metal ceramic casting alloys. Copyright © 2013 Editorial Council for the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Czerwiński, M; Mroczka, J; Girasole, T; Gouesbet, G; Gréhan, G
2001-03-20
Our aim is to present a method of predicting light transmittance through dense three-dimensional layered media. A hybrid method is introduced as a combination of the four-flux method with coefficients predicted from a Monte Carlo statistical model, to take into account the actual three-dimensional geometry of the problem under study. We present the principles of the hybrid method, some illustrative results of numerical simulations, and their comparison with results obtained from the Bouguer-Lambert-Beer law and from full Monte Carlo simulations.
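The single-scattering baseline against which such hybrid methods are compared is the Bouguer-Lambert-Beer law: transmittance through a layered slab decays exponentially with the summed optical depth of the layers. A minimal sketch with hypothetical layer parameters:

```python
import math

def bouguer_lambert_beer(extinctions, thicknesses):
    """Single-scattering transmittance of a layered medium: the total optical
    depth is the sum of per-layer (extinction coefficient x thickness)."""
    tau = sum(mu * d for mu, d in zip(extinctions, thicknesses))
    return math.exp(-tau)

# Hypothetical three-layer slab: extinction coefficients (1/mm) and thicknesses (mm).
T = bouguer_lambert_beer([0.5, 1.0, 0.2], [1.0, 0.5, 2.0])
```

In dense media this law underestimates transmittance because multiply scattered light is not counted as lost, which is exactly the regime the four-flux/Monte Carlo hybrid targets.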
Avoiding false discoveries in association studies.
Sabatti, Chiara
2007-01-01
We consider the problem of controlling false discoveries in association studies. We assume that the design of the study is adequate so that any "false discoveries" are potentially due only to random chance, not to confounding or other flaws. Under this premise, we review the statistical framework for hypothesis testing and correction for multiple comparisons. We consider in detail the currently accepted strategies in linkage analysis. We then examine the underlying similarities and differences between linkage and association studies and document some of the most recent methodological developments for association mapping.
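Beyond family-wise corrections, one widely used strategy for controlling false discoveries in association studies is Benjamini-Hochberg false discovery rate control. A minimal sketch (hypothetical p-values):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest rank k with
    p_(k) <= q * k / m, then reject the k smallest p-values (FDR <= q)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= q * rank / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

decisions = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6])
```

Unlike Bonferroni, this controls the expected proportion of false positives among the discoveries rather than the chance of any false positive, which is often the more useful guarantee in large-scale association scans.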
Comparison of Force and Moment Coefficients for the Same Test Article in Multiple Wind Tunnels
NASA Technical Reports Server (NTRS)
Deloach, Richard
2013-01-01
This paper compares the results of force and moment measurements made on the same test article and with the same balance in three transonic wind tunnels. Comparisons are made for the same combination of Reynolds number, Mach number, sideslip angle, control surface configuration, and angle of attack range. Between-tunnel force and moment differences are quantified. An analysis of variance was performed at four unique sites in the design space to assess the statistical significance of between-tunnel variation and any interaction with angle of attack. Tunnel-to-tunnel differences too large to attribute to random error were observed for all forces and moments. In some cases these differences were independent of angle of attack and in other cases they changed with angle of attack.
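At each design-space site, the between-tunnel analysis of variance reduces to a standard one-way F test. A generic sketch with hypothetical repeat measurements from three tunnels (not the paper's data):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for k groups and n total observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical drag-coefficient repeats from three tunnels at one test condition.
F = one_way_anova_F([np.array([0.102, 0.104, 0.103]),
                     np.array([0.108, 0.110, 0.109]),
                     np.array([0.101, 0.103, 0.102])])
```

An F far above the .05 critical value (about 5.14 for 2 and 6 degrees of freedom here) is what "differences too large to attribute to random error" means operationally.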
Baxter, Melissa; Withey, Sarah; Harrison, Sean; Segeritz, Charis-Patricia; Zhang, Fang; Atkinson-Dell, Rebecca; Rowe, Cliff; Gerrard, Dave T.; Sison-Young, Rowena; Jenkins, Roz; Henry, Joanne; Berry, Andrew A.; Mohamet, Lisa; Best, Marie; Fenwick, Stephen W.; Malik, Hassan; Kitteringham, Neil R.; Goldring, Chris E.; Piper Hanley, Karen; Vallier, Ludovic; Hanley, Neil A.
2015-01-01
Background & Aims Hepatocyte-like cells (HLCs), differentiated from pluripotent stem cells by the use of soluble factors, can model human liver function and toxicity. However, at present it is unclear how mature HLCs are and whether any deficit represents a true fetal state or aberrant differentiation, a question compounded by comparison against potentially deteriorated adult hepatocytes. Therefore, we generated HLCs from multiple lineages, using two different protocols, for direct comparison with fresh fetal and adult hepatocytes. Methods Protocols were developed for robust differentiation. Multiple transcript, protein and functional analyses compared HLCs to fresh human fetal and adult hepatocytes. Results HLCs were comparable to those of other laboratories by multiple parameters. Transcriptional changes during differentiation mimicked human embryogenesis and showed more similarity to pericentral than periportal hepatocytes. Unbiased proteomics demonstrated greater proximity to liver than 30 other human organs or tissues. However, by comparison to fresh material, HLC maturity was proven by transcript, protein and function to be fetal-like and short of the adult phenotype. The expression of 81% of phase 1 enzymes in HLCs was significantly upregulated and half were statistically not different from fetal hepatocytes. HLCs secreted albumin and metabolized testosterone (CYP3A) and dextrorphan (CYP2D6) like fetal hepatocytes. In seven bespoke tests, devised by principal components analysis to distinguish fetal from adult hepatocytes, HLCs from two different source laboratories consistently demonstrated fetal characteristics. Conclusions HLCs from different sources are broadly comparable, with unbiased proteomic evidence for faithful differentiation down the liver lineage. This current phenotype mimics human fetal rather than adult hepatocytes. PMID:25457200
Stevens, John R; Jones, Todd R; Lefevre, Michael; Ganesan, Balasubramanian; Weimer, Bart C
2017-01-01
Microbial community analysis experiments to assess the effect of a treatment intervention (or environmental change) on the relative abundance levels of multiple related microbial species (or operational taxonomic units) simultaneously using high-throughput genomics are becoming increasingly common. Within the framework of the evolutionary phylogeny of all species considered in the experiment, this translates to a statistical need to identify the phylogenetic branches that exhibit a significant consensus response (in terms of operational taxonomic unit abundance) to the intervention. We present the R software package SigTree, a collection of flexible tools that make use of meta-analysis methods and regular expressions to identify and visualize significantly responsive branches in a phylogenetic tree, while appropriately adjusting for multiple comparisons.
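One standard meta-analysis device for combining per-species evidence into a branch-level test is Stouffer's Z method. SigTree's actual machinery is richer, so treat this as a generic sketch with hypothetical one-sided p-values at the tips of a single branch:

```python
import math
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Stouffer's method (equal weights): convert each one-sided p-value to a
    Z score, average with a sqrt(m) scaling, and map back to a combined p."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in p_values) / math.sqrt(len(p_values))
    return 1 - nd.cdf(z)

# Hypothetical per-species p-values at the tips of one phylogenetic branch.
combined = stouffer_combined_p([0.05, 0.05, 0.05])
```

Three individually marginal results combine to strong branch-level evidence, which is the intuition behind consensus-response detection; a multiple-comparison adjustment across branches would then be applied to the combined p-values.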
Chin, Ki Jinn; Alakkad, Husni; Cubillos, Javier E
2013-08-08
Regional anaesthesia comprising axillary block of the brachial plexus is a common anaesthetic technique for distal upper limb surgery. This is an update of a review first published in 2006 and updated in 2011. To compare the relative effects (benefits and harms) of three injection techniques (single, double and multiple) of axillary block of the brachial plexus for distal upper extremity surgery. We considered these effects primarily in terms of anaesthetic effectiveness; the complication rate (neurological and vascular); and pain and discomfort caused by performance of the block. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE and reference lists of trials. We contacted trial authors. The date of the last search was March 2013 (updated from March 2011). We included randomized controlled trials that compared double with single-injection techniques, multiple with single-injection techniques, or multiple with double-injection techniques for axillary block in adults undergoing surgery of the distal upper limb. We excluded trials using ultrasound-guided techniques. Independent study selection, risk of bias assessment and data extraction were performed by at least two investigators. We undertook meta-analysis. The 21 included trials involved a total of 2148 participants who received regional anaesthesia for hand, wrist, forearm or elbow surgery. Risk of bias assessment indicated that trial design and conduct were generally adequate; the most common areas of weakness were in blinding and allocation concealment. Eight trials comparing double versus single injections showed a statistically significant decrease in primary anaesthesia failure (risk ratio (RR) 0.51, 95% confidence interval (CI) 0.30 to 0.85).
Subgroup analysis by method of nerve location showed that the effect size was greater when neurostimulation was used rather than the transarterial technique. Eight trials comparing multiple with single injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.25, 95% CI 0.14 to 0.44) and of incomplete motor block (RR 0.61, 95% CI 0.39 to 0.96) in the multiple injection group. Eleven trials comparing multiple with double injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.28, 95% CI 0.20 to 0.40) and of incomplete motor block (RR 0.55, 95% CI 0.36 to 0.85) in the multiple injection group. Tourniquet pain was significantly reduced with multiple injections compared with double injections (RR 0.53, 95% CI 0.33 to 0.84). Otherwise there were no statistically significant differences between groups in any of the three comparisons on secondary analgesia failure, complications and patient discomfort. The time for block performance was significantly shorter for single and double injections compared with multiple injections. This review provides evidence that multiple-injection techniques using nerve stimulation for axillary plexus block produce more effective anaesthesia than either double or single-injection techniques. However, there was insufficient evidence for a significant difference in other outcomes, including safety.
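Risk ratios and confidence intervals of the kind quoted above follow from a standard 2x2 computation on the log scale. A sketch with hypothetical counts (not the review's data):

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio with a 95% CI computed on the log scale
    (normal approximation; standard error per Katz et al.)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical anaesthesia-failure counts: multiple- vs double-injection arms.
rr, (lo, hi) = risk_ratio(events_t=10, n_t=100, events_c=20, n_c=100)
```

Here the CI crosses 1.0, so despite a point estimate of 0.5 this hypothetical comparison would not be statistically significant; a meta-analysis pools such trial-level estimates with inverse-variance weights.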
Conceptual and statistical problems associated with the use of diversity indices in ecology.
Barrantes, Gilbert; Sandoval, Luis
2009-09-01
Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on the environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved to be robust for standardizing all samples to a common size. Even the simplest approach of reporting the number of species per taxonomic category possibly provides more information than a diversity index value.
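The two quantities discussed above, the Shannon-Wiener index and rarefied species richness, can be computed directly; the formula below is the standard analytic (hypergeometric) rarefaction, shown with hypothetical community counts:

```python
import math

def shannon(counts):
    """Shannon-Wiener index H = -sum(p_i * ln p_i) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def rarefied_richness(counts, n):
    """Expected species count in a random subsample of n individuals:
    E[S_n] = sum_i (1 - C(N - N_i, n) / C(N, n)), N = total individuals."""
    total = sum(counts)
    return sum(1 - math.comb(total - c, n) / math.comb(total, n) for c in counts)

even = [25, 25, 25, 25]          # perfectly even 4-species community
H = shannon(even)                # equals ln(4) at maximum evenness
S20 = rarefied_richness([50, 30, 15, 4, 1], 20)  # skewed 5-species community
```

Rarefying to a common sample size is what makes richness comparable across communities sampled with unequal effort, which is the robustness the abstract refers to.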
Secure and Cost-Effective Distributed Aggregation for Mobile Sensor Networks
Guo, Kehua; Zhang, Ping; Ma, Jianhua
2016-01-01
Secure data aggregation (SDA) schemes are widely used in distributed applications, such as mobile sensor networks, to reduce communication cost, prolong the network life cycle and provide security. However, most SDA schemes are suited only to a single type of statistic (i.e., summation-based or comparison-based statistics) and are not applicable to obtaining multiple statistical results; most are also inefficient for dynamic networks. This paper presents multi-functional secure data aggregation (MFSDA), in which a mapping step and a coding step are introduced to provide value preservation and order preservation and, later, to enable support for arbitrary statistics in the same query. MFSDA is suited to dynamic networks because active nodes can be counted directly from the aggregation data. The proposed scheme is tolerant to many types of attacks. The network load of the proposed scheme is balanced, and no significant bottleneck exists. MFSDA includes two versions: MFSDA-I and MFSDA-II. The first can obtain accurate results, while the second is a more generalized version that can significantly reduce network traffic at the expense of some loss of accuracy. PMID:27120599
Jansen, Jeroen P; Fleurence, Rachael; Devine, Beth; Itzler, Robbin; Barrett, Annabel; Hawkins, Neil; Lee, Karen; Boersma, Cornelis; Annemans, Lieven; Cappelleri, Joseph C
2011-06-01
Evidence-based health-care decision making requires comparisons of all relevant competing interventions. In the absence of randomized, controlled trials involving a direct comparison of all treatments of interest, indirect treatment comparisons and network meta-analysis provide useful evidence for judiciously selecting the best choice(s) of treatment. Mixed treatment comparisons, a special case of network meta-analysis, combine direct and indirect evidence for particular pairwise comparisons, thereby synthesizing a greater share of the available evidence than a traditional meta-analysis. This report from the ISPOR Indirect Treatment Comparisons Good Research Practices Task Force provides guidance on the interpretation of indirect treatment comparisons and network meta-analysis to assist policymakers and health-care professionals in using its findings for decision making. We start with an overview of how networks of randomized, controlled trials allow multiple treatment comparisons of competing interventions. Next, an introduction to the synthesis of the available evidence with a focus on terminology, assumptions, validity, and statistical methods is provided, followed by advice on critically reviewing and interpreting an indirect treatment comparison or network meta-analysis to inform decision making. We finish with a discussion of what to do if there are no direct or indirect treatment comparisons of randomized, controlled trials possible and a health-care decision still needs to be made. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
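The simplest building block of such indirect comparisons is the Bucher adjusted indirect comparison, a standard technique (not detailed in this abstract): the effect of A versus C is obtained via a common comparator B, with the variances of the two direct estimates adding. The numbers below are hypothetical.

```python
from math import sqrt

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of A vs C via common comparator B
    (Bucher method): d_AC = d_AB - d_CB, with variances adding."""
    d_ac = d_ab - d_cb
    se_ac = sqrt(se_ab ** 2 + se_cb ** 2)
    return d_ac, se_ac

# Hypothetical log-odds-ratio estimates from two trials sharing comparator B
d_ac, se_ac = bucher_indirect(d_ab=-0.5, se_ab=0.2, d_cb=-0.2, se_cb=0.15)
print(d_ac, se_ac)  # indirect estimate and its (larger) standard error
```

The inflated standard error makes explicit why indirect evidence is weaker than a head-to-head trial, and why network meta-analysis combines it with direct evidence when both exist.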
Evaluating the statistical methodology of randomized trials on dentin hypersensitivity management.
Matranga, Domenica; Matera, Federico; Pizzo, Giuseppe
2017-12-27
The present study aimed to evaluate the characteristics and quality of the statistical methodology used in clinical studies on dentin hypersensitivity management. An electronic search was performed for data published from 2009 to 2014 by using PubMed, Ovid/MEDLINE, and Cochrane Library databases. The primary search terms were used in combination. Eligibility criteria included randomized clinical trials that evaluated the efficacy of desensitizing agents in terms of reducing dentin hypersensitivity. A total of 40 studies were considered eligible for assessment of the quality of statistical methodology. The four main concerns identified were i) use of nonparametric tests in the presence of large samples, coupled with lack of information about normality and equality of variances of the response; ii) lack of P-value adjustment for multiple comparisons; iii) failure to account for interactions between treatment and follow-up time; and iv) no information about the number of teeth examined per patient and the consequent lack of a cluster-specific approach in data analysis. Owing to these concerns, statistical methodology was judged as inappropriate in 77.1% of the 35 studies that used parametric methods. Additional studies with appropriate statistical analysis are required to obtain an appropriate assessment of the efficacy of desensitizing agents.
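The P-value adjustment whose absence is flagged in concern ii) is, in its simplest form, the Bonferroni correction. A minimal sketch with hypothetical p-values:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni adjustment: multiply each p-value by the number of
    tests (capped at 1) and compare to the family-wise alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p_adj <= alpha for p_adj in adjusted]

raw = [0.001, 0.020, 0.049]          # hypothetical raw p-values
adj, significant = bonferroni(raw)
print(adj)          # ≈ [0.003, 0.06, 0.147]
print(significant)  # only the first comparison survives correction
```

Two of the three nominally significant raw p-values fail after adjustment, which is exactly the kind of over-claiming the review criticizes.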
2010-01-01
Background Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly capable of detecting nycthemeral rhythms in medical data. Additionally, a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects the equality of periodic patterns. Mathematical descriptions of the detection method and the comparison method are presented. Results Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed in order to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven patterns of the week were compared to each other, revealing only two different nycthemeral rhythms, one for Friday and Saturday and one for the other weekdays. PMID:21059197
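The detection step, harmonic regression, amounts to fitting a mean level plus a cosine and sine term at the 24-hour frequency by ordinary least squares. The sketch below uses simulated hourly counts, not the Middle Franconia data, and is only a minimal illustration of the technique.

```python
import numpy as np

# Harmonic regression sketch: fit y(t) = m + a*cos(wt) + b*sin(wt) by
# ordinary least squares, with w fixed at one cycle per 24 hours.
rng = np.random.default_rng(0)
t = np.arange(0, 72, 1.0)                  # hourly values over 3 days
w = 2 * np.pi / 24
y = 5 + 2 * np.cos(w * (t - 20)) + rng.normal(0, 0.3, t.size)

X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
mesor, a, b = coef
amplitude = np.hypot(a, b)                 # strength of the daily rhythm
acrophase = np.arctan2(b, a) / w % 24      # hour of the daily peak
print(mesor, amplitude, acrophase)
```

An F-test on the joint significance of the cosine and sine coefficients then confirms or rejects the presence of a rhythm, and the ANOVA-based comparison in the article extends this to testing equality of rhythms across groups.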
NASA Astrophysics Data System (ADS)
Young, S. L.; Kress, B. T.; Rodriguez, J. V.; McCollough, J. P.
2013-12-01
Operational specifications of space environmental hazards can be an important input used by decision makers. Ideally the specification would come from on-board sensors, but for satellites where that capability is not available, another option is to map data from remote observations to the location of the satellite. This requires a model of the physical environment and an understanding of its accuracy for mapping applications. We present a statistical comparison of magnetic field model mappings of solar energetic particle observations made by NOAA's Geostationary Operational Environmental Satellites (GOES) to the location of the Combined Release and Radiation Effects Satellite (CRRES). Because CRRES followed a geosynchronous transfer orbit that precessed in local time, we are able to examine the model accuracy between LEO and GEO orbits across a range of local times. We examine the accuracy of multiple magnetic field models using a variety of statistics and examine their utility for operational purposes.
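The abstract does not enumerate the statistics used; the sketch below shows generic skill metrics of the kind typically computed in such model-to-observation comparisons (mean bias, RMSE, and Pearson correlation), with hypothetical values.

```python
import math

def comparison_stats(model, observed):
    """Basic skill statistics for comparing mapped model values with
    in-situ observations: mean bias, RMSE, Pearson correlation."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, observed)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed)) / n)
    mm, mo = sum(model) / n, sum(observed) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, observed))
    var_m = sum((m - mm) ** 2 for m in model)
    var_o = sum((o - mo) ** 2 for o in observed)
    r = cov / math.sqrt(var_m * var_o)
    return bias, rmse, r

mapped = [10.0, 12.0, 9.0, 15.0]   # hypothetical mapped particle fluxes
in_situ = [11.0, 12.5, 9.5, 14.0]  # hypothetical fluxes at the satellite
bias, rmse, r = comparison_stats(mapped, in_situ)
print(bias, rmse, r)
```

Bias reveals a systematic offset in the mapping, RMSE its overall error magnitude, and correlation whether the model tracks the observed variability, three complementary views an operational user would want.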
2014-01-01
Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
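Step (2) of the proposed procedure defines the Bayes factor as a ratio of two likelihoods under a normal approximation. The sketch below follows that definition; the trial numbers are hypothetical.

```python
from math import exp, sqrt, pi

def normal_pdf(x, mu, sd):
    return exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def bayes_factor(effect, se, hypothesised):
    """Bayes factor as in step (2): likelihood of the observed effect
    under the null divided by its likelihood under the effect
    hypothesised in the sample-size calculation (normal approximation)."""
    return normal_pdf(effect, 0.0, se) / normal_pdf(effect, hypothesised, se)

# Hypothetical trial: observed risk difference 0.08 (SE 0.04),
# powered to detect a risk difference of 0.10
bf = bayes_factor(0.08, 0.04, 0.10)
print(bf)  # < 1 means the data favour the hypothesised effect over the null
```

Reporting this ratio alongside the P-value addresses the abstract's first criticism: it makes explicit how compatible the result is with the alternative hypothesis, not only with the null.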
ERIC Educational Resources Information Center
Porter, Kristin E.
2018-01-01
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Jeon, Young J.; Kim, Jaeuk U.; Lee, Hae J.; Lee, Jeon; Ryu, Hyun H.; Lee, Yu J.; Kim, Jong Y.
2011-01-01
In this work, we analyze the baseline, signal strength, aortic augmentation index (AIx), radial AIx, time to reflection and P_T2 at Chon, Gwan, and Cheok, which are the three pulse diagnosis positions in Oriental medicine. For the pulse measurement, we used the SphygmoCor apparatus, which has been widely used for the evaluation of the arterial stiffness at the aorta. By two-way repeated measures analysis of variance, we tested two independent measurements for repeatability and investigated their mean differences among Chon, Gwan and Cheok. To characterize further the parameters that were shown to be different between each palpation position, we carried out Duncan's test for the multiple comparisons. The baseline and signal strength were statistically different (P < .05) among Chon, Gwan and Cheok, respectively, which supports the major hypothesis of Oriental medicine that all of the three palpation positions contain different clinical information. On the other hand, aortic AIx and time to reflection were found to be statistically different between Chon and the others, and radial AIx and P_T2 did not show any difference between pulse positions. In the clinical sense, however, the aortic AIx at each palpation position was found to fall within the 90% confidence interval of normal arterial compliance. The results of the multiple comparisons indicate that the parameters of arterial stiffness were independent of the palpation positions. This work is the first attempt to characterize quantitatively the pulse signals at Chon, Gwan and Cheok with some relevant parameters extracted from the SphygmoCor apparatus. PMID:19789213
Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.
2018-01-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
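The "memory" metric referred to above is a temporal autocorrelation of the soil moisture series. A minimal sketch using a synthetic AR(1) surrogate series (the persistence parameter and data are hypothetical, not the network observations):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a simple proxy for soil moisture 'memory'."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(1)
# AR(1) surrogate soil-moisture series with known persistence phi
phi, n = 0.9, 2000
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
print(lag1_autocorr(x))  # recovers a value near phi
```

Unlike the variance, which is nearly unchanged by averaging stations within a grid box, this persistence statistic depends on how the noise at individual stations cancels under aggregation, consistent with the sensitivity the abstract reports.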
NASA Technical Reports Server (NTRS)
Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean;
2016-01-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.
Implementation errors in the GingerALE Software: Description and recommendations.
Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T
2017-01-01
Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.
Dirmeyer, Paul A; Wu, Jiexia; Norton, Holly E; Dorigo, Wouter A; Quiring, Steven M; Ford, Trenton W; Santanello, Joseph A; Bosilovich, Michael G; Ek, Michael B; Koster, Randal D; Balsamo, Gianpaolo; Lawrence, David M
2016-04-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.
A new blended learning concept for medical students in otolaryngology.
Grasl, Matthaeus C; Pokieser, Peter; Gleiss, Andreas; Brandstaetter, Juergen; Sigmund, Thorsten; Erovic, Boban M; Fischer, Martin R
2012-04-01
To evaluate students' overall assessment and effectiveness of the web-based blended learning concept "Unified Patient Project" (UPP) for medical students rotating on their otolaryngology internship (ear, nose, and throat [ENT] tertiary). Prospective comparison group design of the quasiexperimental type. Medical education. The experimental group (preintervention test [pretest], intervention, and postintervention test [posttest]) comprised 117 students, and the comparison group (pretest, alternative intervention, and posttest), 119. In the experimental group, lecturing of case studies was replaced by the blended learning concept UPP. A standardized questionnaire evaluated students' overall assessment of teaching otolaryngology. A pretest and posttest using multiple choice questions was administered to clarify whether the UPP has led to a knowledge gain. The comparison group was more satisfied with their teaching; however, this was not statistically significant (P = .26) compared with the UPP. Students with higher preknowledge benefitted from the UPP, while students with lower preknowledge did not (P = .01). On average, posttest results in the experimental group exceeded those of the comparison group by 8.7 percentage points for a 75% preknowledge of the maximum attainable score, while they fell below those of the comparison group by 8.1 percentage points for a 25% preknowledge. Students' satisfaction with the blended learning concept UPP was lower than with the face-to-face teaching, although this was not statistically significant. The new web-based UPP leads to improved knowledge in clinical otolaryngology for all students. Students with lower preknowledge benefitted more from face-to-face teaching than from the UPP, while students with higher preknowledge benefitted more from the UPP. This implies students with poor preknowledge need special promotion programs.
ERDEMİR, Ugur; YİLDİZ, Esra; EREN, Meltem Mert; OZEL, Sevda
2013-01-01
Objectives: This study evaluated the effect of sports and energy drinks on the surface hardness of different composite resin restorative materials over a 1-month period. Material and Methods: A total of 168 specimens: Compoglass F, Filtek Z250, Filtek Supreme, and Premise were prepared using a customized cylindrical metal mould and they were divided into six groups (N=42; n=7 per group). For the control groups, the specimens were stored in distilled water for 24 hours at 37º C and the water was renewed daily. For the experimental groups, the specimens were immersed in 5 mL of one of the following test solutions: Powerade, Gatorade, X-IR, Burn, and Red Bull, for two minutes daily for up to a 1-month test period and all the solutions were refreshed daily. Surface hardness was measured using a Vickers hardness measuring instrument at baseline, after 1-week and 1-month. Data were statistically analyzed using Multivariate repeated measure ANOVA and Bonferroni's multiple comparison tests (α=0.05). Results: Multivariate repeated measures ANOVA revealed that there were statistically significant differences in the hardness of the restorative materials in different immersion times (p<0.001) in different solutions (p<0.001). The effect of different solutions on the surface hardness values of the restorative materials was tested using Bonferroni's multiple comparison tests, and it was observed that specimens stored in distilled water demonstrated statistically significant lower mean surface hardness reductions when compared to the specimens immersed in sports and energy drinks after a 1-month evaluation period (p<0.001). The compomer was the most affected by an acidic environment, whereas the composite resin materials were the least affected materials. Conclusions: The effect of sports and energy drinks on the surface hardness of a restorative material depends on the duration of exposure time, and the composition of the material. PMID:23739850
[Completeness of mortality statistics in Navarra, Spain].
Moreno-Iribas, Conchi; Guevara, Marcela; Díaz-González, Jorge; Álvarez-Arruti, Nerea; Casado, Itziar; Delfrade, Josu; Larumbe, Emilia; Aguirre, Jesús; Floristán, Yugo
2013-01-01
Women in the region of Navarra, Spain, have one of the highest life expectancies at birth in Europe. The aim of this study is to assess the completeness of the official mortality statistics of Navarra in 2009 and the impact of the under-registration of deaths on life expectancy estimates. Comparison of the number of deaths in Navarra using the official statistics from the Instituto Nacional de Estadística (INE) and the data derived from multiple-source case-finding: the electronic health record, the Instituto Navarro de Medicina Legal, and INE data received late. 5,249 deaths were identified, of which 103 were not included in the official mortality statistics. Taking into account only deaths that occurred in Spain, which are the only ones considered for the official statistics, the completeness was 98.4%. Estimated life expectancy at birth in 2009 decreased from 86.6 to 86.4 years in women and from 80.0 to 79.6 years in men, after correcting for the undercount. The results of this study ruled out the existence of significant under-registration in the official mortality statistics, confirming the exceptional longevity of women in Navarra, who are in the top position in Europe with a life expectancy at birth of 86.4 years.
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
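The core pitfall described above, pooling repeated measures across conditions and running one Pearson correlation, can be demonstrated with simulated data (all numbers hypothetical; this is a generic illustration, not the studies' data or an LME fit): a between-condition baseline shift can reverse the sign of the pooled correlation even when the within-condition relationship is strongly positive.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two listening conditions with different baselines: within each condition
# the neural measure predicts behaviour positively, but pooling the
# conditions and running a single Pearson correlation reverses the sign.
x1 = rng.normal(0, 1, 200)
y1 = 5 + 1.0 * x1 + rng.normal(0, 0.5, 200)   # condition 1: high baseline
x2 = rng.normal(4, 1, 200)
y2 = -2 + 1.0 * x2 + rng.normal(0, 0.5, 200)  # condition 2: low baseline

x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
r_pooled = np.corrcoef(x, y)[0, 1]            # naive, condition-blind
r_within = (np.corrcoef(x1, y1)[0, 1] + np.corrcoef(x2, y2)[0, 1]) / 2
print(r_pooled, r_within)
```

A mixed-effects model with condition as a grouping factor recovers the within-condition slope precisely because it absorbs the baseline shift into its fixed/random-effect structure, which is the advantage the abstract argues for.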
Ward, John; Sorrels, Ken; Coats, Jesse; Pourmoghaddam, Amir; Deleon, Carlos; Daigneault, Paige
2014-03-01
The purpose of this study was to pilot test our study procedures and estimate parameters for sample size calculations for a randomized controlled trial to determine if bilateral sacroiliac (SI) joint manipulation affects specific gait parameters in asymptomatic individuals with a leg length inequality (LLI). Twenty-one asymptomatic chiropractic students engaged in a baseline 90-second walking kinematic analysis using infrared Vicon® cameras. Following this, participants underwent a functional LLI test. Upon examination participants were classified as: left short leg, right short leg, or no short leg. Half of the participants in each short leg group were then randomized to receive bilateral corrective SI joint chiropractic manipulative therapy (CMT). All participants then underwent another 90-second gait analysis. Pre- versus post-intervention gait data were then analyzed within treatment groups by an individual who was blinded to participant group status. For the primary analysis, all p-values were corrected for multiple comparisons using the Bonferroni method. Within groups, no differences in measured gait parameters were statistically significant after correcting for multiple comparisons. The protocol of this study was acceptable to all subjects who were invited to participate. No participants refused randomization. Based on the data collected, we estimated that a larger main study would require 34 participants in each comparison group to detect a moderate effect size.
Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume
2014-06-28
Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting.
A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.
Linear and volumetric dimensional changes of injection-molded PMMA denture base resins.
El Bahra, Shadi; Ludwig, Klaus; Samran, Abdulaziz; Freitag-Wolf, Sandra; Kern, Matthias
2013-11-01
The aim of this study was to evaluate the linear and volumetric dimensional changes of six denture base resins processed by their corresponding injection-molding systems at 3 time intervals of water storage. Two heat-curing (SR Ivocap Hi Impact and Lucitone 199) and four auto-curing (IvoBase Hybrid, IvoBase Hi Impact, PalaXpress, and Futura Gen) acrylic resins were used with their specific injection-molding technique to fabricate 6 specimens of each material. Linear and volumetric dimensional changes were determined by means of a digital caliper and an electronic hydrostatic balance, respectively, after water storage of 1, 30, or 90 days. Means and standard deviations of linear and volumetric dimensional changes were calculated in percentage (%). Statistical analysis was done using Student's and Welch's t tests with Bonferroni-Holm correction for multiple comparisons (α=0.05). Statistically significant differences in linear dimensional changes between resins were demonstrated at all three time intervals of water immersion (p≤0.05), with exception of the following comparisons which showed no significant difference: IvoBase Hi Impact/SR Ivocap Hi Impact and PalaXpress/Lucitone 199 after 1 day, Futura Gen/PalaXpress and PalaXpress/Lucitone 199 after 30 days, and IvoBase Hybrid/IvoBase Hi Impact after 90 days. Also, statistically significant differences in volumetric dimensional changes between resins were found at all three time intervals of water immersion (p≤0.05), with exception of the comparison between PalaXpress and Futura Gen. Denture base resins (IvoBase Hybrid and IvoBase Hi Impact) processed by the new injection-molding system (IvoBase), revealed superior dimensional precision. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
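The Bonferroni-Holm correction used in the study above is the step-down refinement of the plain Bonferroni method: p-values are ranked and tested against successively less strict thresholds, stopping at the first failure. A minimal sketch with hypothetical p-values:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down Bonferroni correction: compare the k-th smallest
    p-value to alpha/(m-k) and stop rejecting at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

# Hypothetical p-values for three pairwise resin comparisons
print(holm_bonferroni([0.04, 0.01, 0.03]))
```

Holm's procedure controls the same family-wise error rate as Bonferroni but is uniformly more powerful, which is why it is preferred when many pairwise material comparisons are made.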
Time density curve analysis for C-arm FDCT PBV imaging.
Kamran, Mudassar; Byrne, James V
2016-04-01
Parenchymal blood volume (PBV) estimation using C-arm flat detector computed tomography (FDCT) assumes a steady-state contrast concentration in cerebral vasculature for the scan duration. Using time density curve (TDC) analysis, we explored if the steady-state assumption is met for C-arm CT PBV scans, and how consistent the contrast-material dynamics in cerebral vasculature are across patients. Thirty C-arm FDCT datasets of 26 patients with aneurysmal-SAH, acquired as part of a prospective study comparing C-arm CT PBV with MR-PWI, were analysed. TDCs were extracted from the 2D rotational projections. Goodness-of-fit of TDCs to a steady-state horizontal-line-model and the statistical similarity among the individual TDCs were tested. Influence of the differences in TDC characteristics on the agreement of resulting PBV measurements with MR-CBV was calculated. Despite identical scan parameters and contrast-injection-protocol, the individual TDCs were statistically non-identical (p < 0.01). Using Dunn's multiple comparisons test, of the total 435 individual comparisons among the 30 TDCs, 330 comparisons (62%) reached statistical significance for difference. All TDCs deviated significantly (p < 0.01) from the steady-state horizontal-line-model. PBV values of those datasets for which the TDCs showed largest deviations from the steady-state model demonstrated poor agreement and correlation with MR-CBV, compared with the PBV values of those datasets for which the TDCs were closer to steady-state. For clinical C-arm CT PBV examinations, the administered contrast material does not reach the assumed 'ideal steady-state' for the duration of scan. Using a prolonged injection protocol, the degree to which the TDCs approximate the ideal steady-state influences the agreement of resulting PBV measurements with MR-CBV. © The Author(s) 2016.
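The study's goodness-of-fit test against the horizontal-line model is not specified in the abstract; the sketch below uses a simplified surrogate, the coefficient of variation of the time density curve, to show how deviation from the steady-state assumption can be quantified. The curves are hypothetical.

```python
def steadiness(tdc):
    """Deviation of a time-density curve from the steady-state
    (horizontal-line) model, summarised as the coefficient of variation:
    0 means perfectly constant contrast density over the scan."""
    n = len(tdc)
    mean = sum(tdc) / n
    sd = (sum((v - mean) ** 2 for v in tdc) / n) ** 0.5
    return sd / mean

steady = [100, 101, 99, 100, 100]   # near-constant contrast density
rising = [60, 80, 100, 120, 140]    # contrast still washing in
print(steadiness(steady), steadiness(rising))
```

A curve like `rising` violates the PBV reconstruction's core assumption, which is consistent with the finding that datasets with the largest departures from steady state agreed worst with MR-CBV.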
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Application of one-way ANOVA in completely randomized experiments
NASA Astrophysics Data System (ADS)
Wahid, Zaharah; Izwan Latiff, Ahmad; Ahmad, Kartini
2017-12-01
This paper describes an application of the one-way ANOVA statistical technique in completely randomized experiments with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and on-site location. Two different approaches are employed for the analyses: critical value and p-value. The paper also presents key assumptions of the technique that the data must satisfy in order to obtain valid results. Pairwise comparisons by the Tukey method are also considered and discussed to determine where the significant differences among the means lie after the ANOVA has been performed. The results revealed a statistically significant relationship between the chemical oxygen demand index and the on-site locations.
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
NASA Astrophysics Data System (ADS)
Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan
2017-11-01
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
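As a rough illustration of the kind of algorithm a differential evolution sampler such as Diver is built around (this is a generic rand/1/bin sketch, not Diver's actual implementation or interface), minimising a toy two-parameter likelihood surface:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal rand/1/bin differential evolution minimiser (illustrative)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct members other than i for the mutation vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))  # clip to the search bounds
            tc = f(trial)
            if tc <= cost[i]:                      # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Minimise a toy quadratic "likelihood surface" with minimum at (1, -2)
x, fx = differential_evolution(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                               [(-5, 5), (-5, 5)])
print(x, fx)
```

Real samplers like those benchmarked here add convergence criteria, parallel evaluation and self-adaptive settings; the loop above only conveys the mutation/crossover/selection cycle.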
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
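The matching step of a PSA like the one described above can be sketched as greedy 1:1 nearest-neighbour matching on the propensity score; the scores and outcomes below are invented for illustration, and a real analysis would first estimate the scores (e.g. by logistic regression on the pretest and demographics) and check covariate balance:

```python
def nn_match_att(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    then the average treatment effect on the treated (ATT).
    Each unit is a (propensity_score, outcome) pair."""
    available = list(controls)
    diffs = []
    for score, outcome in treated:
        # closest remaining control on the propensity score
        match = min(available, key=lambda c: abs(c[0] - score))
        available.remove(match)          # match without replacement
        diffs.append(outcome - match[1])
    return sum(diffs) / len(diffs)

# Invented units: (propensity score, math outcome)
treated = [(0.7, 10.0), (0.5, 8.0)]
controls = [(0.68, 7.0), (0.52, 6.0), (0.2, 5.0)]
print(nn_match_att(treated, controls))  # → 2.5
```

Variants (calipers, matching with replacement, stratification or weighting on the score) change the details but not the basic idea of comparing each treated unit to observably similar controls.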
Apical extrusion of debris and irrigant using hand and rotary systems: A comparative study
Ghivari, Sheetal B; Kubasad, Girish C; Chandak, Manoj G; Akarte, NR
2011-01-01
Aim: To evaluate and compare the amount of debris and irrigant extruded quantitatively by using two hand and rotary nickel–titanium (Ni–Ti) instrumentation techniques. Materials and Methods: Eighty freshly extracted mandibular premolars having similar canal length and curvature were selected and mounted in a debris collection apparatus. After each instrument change, 1 ml of distilled water was used as an irrigant and the amount of irrigant extruded was measured using the Meyers and Montgomery method. After drying, the debris was weighed using an electronic microbalance to determine its weight. Statistical analysis used: The data were analyzed statistically to determine the mean difference between the groups. The mean weight of the dry debris and irrigant within the group and between the groups was calculated by the one-way ANOVA and multiple comparison (Dunnet D) test. Results: The step-back technique extruded a greater quantity of debris and irrigant in comparison to the other hand and rotary Ni–Ti systems. Conclusions: All instrumentation techniques extrude debris and irrigant; it is prudent on the part of the clinician to select the instrumentation technique that extrudes the least amount of debris and irrigant, to prevent a flare-up phenomenon. PMID:21814364
The Use of Meta-Analytic Statistical Significance Testing
ERIC Educational Resources Information Center
Polanin, Joshua R.; Pigott, Terri D.
2015-01-01
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
Bose-Einstein correlations in pp and PbPb collisions with ALICE at the LHC
Kisiel, Adam
2018-05-14
We report on the results of identical pion femtoscopy at the LHC. The Bose-Einstein correlation analysis was performed on the large-statistics ALICE p+p datasets at √s = 0.9 TeV and 7 TeV collected during 2010 LHC running and the first Pb+Pb dataset at √s_NN = 2.76 TeV. Detailed pion femtoscopy studies in heavy-ion collisions have shown that emission region sizes ("HBT radii") decrease with increasing pair momentum, which is understood as a manifestation of the collective behavior of matter. 3D radii were also found to universally scale with event multiplicity. In p+p collisions at 7 TeV one measures multiplicities which are comparable with those registered in peripheral AuAu and CuCu collisions at RHIC, so direct comparisons and tests of scaling laws are now possible. We show the results of double-differential 3D pion HBT analysis, as a function of multiplicity and pair momentum. The results for two collision energies are compared to results obtained in heavy-ion collisions at similar multiplicity and p+p collisions at lower energy. We identify the relevant scaling variables for the femtoscopic radii and discuss the similarities and differences to results from heavy-ions. The observed trends give insight into the soft particle production mechanism in p+p collisions and suggest that a self-interacting collective system may be created in sufficiently high multiplicity events. First results for the central Pb+Pb collisions are also shown. A significant increase of the reaction zone volume and lifetime in comparison to RHIC is observed. Signatures of collective hydrodynamics-like behavior of the system are also apparent, and are compared to model predictions.
Multistrip western blotting to increase quantitative data output.
Kiyatkin, Anatoly; Aksamitiene, Edita
2009-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and therefore is beneficial to apply in biomedical diagnostics, systems biology, and cell signaling research.
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve the detection of undocumented, artificial changepoints in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single-test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
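The core of such a test can be sketched as scanning every split point of a series for the largest two-sample t statistic; this is a generic illustration of the idea on an invented series with a step change, not the specific statistics evaluated in the study:

```python
import statistics

def max_t_changepoint(series):
    """Scan all split points of a series and return (index, |t|) for the
    split maximising the pooled two-sample t statistic between segments.
    A large |t| suggests an undocumented changepoint at that index."""
    best = (None, 0.0)
    n = len(series)
    for k in range(2, n - 1):                    # keep >= 2 points per side
        left, right = series[:k], series[k:]
        m1, m2 = statistics.mean(left), statistics.mean(right)
        v1, v2 = statistics.variance(left), statistics.variance(right)
        sp2 = ((len(left) - 1) * v1 + (len(right) - 1) * v2) / (n - 2)
        if sp2 == 0:
            continue
        t = abs(m1 - m2) / (sp2 * (1 / len(left) + 1 / len(right))) ** 0.5
        if t > best[1]:
            best = (k, t)
    return best

# Step change of +1.0 inserted at index 5 of an otherwise flat series
series = [0.1, -0.1, 0.0, 0.1, -0.1, 1.0, 1.1, 0.9, 1.0, 1.1]
k, t = max_t_changepoint(series)
print(k, round(t, 2))
```

In practice the maximised statistic must be referred to special critical values (it is a maximum over many correlated tests, exactly the multiple-testing issue this collection revolves around), and the paper's successive and optimal schemes extend the idea to several changepoints.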
Najafi, Mostafa; Akouchekian, Shahla; Ghaderi, Alireza; Mahaki, Behzad; Rezaei, Mariam
2017-01-01
Background: Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate the multiple intelligences profiles of children with ADHD in comparison with non-ADHD children. Materials and Methods: This cross-sectional descriptive analytical study was done on 50 children of 6–13 years old in two groups, with and without ADHD. Children with ADHD had been referred to the Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Sciences, in 2014. Samples were selected based on clinical interview (based on the Diagnostic and Statistical Manual of Mental Disorders IV and the parent–teacher strengths and difficulties questionnaire), conducted by a psychiatrist and a psychologist. The Raven intelligence quotient (IQ) test was used, and the findings were compared to the results of a multiple intelligences test. Data analysis was done using a multivariate analysis of covariance in SPSS20 software. Results: Comparing the multiple intelligences profiles of the two groups, the control group scored higher than the ADHD group, with the differences reaching significance for logical-mathematical, interpersonal, and intrapersonal intelligence (P < 0.05). There was no significant difference in the other kinds of multiple intelligences between the two groups (P > 0.05). The average IQ score in the control group and the ADHD group was 102.42 ± 16.26 and 96.72 ± 16.06, respectively, suggesting a negative effect of ADHD on average IQ. No significant relationship was found for linguistic and naturalist intelligence (P > 0.05); for the other kinds of multiple intelligences, direct and significant relationships were observed (P < 0.05). Conclusions: Since the levels of IQ (Raven test) and MI in the control group were significantly higher than in the ADHD group, ADHD is likely to be associated with the logical-mathematical, interpersonal, and intrapersonal profiles. PMID:29285478
Statistics in biomedical laboratory and clinical science: applications, issues and pitfalls.
Ludbrook, John
2008-01-01
This review is directed at biomedical scientists who want to gain a better understanding of statistics: what tests to use, when, and why. In my view, even during the planning stage of a study it is very important to seek the advice of a qualified biostatistician. When designing and analyzing a study, it is important to construct and test global hypotheses, rather than to make multiple tests on the data. If the latter cannot be avoided, it is essential to control the risk of making false-positive inferences by applying multiple comparison procedures. For comparing two means or two proportions, it is best to use exact permutation tests rather than the better-known classical ones. For comparing many means, analysis of variance, often of a complex type, is the most powerful approach. The correlation coefficient should never be used to compare the performances of two methods of measurement, or two measures, because it does not detect bias. Instead, the Altman-Bland method of differences or least-products linear regression analysis should be preferred. Finally, the educational value to investigators of interaction with a biostatistician, before, during and after a study, cannot be overemphasized. (c) 2007 S. Karger AG, Basel.
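The exact permutation test recommended here is easy to sketch with the standard library alone; the two small samples below are invented, and for realistic sample sizes one would randomly sample relabellings rather than enumerate all of them:

```python
from itertools import combinations

def permutation_test_two_means(a, b):
    """Exact two-sided permutation test for a difference in means:
    enumerate every way of relabelling the pooled data into groups of
    the original sizes, and count how often the relabelled |mean
    difference| is at least as large as the observed one."""
    pooled = a + b
    n = len(a)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    count, total = 0, 0
    for idx in combinations(range(len(pooled)), n):
        sidx = set(idx)
        ga = [pooled[i] for i in idx]
        gb = [pooled[i] for i in range(len(pooled)) if i not in sidx]
        diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
        if diff >= observed - 1e-12:   # tolerance for float comparison
            count += 1
        total += 1
    return count / total               # exact two-sided p-value

print(permutation_test_two_means([1.0, 2.0, 3.0], [6.0, 7.0, 8.0]))  # → 0.1
```

With only three observations per group there are 20 relabellings, so the smallest attainable two-sided p-value is 2/20 = 0.1, however extreme the data: a useful reminder of why tiny samples cannot yield "significant" exact tests.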
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2013-12-01
The potential of multiple linear regression (MLR) and artificial neural network (ANN) techniques in predicting transient water levels over a groundwater basin was compared. MLR and ANN modeling was carried out at 17 sites in Japan, considering all significant inputs: rainfall, ambient temperature, river stage, 11 seasonal dummy variables, and influential lags of rainfall, ambient temperature, river stage and groundwater level. Seventeen site-specific ANN models were developed, using multi-layer feed-forward neural networks trained with Levenberg-Marquardt backpropagation algorithms. The performance of the models was evaluated using statistical and graphical indicators. Comparison of the goodness-of-fit statistics of the MLR models with those of the ANN models indicated that there is better agreement between the ANN-predicted groundwater levels and the observed groundwater levels at all the sites, compared to the MLR. This finding was supported by the graphical indicators and the residual analysis. Thus, it is concluded that the ANN technique is superior to the MLR technique in predicting the spatio-temporal distribution of groundwater levels in a basin. However, considering the practical advantages of the MLR technique, it is recommended as an alternative and cost-effective groundwater modeling tool.
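A minimal MLR fit of the kind compared here can be written down via the normal equations; the design matrix and response below are invented (an exact linear relationship, so the fit recovers the coefficients), not the study's hydrological data:

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting. Rows of X
    should include a leading 1 for the intercept."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# y = 2 + 3*x1 - 1*x2 exactly, so OLS should recover [2, 3, -1]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [2 + 3 * x1 - x2 for _, x1, x2 in X]
print([round(c, 6) for c in fit_mlr(X, y)])  # ≈ [2.0, 3.0, -1.0]
```

The ANN side of the comparison has no such closed form; its flexibility with nonlinear input-output relationships is precisely what the study credits for the better fit, at the cost of the MLR's transparency.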
Townsley, Michael; Bernasco, Wim; Ruiter, Stijn; Johnson, Shane D.; White, Gentry; Baum, Scott
2015-01-01
Objectives: This study builds on research undertaken by Bernasco and Nieuwbeerta and explores the generalizability of a theoretically derived offender target selection model in three cross-national study regions. Methods: Taking a discrete spatial choice approach, we estimate the impact of both environment- and offender-level factors on residential burglary placement in the Netherlands, the United Kingdom, and Australia. Combining cleared burglary data from all study regions in a single statistical model, we make statistical comparisons between environments. Results: In all three study regions, the likelihood an offender selects an area for burglary is positively influenced by proximity to their home, the proportion of easily accessible targets, and the total number of targets available. Furthermore, in two of the three study regions, juvenile offenders under the legal driving age are significantly more influenced by target proximity than adult offenders. Post hoc tests indicate the magnitudes of these impacts vary significantly between study regions. Conclusions: While burglary target selection strategies are consistent with opportunity-based explanations of offending, the impact of environmental context is significant. As such, the approach undertaken in combining observations from multiple study regions may aid criminology scholars in assessing the generalizability of observed findings across multiple environments. PMID:25866418
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
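The amplification described above is easy to quantify: across m independent tests each run at level α, the chance of at least one false positive is 1 − (1 − α)^m, and the Bonferroni remedy simply tests each hypothesis at α/m. A one-function sketch:

```python
def familywise_error(alpha, m):
    """Probability of at least one false positive across m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

# 20 independent tests at alpha = 0.05: roughly a 64% chance of at
# least one false positive
print(round(familywise_error(0.05, 20), 3))

# Bonferroni: test each at alpha/m to keep the familywise rate <= alpha
print(round(familywise_error(0.05 / 20, 20), 3))
```

The independence assumption is a simplification (subgroup and repeated-look tests are typically correlated), which is why the article surveys more refined corrections, but the direction of the effect is the same.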
Rothlind, Johannes C; York, Michele K; Carlson, Kim; Luo, Ping; Marks, William J; Weaver, Frances M; Stern, Matthew; Follett, Kenneth; Reda, Domenic
2015-06-01
Deep brain stimulation (DBS) improves motor symptoms in Parkinson's disease (PD), but questions remain regarding neuropsychological decrements sometimes associated with this treatment, including rates of statistically and clinically meaningful change, and whether there are differences in outcome related to surgical target. Neuropsychological functioning was assessed in patients with Parkinson's disease (PD) at baseline and after 6 months in a prospective, randomised, controlled study comparing best medical therapy (BMT, n=116) and bilateral deep brain stimulation (DBS, n=164) at either the subthalamic nucleus (STN, n=84) or globus pallidus interna (GPi, n=80), using standardised neuropsychological tests. Measures of functional outcomes were also administered. Comparison of the two DBS targets revealed few significant group differences. STN DBS was associated with greater mean reductions on some measures of processing speed, only one of which was statistically significant in comparison with stimulation of GPi. GPi DBS was associated with lower mean performance on one measure of learning and memory that requires mental control and cognitive flexibility. Compared to the group receiving BMT, the combined DBS group had significantly greater mean reductions at 6-month follow-up in performance on multiple measures of processing speed and working memory. After calculating thresholds for statistically reliable change from data obtained from the BMT group, the combined DBS group also displayed higher rates of decline in neuropsychological test performance. Among study completers, 18 (11%) study participants receiving DBS displayed reliable decline by multiple indicators in two or more cognitive domains, a significantly higher rate than in the BMT group (3%). This multi-domain cognitive decline was associated with less beneficial change in subjective ratings of everyday functioning and quality of life (QOL). 
The multi-domain cognitive decline group continued to function at a lower level at 24-month follow-up. In those with PD, the likelihood of significant decline in neuropsychological functioning increases with DBS, affecting a small minority of patients who also appear to respond less optimally to DBS by other indicators of QOL. NCT00056563 and NCT01076452. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Casanova, I; Diaz, A; Pinto, S; de Carvalho, M
2014-04-01
The technique of threshold tracking to test axonal excitability gives information about nodal and internodal ion channel function. We aimed to investigate the variability of motor excitability measurements in healthy controls, taking into account age, gender, body mass index (BMI) and small changes in skin temperature. We examined the left median nerve of 47 healthy controls using the automated threshold-tracking program, QTRAC. Statistical multiple regression analysis was applied to test the relationship between nerve excitability measurements and subject variables. Comparisons between genders did not find any significant difference (P > 0.2 for all comparisons). Multiple regression analysis showed that motor amplitude decreases with age and temperature, stimulus-response slope decreases with age and BMI, and accommodation half-time decreases with age and temperature. The changes related to demographic features on TRONDE protocol parameters are small and less important than in conventional nerve conduction studies. Nonetheless, our results underscore the relevance of careful temperature control, and indicate that interpretation of stimulus-response slope and accommodation half-time should take into account age and BMI. In contrast, gender is not of major relevance to axonal threshold findings in motor nerves. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Arroyo González, Rafael; Kita, Mariko; Crayton, Heidi; Havrdova, Eva; Margolin, David H; Lake, Stephen L; Giovannoni, Gavin
2017-09-01
Alemtuzumab was superior on clinical and magnetic resonance imaging (MRI) outcomes versus subcutaneous interferon beta-1a in phase 3 trials in patients with relapsing-remitting multiple sclerosis. To examine quality-of-life (QoL) outcomes in the alemtuzumab phase 3 trials. Patients who were treatment naive (Comparison of Alemtuzumab and Rebif ® Efficacy in Multiple Sclerosis I [CARE-MS I]) or had an inadequate response to prior therapy (CARE-MS II) received annual courses of alemtuzumab 12 mg/day at baseline (5 days) and Month 12 (3 days) or subcutaneous interferon beta-1a 44 µg three times/week. QoL was measured every 6 or 12 months using Functional Assessment of Multiple Sclerosis (FAMS), European Quality of Life-5 Dimensions (EQ-5D) and its visual analog scale (EQ-VAS), and 36-Item Short-Form Survey (SF-36). Statistically significant improvements from baseline with alemtuzumab were observed on all three QoL instruments at the earliest post-baseline assessment and sustained through Year 2. Statistically significant greater QoL improvements over subcutaneous interferon beta-1a were seen at all time points in CARE-MS II with FAMS, EQ-VAS and SF-36 physical component summary, and in CARE-MS I with FAMS. Patients treated with alemtuzumab had improvements in physical, mental, and emotional QoL regardless of treatment history. Improvements were significantly greater with alemtuzumab versus subcutaneous interferon beta-1a on both disease-specific and general measures of QoL.
Kaynar, Mehmet; Tekinarslan, Erdem; Keskin, Suat; Buldu, İbrahim; Sönmez, Mehmet Giray; Karatag, Tuna; Istanbulluoglu, Mustafa Okan
2015-01-01
To determine and evaluate the effective radiation exposure during a one-year follow-up of urolithiasis patients following SWL (extracorporeal shock wave lithotripsy) treatment. Total effective radiation exposure (ERE) doses for each of the 129 patients (44 kidney stone patients, 41 ureter stone patients, and 44 multiple stone location patients) were calculated by adding up the radiation doses of each ionizing radiation session, including imaging (IVU, KUB, CT), throughout the one-year follow-up period following SWL. The total mean ERE value for the kidney stone group was calculated as 15.91 mSv (5.10-27.60), for the ureter group as 13.32 mSv (5.10-24.70), and for the multiple stone location group as 27.02 mSv (9.41-54.85). There was no statistically significant difference between the kidney and ureter groups in terms of the ERE dose values (p = 0.221). In the comparison of the kidney and ureter stone groups with the multiple stone location group, however, there was a statistically significant difference (p < 0.001). ERE doses should be a factor to be considered right at the initiation of any diagnostic and/or therapeutic procedure. Especially in the case of multiple stone locations, due to the high exposure to ionizing radiation, different imaging modalities with low or no radiation dose should be employed in diagnosis, treatment, and follow-up, with the aim of optimizing diagnosis while minimizing the radiation dose as much as possible.
Influence of pneumococcal conjugate vaccines on acute otitis media in Japan.
Sasaki, Atsushi; Kunimoto, Masaru; Takeno, Sachio; Sumiya, Takahiro; Ishino, Takashi; Sugino, Hirotoshi; Hirakawa, Katsuhiro
2017-11-01
This study investigated: (i) changes in the incidence of acute otitis media (AOM) following the introduction of public funding for free inoculation with 7- and 13-valent pneumococcal conjugate vaccines (PCV7 and PCV13, respectively) and (ii) changes in the rate of myringotomies for AOM (MyfA) in children 1 year following the publication of the first edition of the clinical practice guidelines for the diagnosis and management of AOM in children in Japan. PCV7 was launched on the Japanese market in 2010 and gained public funding in 2011. PCV7 was replaced with PCV13 in November 2013. Using the Japan Medical Data Center Claims Database, an 11-year study conducted between January 2005 and December 2015 investigated the decline in the incidence of visits to medical institutions (VtMI) due to all-cause AOM in children <15 years. The rate of MyfA from January 2007 to December 2015 was also investigated, and changes before and after the introduction of public funding for PCV7 (pfPCV7) and PCV13 (pfPCV13) for children were examined. Statistical data for the age group between 10 years and <15 years served as the control. An analysis was conducted to examine changes for each age group, from infants that had received PCVs to children <5 years. Statistical analysis was performed using the chi-square test and Ryan's multiple comparison tests at a 5% level of significance. Due to significant changes in the guidelines on the indications for myringotomy introduced in 2013, statistical analysis of the rate of MyfA was limited to the pre- and post-PCV7 period. After the introduction of pfPCV7 and pfPCV13, no significant suppression of the incidence of VtMI was observed in any age group. There was a gradual decline in the rate of MyfA after 2011. Compared to the control group, significant differences were observed in all age groups from infants to children <5 years (p < 0.009, chi-square test).
Within 2 years after the introduction of PCV7, a significant decline in the rate of MyfA was observed in 1- and 5-year-olds using Ryan's multiple comparison tests at a 5% level of significance. The preventative effect of PCVs on AOM was not established in this study. There was, however, a significant decline in the rate of MyfA among 1- and 5-year-olds. Taking into consideration past studies, PCV7 may play a role in preventing the aggravation of AOM in 1-year-olds. When evaluating the effectiveness of PCVs, measures to evaluate severity may be as important as evaluating disease prevention. Copyright © 2017 Elsevier B.V. All rights reserved.
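The period-versus-outcome testing described above reduces to chi-square tests on 2x2 tables. A minimal hand-rolled sketch follows; the counts are hypothetical, and a plain Bonferroni split of alpha is shown as a simpler stand-in for Ryan's stepwise multiple comparison adjustment.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: myringotomy yes/no, before vs. after public funding
stat = chi_square_2x2(40, 960, 22, 978)

# With several age groups each compared against the control, a simple
# Bonferroni bound divides alpha by the number of comparisons; Ryan's
# procedure is a less conservative stepwise refinement of the same idea.
alpha_per_test = 0.05 / 5
```

A statistic above the 1-df critical value 3.84 is significant at the unadjusted 5% level; each comparison here would instead be judged against the divided alpha.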
Baxter, Melissa; Withey, Sarah; Harrison, Sean; Segeritz, Charis-Patricia; Zhang, Fang; Atkinson-Dell, Rebecca; Rowe, Cliff; Gerrard, Dave T; Sison-Young, Rowena; Jenkins, Roz; Henry, Joanne; Berry, Andrew A; Mohamet, Lisa; Best, Marie; Fenwick, Stephen W; Malik, Hassan; Kitteringham, Neil R; Goldring, Chris E; Piper Hanley, Karen; Vallier, Ludovic; Hanley, Neil A
2015-03-01
Hepatocyte-like cells (HLCs), differentiated from pluripotent stem cells by the use of soluble factors, can model human liver function and toxicity. However, the maturity of HLCs, and whether any deficit represents a true fetal state or aberrant differentiation, is at present unclear and compounded by comparison to potentially deteriorated adult hepatocytes. Therefore, we generated HLCs from multiple lineages, using two different protocols, for direct comparison with fresh fetal and adult hepatocytes. Protocols were developed for robust differentiation. Multiple transcript, protein and functional analyses compared HLCs to fresh human fetal and adult hepatocytes. HLCs were comparable to those of other laboratories by multiple parameters. Transcriptional changes during differentiation mimicked human embryogenesis and showed more similarity to pericentral than periportal hepatocytes. Unbiased proteomics demonstrated greater proximity to liver than 30 other human organs or tissues. However, by comparison to fresh material, HLC maturity was shown by transcript, protein and function to be fetal-like and short of the adult phenotype. The expression of 81% of phase 1 enzymes in HLCs was significantly upregulated and half were statistically not different from fetal hepatocytes. HLCs secreted albumin and metabolized testosterone (CYP3A) and dextrorphan (CYP2D6) like fetal hepatocytes. In seven bespoke tests, devised by principal components analysis to distinguish fetal from adult hepatocytes, HLCs from two different source laboratories consistently demonstrated fetal characteristics. HLCs from different sources are broadly comparable, with unbiased proteomic evidence for faithful differentiation down the liver lineage. This current phenotype mimics human fetal rather than adult hepatocytes. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Adaptive graph-based multiple testing procedures
Klinglmueller, Florian; Posch, Martin; Koenig, Franz
2016-01-01
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of these graph-based tests. We generalize graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule need not be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, at the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study, and its operating characteristics are investigated by simulations. PMID:25319733
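The underlying non-adaptive procedure can be sketched as the sequentially rejective graph algorithm in the style of Bretz et al.: each hypothesis holds a share of alpha, and a rejected hypothesis passes its share along weighted outgoing edges. The adaptive interim machinery of the paper is not shown here.

```python
def graph_mtp(pvals, weights, G, alpha=0.05):
    """Sequentially rejective graph-based (Bonferroni-type) test.

    weights: initial local alpha-weights (summing to at most 1)
    G[j][l]: fraction of H_j's weight passed to H_l when H_j is rejected
    Returns the set of indices of rejected hypotheses.
    """
    m = len(pvals)
    w = list(weights)
    g = [row[:] for row in G]
    active = set(range(m))
    rejected = set()
    progress = True
    while progress:
        progress = False
        for j in sorted(active):
            if w[j] > 0 and pvals[j] <= alpha * w[j]:
                rejected.add(j)
                active.discard(j)
                progress = True
                # redistribute H_j's alpha-weight along its outgoing edges
                for l in active:
                    w[l] += w[j] * g[j][l]
                # update transition weights among the remaining hypotheses
                for l in active:
                    for k in active:
                        if l == k:
                            continue
                        denom = 1.0 - g[l][j] * g[j][l]
                        g[l][k] = ((g[l][k] + g[l][j] * g[j][k]) / denom
                                   if denom > 0 else 0.0)
                break  # restart the scan after each rejection
    return rejected
```

With two hypotheses, equal weights [0.5, 0.5], and a complete two-node graph, the procedure reduces to Holm's test, one of the special cases mentioned above.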
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-08-16
To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design: Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources: Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria: Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure: Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
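The indirect estimate and the inconsistency measure described above can be sketched with the adjusted indirect (Bucher-style) comparison: log odds ratios that share a common comparator are subtracted and their variances added, and the direct-minus-indirect difference is tested with a z-statistic. The numbers below are hypothetical, not taken from the reviews analyzed.

```python
import math

def indirect_log_or(lor_ac, se_ac, lor_bc, se_bc):
    """Adjusted indirect comparison of A vs B via common comparator C."""
    lor_ab = lor_ac - lor_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return lor_ab, se_ab

def inconsistency_p(lor_direct, se_direct, lor_indirect, se_indirect):
    """Two-sided z-test p-value for direct vs indirect log odds ratios."""
    z = (lor_direct - lor_indirect) / math.sqrt(se_direct ** 2
                                                + se_indirect ** 2)
    return math.erfc(abs(z) / math.sqrt(2.0))

# Hypothetical log odds ratios: A vs C and B vs C trials, then A vs B direct
lor_i, se_i = indirect_log_or(0.50, 0.10, 0.20, 0.10)
p = inconsistency_p(0.65, 0.12, lor_i, se_i)
```

Note how the indirect standard error is necessarily larger than either input, which is one reason inconsistency tests against sparse direct evidence have low power.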
NASA Astrophysics Data System (ADS)
Zack, J. W.
2015-12-01
Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the models' representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day-ahead forecasts of the hourly wind-based generation.
The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression, and (6) analog ensemble, which is a case-matching scheme. The presentation will provide (1) an overview of each method and the experimental design, (2) performance comparisons based on standard metrics such as bias, MAE, and RMSE, (3) a summary of the performance characteristics of each approach, and (4) a preview of further experiments to be conducted.
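The standard verification metrics named in (2) are straightforward to compute. A minimal sketch, with hypothetical hourly generation values:

```python
import math

def bias(forecast, observed):
    """Mean error: systematic over- or under-forecast."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

def mae(forecast, observed):
    """Mean absolute error."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(forecast)

def rmse(forecast, observed):
    """Root mean squared error: penalizes large misses more than MAE."""
    return math.sqrt(sum((f - o) ** 2
                         for f, o in zip(forecast, observed)) / len(forecast))

# Hypothetical hourly MW values (not from the experiment)
forecast = [120.0, 80.0, 45.0]
observed = [100.0, 90.0, 40.0]
scores = (bias(forecast, observed), mae(forecast, observed),
          rmse(forecast, observed))
```

The MOS adjustment itself then amounts to choosing the statistical model that minimizes such metrics on held-out data.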
Harris, Alex H S; Reeder, Rachelle; Hyun, Jenny K
2009-10-01
Journal editors and statistical reviewers are often in the difficult position of catching serious problems in submitted manuscripts after the research is conducted and data have been analyzed. We sought to learn from editors and reviewers of major psychiatry journals what common statistical and design problems they most often find in submitted manuscripts and what they wished to communicate to authors regarding these issues. Our primary goal was to facilitate communication between journal editors/reviewers and researchers/authors and thereby improve the scientific and statistical quality of research and submitted manuscripts. Editors and statistical reviewers of 54 high-impact psychiatry journals were surveyed to learn what statistical or design problems they encounter most often in submitted manuscripts. Respondents completed the survey online. The authors analyzed survey text responses using content analysis procedures to identify major themes related to commonly encountered statistical or research design problems. Editors and reviewers (n=15) who handle manuscripts from 39 different high-impact psychiatry journals responded to the survey. The most commonly cited problems regarded failure to map statistical models onto research questions, improper handling of missing data, not controlling for multiple comparisons, not understanding the difference between equivalence and difference trials, and poor controls in quasi-experimental designs. The scientific quality of psychiatry research and submitted reports could be greatly improved if researchers became sensitive to, or sought consultation on frequently encountered methodological and analytic issues.
Interpreting carnivore scent-station surveys
Sargeant, G.A.; Johnson, D.H.; Berg, W.E.
1998-01-01
The scent-station survey method has been widely used to estimate trends in carnivore abundance. However, statistical properties of scent-station data are poorly understood, and the relation between scent-station indices and carnivore abundance has not been adequately evaluated. We assessed properties of scent-station indices by analyzing data collected in Minnesota during 1986-93. Visits to stations separated by <2 km were correlated for all species because individual carnivores sometimes visited several stations in succession. Thus, visits to stations had an intractable statistical distribution. Dichotomizing results for lines of 10 stations (0 or ≥1 visits) produced binomially distributed data that were robust to multiple visits by individuals. We abandoned 2-way comparisons among years in favor of tests for population trend, which are less susceptible to bias, and analyzed results separately for biogeographic sections of Minnesota because trends differed among sections. Before drawing inferences about carnivore population trends, we reevaluated published validation experiments. Results implicated low statistical power and confounding as possible explanations for equivocal or conflicting results of validation efforts. Long-term trends in visitation rates probably reflect real changes in populations, but poor spatial and temporal resolution, susceptibility to confounding, and low statistical power limit the usefulness of this survey method.
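The dichotomization-and-trend idea above can be sketched as follows. The survey data are invented, and the authors' actual trend test on binomial line counts is more involved than this simple least-squares slope of per-year visitation proportions.

```python
def line_index(station_visits):
    """Dichotomize a line of stations: 1 if any station was visited, else 0.
    This is robust to one animal visiting several stations in succession."""
    return 1 if any(station_visits) else 0

def trend_slope(years, proportions):
    """Ordinary least-squares slope of visitation proportion on year."""
    n = len(years)
    my, mp = sum(years) / n, sum(proportions) / n
    num = sum((y - my) * (p - mp) for y, p in zip(years, proportions))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical survey: each inner list is one 10-station line in a given year
lines_by_year = {
    1986: [[0] * 10, [1, 0, 0, 1, 0, 0, 0, 0, 0, 0], [0] * 10, [1] + [0] * 9],
    1987: [[0] * 10, [0] * 10, [1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0] * 10],
    1988: [[0] * 10, [0] * 10, [0] * 10, [1] + [0] * 9],
}
props = [sum(line_index(l) for l in lines) / len(lines)
         for lines in lines_by_year.values()]
slope = trend_slope(sorted(lines_by_year), props)
```

Collapsing each line to a 0/1 outcome discards the within-line correlation that made raw station visits statistically intractable.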
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.
De Groote, Sandra L; Blecic, Deborah D; Martin, Kristin
2013-04-01
Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures.
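The rank-order correlations used above can be computed by hand: rank each series (averaging ties) and take the Pearson correlation of the ranks. A self-contained sketch with hypothetical per-title use counts:

```python
def ranks(values):
    """1-based ranks, averaging tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman rank-order correlation coefficient."""
    return pearson(ranks(x), ranks(y))

# Hypothetical annual use counts per journal title (not the study data)
vendor = [120, 45, 45, 300, 8]
linkres = [60, 20, 25, 150, 2]
rho = spearman(vendor, linkres)
```

Spearman correlation is appropriate here because use counts are heavily skewed; only the ordering of titles matters, not the raw magnitudes.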
SWATH Mass Spectrometry Performance Using Extended Peptide MS/MS Assay Libraries.
Wu, Jemma X; Song, Xiaomin; Pascovici, Dana; Zaw, Thiri; Care, Natasha; Krisp, Christoph; Molloy, Mark P
2016-07-01
The use of data-independent acquisition methods such as SWATH for mass spectrometry based proteomics is usually performed with peptide MS/MS assay libraries which enable identification and quantitation of peptide peak areas. Reference assay libraries can be generated locally through information dependent acquisition, or obtained from community data repositories for commonly studied organisms. However, there have been no studies performed to systematically evaluate how locally generated or repository-based assay libraries affect SWATH performance for proteomic studies. To undertake this analysis, we developed a software workflow, SwathXtend, which generates extended peptide assay libraries by integration with a local seed library and delivers statistical analysis of SWATH-quantitative comparisons. We designed test samples using peptides from a yeast extract spiked into peptides from human K562 cell lysates at three different ratios to simulate protein abundance change comparisons. SWATH-MS performance was assessed using local and external assay libraries of varying complexities and proteome compositions. These experiments demonstrated that local seed libraries integrated with external assay libraries achieve better performance than local assay libraries alone, in terms of the number of identified peptides and proteins and the specificity to detect differentially abundant proteins. Our findings show that the performance of extended assay libraries is influenced by the MS/MS feature similarity of the seed and external libraries, while statistical analysis using multiple testing corrections increases the statistical rigor needed when searching against large extended assay libraries. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
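The multiple testing correction is not specified in the abstract; the Benjamini-Hochberg FDR adjustment, a common choice when testing many peptides against a large extended library, looks like this as a sketch:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        val = min(prev, pvals[i] * m / (rank + 1))
        adj[i] = val
        prev = val
    return adj

# Hypothetical raw p-values for differential-abundance tests
adjusted = benjamini_hochberg([0.001, 0.04, 0.03, 0.2, 0.5])
```

As the library (and hence m) grows, raw p-values are scaled up more aggressively, which is the increased statistical rigor the authors refer to.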
Megalopoulos, Fivos A; Ochsenkuehn-Petropoulou, Maria T
2015-01-01
A statistical model based on multiple linear regression is developed, to estimate the bromine residual that can be expected after the bromination of cooling water. Make-up water sampled from a power plant in the Greek territory was used for the creation of the various cooling water matrices under investigation. The amount of bromine fed to the circuit, as well as other important operational parameters such as concentration at the cooling tower, temperature, organic load and contact time are taken as the independent variables. It is found that the highest contribution to the model's predictive ability comes from cooling water's organic load concentration, followed by the amount of bromine fed to the circuit, the water's mean temperature, the duration of the bromination period and finally its conductivity. Comparison of the model results with the experimental data confirms its ability to predict residual bromine given specific bromination conditions.
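A generic sketch of fitting such a multiple linear regression via the normal equations; the predictor names and toy values below are hypothetical, not the study's fitted model.

```python
def ols(X, y):
    """OLS coefficients via the normal equations X'X b = X'y.
    X is a list of rows; an intercept column of 1s is assumed included."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Hypothetical design: columns = [intercept, bromine feed, organic load]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [1.0, 3.0, 4.0, 6.0, 8.0]
coef = ols(X, y)  # approximately [1.0, 2.0, 3.0] for this exact-fit toy data
```

The relative contribution of each predictor, as the authors rank them, would then come from standardized coefficients or sums-of-squares decompositions, not from the raw coefficients shown here.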
A Powerful Test for Comparing Multiple Regression Functions.
Maity, Arnab
2012-09-01
In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models: Y(ij) = θ(j)(Z(ij)) + σ(j)(Z(ij))∊(ij), based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ(j)(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the log-likelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).
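A heavily simplified stand-in for the resampling idea: compare pooled versus separate straight-line fits (the paper handles general nonparametric θ(j)(·) and general likelihoods) and calibrate the RSS-reduction statistic by permuting population labels.

```python
import random

def fit_line(z, y):
    """Least-squares line y ≈ a + b*z; returns (a, b)."""
    n = len(z)
    mz, my = sum(z) / n, sum(y) / n
    b = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
         / sum((zi - mz) ** 2 for zi in z))
    return my - b * mz, b

def rss(z, y):
    a, b = fit_line(z, y)
    return sum((yi - (a + b * zi)) ** 2 for zi, yi in zip(z, y))

def perm_test_equal_lines(z1, y1, z2, y2, n_perm=1000, seed=1):
    """Permutation p-value for H0: both populations share one line.
    Statistic: reduction in RSS when fitting separate lines."""
    pooled_rss = rss(list(z1) + list(z2), list(y1) + list(y2))
    t_obs = pooled_rss - (rss(z1, y1) + rss(z2, y2))
    pairs = list(zip(list(z1) + list(z2), list(y1) + list(y2)))
    n1 = len(z1)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pairs)
        za, ya = zip(*pairs[:n1])
        zb, yb = zip(*pairs[n1:])
        if pooled_rss - (rss(za, ya) + rss(zb, yb)) >= t_obs:
            hits += 1
    return hits / n_perm
```

The generalized likelihood ratio statistic of the paper plays the role of the RSS reduction here; permuting (or bootstrapping) labels supplies the null reference distribution in both cases.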
Multivariate space - time analysis of PRE-STORM precipitation
NASA Technical Reports Server (NTRS)
Polyak, Ilya; North, Gerald R.; Valdes, Juan B.
1994-01-01
This paper presents the methodologies and results of the multivariate modeling and two-dimensional spectral and correlation analysis of PRE-STORM rainfall gauge data. Estimated parameters of the models for the specific spatial averages clearly indicate the eastward and southeastward wave propagation of rainfall fluctuations. A relationship between the coefficients of the diffusion equation and the parameters of the stochastic model of rainfall fluctuations is derived that leads directly to the exclusive use of rainfall data to estimate advection speed (about 12 m/s) as well as other coefficients of the diffusion equation of the corresponding fields. The statistical methodology developed here can be used for confirmation of physical models by comparison of the corresponding second-moment statistics of the observed and simulated data, for generating multiple samples of any size, for solving the inverse problem of the hydrodynamic equations, and for application in some other areas of meteorological and climatological data analysis and modeling.
Wirtzfeld, Lauren A; Ghoshal, Goutam; Rosado-Mendez, Ivan M; Nam, Kibo; Park, Yeonjoo; Pawlicki, Alexander D; Miller, Rita J; Simpson, Douglas G; Zagzebski, James A; Oelze, Michael L; Hall, Timothy J; O'Brien, William D
2015-08-01
Quantitative ultrasound estimates such as the frequency-dependent backscatter coefficient (BSC) have the potential to enhance noninvasive tissue characterization and to identify tumors better than traditional B-mode imaging. Thus, investigating system independence of BSC estimates from multiple imaging platforms is important for assessing their capabilities to detect tissue differences. Mouse and rat mammary tumor models, 4T1 and MAT, respectively, were used in a comparative experiment using 3 imaging systems (Siemens, Ultrasonix, and VisualSonics) with 5 different transducers covering a range of ultrasonic frequencies. Functional analysis of variance of the MAT and 4T1 BSC-versus-frequency curves revealed statistically significant differences between the two tumor types. Variations also were found among results from different transducers, attributable to frequency range effects. At 3 to 8 MHz, tumor BSC functions using different systems showed no differences between tumor type, but at 10 to 20 MHz, there were differences between 4T1 and MAT tumors. Fitting an average spline model to the combined BSC estimates (3-22 MHz) demonstrated that the BSC differences between tumors increased with increasing frequency, with the greatest separation above 15 MHz. Confining the analysis to larger tumors resulted in better discrimination over a wider bandwidth. Confining the comparison to higher ultrasonic frequencies or larger tumor sizes allowed for separation of BSC-versus-frequency curves from 4T1 and MAT tumors. These constraints ensure that a greater fraction of the backscattered signals originated from within the tumor, thus demonstrating that statistically significant tumor differences were detected. © 2015 by the American Institute of Ultrasound in Medicine.
Hosaka, Keiichi; Nakajima, Masatoshi; Monticelli, Francesca; Carrilho, Marcela; Yamauti, Monica; Aksornmuang, Juthatip; Nishitani, Yoshihiro; Tay, Franklin R; Pashley, David H; Tagami, Junji
2007-10-01
To evaluate the microtensile bond strength (microTBS) of two all-in-one self-etching adhesive systems and two self-etching primer adhesives with and without simulated hydrostatic pulpal pressure (PP). Flat coronal dentin surfaces of extracted human molars were prepared. Two all-in-one self-etching adhesive systems, One-Up Bond F (OBF; Tokuyama) and Clearfil S3 Bond (Tri-S; Kuraray Medical), and two self-etching primer adhesives, Clearfil Protect Bond (PB; Kuraray) and Clearfil SE Bond (SE; Kuraray), were applied to the dentin surfaces according to manufacturers' instructions under a pulpal pressure of either zero or 15 cm H2O. A hybrid resin composite (Clearfil AP-X, Kuraray) was used for the coronal buildup. Specimens bonded under PP were stored in water at 37 degrees C under 15 cm H2O for 24 h. Specimens not bonded under PP were stored under a PP of zero. After storage, the bonded specimens were sectioned into slabs that were trimmed to hourglass-shaped specimens and subjected to microtensile bond testing. The bond strength data were statistically analyzed using two-way ANOVA and the Holm-Sidak method for multiple comparison tests (alpha = 0.05). The surface area percentage of different failure modes for each material was also statistically analyzed with three one-way ANOVAs and Tukey's multiple comparison tests. The microTBS of OBF and Tri-S fell significantly under PP. However, for the PB- and SE-bonded specimens under PP, there were no significant differences compared with the control groups without PP. The microTBS of the two all-in-one adhesive systems decreased when PP was applied. However, the microTBS of both self-etching primer adhesives did not decrease under PP.
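The Holm-Sidak method cited above computes step-down Sidak-adjusted p-values for the pairwise comparisons that follow the ANOVA. A compact sketch of the adjustment itself (the underlying ANOVA is omitted):

```python
def holm_sidak(pvals):
    """Holm-Sidak step-down adjusted p-values.
    At step i (ascending p-values), the Sidak factor uses the
    number of hypotheses not yet rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        a = 1 - (1 - pvals[i]) ** (m - rank)
        running = max(running, a)  # enforce monotone adjusted p-values
        adj[i] = min(1.0, running)
    return adj

# Hypothetical raw p-values from pairwise adhesive comparisons
adjusted = holm_sidak([0.012, 0.003, 0.04, 0.2])
```

An adjusted p-value below alpha = 0.05 then corresponds to a rejection under the Holm-Sidak procedure, which is uniformly at least as powerful as plain Bonferroni.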
Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.
2011-01-01
Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
Perception of midline deviations in smile esthetics by laypersons.
Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson
2016-01-01
To evaluate laypersons' esthetic perception of upper dental midline deviation and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman, with the midline digitally deviated, was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline toward the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test, and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant differences (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile thus influenced the perception of midline deviation.
NASA Astrophysics Data System (ADS)
Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.
2013-05-01
In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.
Electromagnetic wave scattering from rough terrain
NASA Astrophysics Data System (ADS)
Papa, R. J.; Lennon, J. F.; Taylor, R. L.
1980-09-01
This report presents two aspects of a program designed to calculate electromagnetic scattering from rough terrain: (1) the use of statistical estimation techniques to determine topographic parameters and (2) the results of a single-roughness-scale scattering calculation based on those parameters, including comparison with experimental data. In the statistical part of the present calculation, digitized topographic maps are used to generate data bases for the required scattering cells. The application of estimation theory to the data leads to the specification of statistical parameters for each cell. The estimated parameters are then used in a hypothesis test to decide on a probability density function (PDF) that represents the height distribution in the cell. Initially, the formulation uses a single observation of the multivariate data. A subsequent approach involves multiple observations of the heights on a bivariate basis, and further refinements are being considered. The electromagnetic scattering analysis, the second topic, calculates the amount of specular and diffuse multipath power reaching a monopulse receiver from a pulsed beacon positioned over a rough Earth. The program allows for spatial inhomogeneities and multiple specular reflection points. The analysis of shadowing by the rough surface has been extended to the case where the surface heights are distributed exponentially. The calculated loss of boresight pointing accuracy attributable to diffuse multipath is then compared with the experimental results. The extent of the specular region, the use of localized height variations, and the effect of the azimuthal variation in power pattern are all assessed.
Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution.
Gangnon, Ronald E
2012-03-01
The spatial scan statistic is an important and widely used tool for cluster detection. It is based on the simultaneous evaluation of the statistical significance of the maximum likelihood ratio test statistic over a large collection of potential clusters. In most cluster detection problems, there is variation in the extent of local multiplicity across the study region. For example, using a fixed maximum geographic radius for clusters, urban areas typically have many overlapping potential clusters, whereas rural areas have relatively few. The spatial scan statistic does not account for local multiplicity variation. We describe a previously proposed local multiplicity adjustment based on a nested Bonferroni correction and propose a novel adjustment based on a Gumbel distribution approximation to the distribution of a local scan statistic. We compare the performance of all three statistics in terms of power and a novel unbiased cluster detection criterion. These methods are then applied to the well-known New York leukemia dataset and a Wisconsin breast cancer incidence dataset. © 2011, The International Biometric Society.
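The Gumbel approximation underlying the proposed adjustment can be illustrated with a toy Monte Carlo sketch (hypothetical numbers throughout; squared standard normals stand in for local likelihood-ratio statistics, and the Gumbel is fitted by the method of moments rather than by the paper's procedure):

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo null distribution: each replicate is the maximum of 50
# squared standard normals, a toy stand-in for the maximum
# likelihood-ratio statistic over 50 overlapping candidate clusters
# in one local region.
null_max = (rng.standard_normal((999, 50)) ** 2).max(axis=1)

# Method-of-moments Gumbel fit: scale = s * sqrt(6) / pi,
# loc = mean - (Euler-Mascheroni constant) * scale.
EULER_GAMMA = 0.5772156649
scale = null_max.std(ddof=1) * math.sqrt(6.0) / math.pi
loc = null_max.mean() - EULER_GAMMA * scale

def gumbel_sf(x, loc, scale):
    """Survival function of the Gumbel (max) distribution."""
    return 1.0 - math.exp(-math.exp(-(x - loc) / scale))

observed = 15.0  # hypothetical observed local scan statistic
p_gumbel = gumbel_sf(observed, loc, scale)
p_mc = (1 + int((null_max >= observed).sum())) / (1 + len(null_max))
print(p_gumbel, p_mc)
```

The smooth Gumbel tail gives usable p-values beyond the resolution of the raw Monte Carlo rank, which is the practical appeal of the approximation for locally adjusted scan statistics.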
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single- or multiple-species simple allometry for macro-molecule drugs and for small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single- or multiple-species simple allometry scaling, and this error appears to be mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
Multiple-dose safety study of ibuprofen/codeine and aspirin/codeine combinations.
Friedman, H; Seckman, C; Stubbs, C; Oster, H; Royer, G
1990-01-01
This multiple-dose, double-blind, placebo-controlled, randomized, normal-volunteer study compared formulations of ibuprofen/codeine and aspirin/codeine for systemic safety. Vital signs, hematologic, biochemical, and urinary parameters, side effects, mood, and mental alertness were monitored. The placebo group had fewer gastrointestinal side effects and more frequent stools than the active treatment groups. There was statistical evidence for greater adverse effects of aspirin/codeine on mood and mental alertness in comparison to ibuprofen/codeine and placebo. Ibuprofen/codeine had a more favorable adverse-effect profile than aspirin/codeine. A mild respiratory and cardiac depressant effect attributable to codeine was evident in all active treatment groups after 7 days of frequent therapy. More work needs to be done to elucidate the factors regulating the development of tolerance to the respiratory and cardiovascular depressant effects of opiates in general, and of codeine in particular.
Piepho, H P
1994-11-01
Multilocation trials are often used to analyse the adaptability of genotypes in different environments and to find, for each environment, the genotype that is best adapted, i.e. highest yielding, in that environment. For this purpose, it is of interest to obtain a reliable estimate of the mean yield of a cultivar in a given environment. This article compares two statistical estimation procedures for this task: Additive Main Effects and Multiplicative Interaction (AMMI) analysis and Best Linear Unbiased Prediction (BLUP). A modification of a cross-validation procedure commonly used with AMMI is suggested for trials that are laid out as a randomized complete block design. The use of these procedures is exemplified using five faba bean datasets from German registration trials. BLUP was found to outperform AMMI in four of the five datasets.
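The shrinkage idea that distinguishes BLUP from raw cell means can be sketched in a few lines. A simplified illustration (made-up yields; variance components are assumed known here, whereas in practice they would be estimated, e.g. by REML):

```python
import numpy as np

def blup_cell_means(cell_means, var_gxe, var_err, n_reps):
    """Shrink genotype-by-environment cell means toward the additive
    (grand mean + genotype effect + environment effect) prediction.
    The weight var_gxe / (var_gxe + var_err / n_reps) plays the role
    of the BLUP shrinkage factor for the interaction effects."""
    y = np.asarray(cell_means, dtype=float)
    grand = y.mean()
    g = y.mean(axis=1, keepdims=True) - grand   # genotype main effects
    e = y.mean(axis=0, keepdims=True) - grand   # environment main effects
    additive = grand + g + e
    w = var_gxe / (var_gxe + var_err / n_reps)
    return additive + w * (y - additive)

# Four genotypes x three environments (hypothetical yields):
y = np.array([[5.1, 6.0, 4.8],
              [5.6, 6.4, 5.3],
              [4.9, 6.2, 5.0],
              [5.4, 5.8, 5.2]])
print(blup_cell_means(y, var_gxe=0.05, var_err=0.2, n_reps=3))
```

With a noisy trial (large var_err relative to var_gxe) the estimates collapse toward the additive prediction; with precise data they stay close to the observed cell means, which is the trade-off the AMMI-vs-BLUP comparison probes.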
Holtzclaw, Dan J
2017-02-01
Previously published research for a single metropolitan market (Austin, Texas) found that periodontists fare poorly on the Yelp website for nearly all measured metrics, including average star ratings, number of reviews, review removal rate, and evaluations by "elite" Yelp users. The purpose of the current study is to confirm or refute these findings by expanding datasets to additional metropolitan markets of various sizes and geographic locations. A total of 6,559 Yelp reviews were examined for general dentists, endodontists, pediatric dentists, oral surgeons, orthodontists, and periodontists in small (Austin, Texas), medium (Seattle, Washington), and large (New York City, New York) metropolitan markets. Numerous review characteristics were evaluated, including: 1) total number of reviews; 2) average star rating; 3) review filtering rate; and 4) number of reviews by Yelp members with elite status. Results were compared in multiple ways to determine whether statistically significant differences existed. In all metropolitan markets, periodontists were outperformed by all other dental specialties for all measured Yelp metrics in this study. Intermetropolitan comparisons of periodontal practices showed no statistically significant differences. Periodontists were outperformed consistently by all other dental specialties in every measured metric on the Yelp website. These results were consistent and repeated in all three metropolitan markets evaluated in this study. Poor performance of periodontists on Yelp may be related to the age profile of patients in the typical periodontal practice. This may result in inadvertently biased filtering of periodontal reviews and subsequently poor performance in multiple other categories.
2013-01-01
Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892
Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.
2013-01-01
Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods, based on growth during the intervention and final status, for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B
2013-04-23
Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
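The flavor of such a compression-based metric can be sketched with the classic normalized compression distance (NCD), used here as a stand-in for the paper's CBD definition (the sequences below are made up):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximates the information
    shared between two datasets via a general-purpose compressor.
    Smaller values mean more shared structure."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = b"ACGTACGTACGTTGCA" * 40   # hypothetical tag dataset
seq_b = b"ACGTACGTACGTTGCA" * 40   # near-identical dataset
seq_c = b"TTTTGGGGCCCCAAAA" * 40   # compositionally different dataset
print(ncd(seq_a, seq_b), ncd(seq_a, seq_c))
```

Because the compressor reuses patterns from one dataset when encoding the other, similar communities compress well jointly, yielding a distance without any alignment or phylogenetic inference, which is the source of the speedup the paper reports.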
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05).
Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.
Walstead, Sean P; Deane, Grant B
2014-08-01
Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.
Austin, Peter C; Mamdani, Muhammad M; Juurlink, David N; Hux, Janet E
2006-09-01
To illustrate how multiple hypotheses testing can produce associations with no clinical plausibility. We conducted a study of all 10,674,945 residents of Ontario aged between 18 and 100 years in 2000. Residents were randomly assigned to equally sized derivation and validation cohorts and classified according to their astrological sign. Using the derivation cohort, we searched through 223 of the most common diagnoses for hospitalization until, for each astrological sign, we identified two for which subjects born under that sign had a significantly higher probability of hospitalization compared to subjects born under the remaining signs combined (P < 0.05). We then tested these 24 associations (two per sign) in the independent validation cohort. Residents born under Leo had a higher probability of gastrointestinal hemorrhage (P = 0.0447), while Sagittarians had a higher probability of humerus fracture (P = 0.0123) compared to all other signs combined. After adjusting the significance level to account for multiple comparisons, none of the identified associations remained significant in either the derivation or the validation cohort. Our analyses illustrate how the testing of multiple, non-prespecified hypotheses increases the likelihood of detecting implausible associations. Our findings have important implications for the analysis and interpretation of clinical studies.
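The core phenomenon, that many non-prespecified tests of true nulls yield spurious "significant" findings at roughly a 5% rate, is easy to reproduce in simulation (hypothetical data, not the Ontario cohort):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

m, n = 223, 500          # number of tests, subjects per group
alpha = 0.05
naive_hits = bonferroni_hits = 0
for _ in range(m):
    g1 = rng.standard_normal(n)
    g2 = rng.standard_normal(n)   # same distribution: the null is true
    se = math.sqrt(g1.var(ddof=1) / n + g2.var(ddof=1) / n)
    z = (g1.mean() - g2.mean()) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    naive_hits += p < alpha
    bonferroni_hits += p < alpha / m      # Bonferroni-adjusted threshold
print(naive_hits, bonferroni_hits)  # expect roughly 0.05 * 223 = 11 naive hits
```

Every "discovery" here is false by construction, mirroring the Leo and Sagittarius associations that vanished once the significance level was adjusted for multiplicity.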
A Low-Cost Method for Multiple Disease Prediction.
Bayati, Mohsen; Bhaskar, Sonia; Montanari, Andrea
Recently, in response to the rising costs of healthcare services, employers that are financially responsible for the healthcare costs of their workforce have been investing in health improvement programs for their employees. A main objective of these so-called "wellness programs" is to reduce the incidence of chronic illnesses such as cardiovascular disease, cancer, diabetes, and obesity, with the goal of reducing future medical costs. The majority of these wellness programs include an annual screening to detect individuals with the highest risk of developing chronic disease. Once these individuals are identified, the company can invest in interventions to reduce the risk of those individuals. However, capturing many biomarkers per employee creates a costly screening procedure. We propose a statistical data-driven method to address this challenge by minimizing the number of biomarkers in the screening procedure while maximizing the predictive power over a broad spectrum of diseases. Our solution uses multi-task learning and group dimensionality reduction from machine learning and statistics. We provide empirical validation of the proposed solution using data from two different electronic medical records systems, with comparisons to a statistical benchmark.
Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong
2016-05-01
The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network
NASA Technical Reports Server (NTRS)
Wiesenberg, Brent
1999-01-01
Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.
Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error
ERIC Educational Resources Information Center
González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén
2015-01-01
An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…
2015-07-15
Long-term effects on cancer survivors' quality of life of physical training versus physical training combined with cognitive-behavioral therapy. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors.
Sajobi, Tolulope T; Lix, Lisa M; Singh, Gurbakhshash; Lowerison, Mark; Engbers, Jordan; Mayo, Nancy E
2015-03-01
Response shift (RS) is an important phenomenon that influences the assessment of longitudinal changes in health-related quality of life (HRQOL) studies. Given that RS effects are often small, missing data due to attrition or item non-response can contribute to failure to detect RS effects. Since missing data are often encountered in longitudinal HRQOL data, effective strategies for dealing with missing data are important to consider. This study aims to compare different imputation methods for the detection of reprioritization RS in the HRQOL of caregivers of stroke survivors. Data were from a Canadian multi-center longitudinal study of caregivers of stroke survivors over a one-year period. The Stroke Impact Scale physical function score at baseline, with a cutoff of 75, was used to measure patient stroke severity for the reprioritization RS analysis. Mean imputation, likelihood-based expectation-maximization (EM) imputation, and multiple imputation methods were compared in test procedures based on changes in relative importance weights to detect RS in SF-36 domains over a 6-month period. Monte Carlo simulation methods were used to compare the statistical power of relative importance test procedures for detecting RS in incomplete longitudinal data under different missing data mechanisms and imputation methods. Of the 409 caregivers, 15.9% and 31.3% had missing data at baseline and 6 months, respectively. There were no statistically significant changes in relative importance weights on any of the domains when complete-case analysis was adopted, but statistically significant changes were detected on the physical functioning and/or vitality domains when mean imputation or EM imputation was adopted. There were also statistically significant changes in relative importance weights for the physical functioning, mental health, and vitality domains when the multiple imputation method was adopted.
Our simulations revealed that relative importance test procedures were least powerful under complete-case analysis method and most powerful when a mean imputation or multiple imputation method was adopted for missing data, regardless of the missing data mechanism and proportion of missing data. Test procedures based on relative importance measures are sensitive to the type and amount of missing data and imputation method. Relative importance test procedures based on mean imputation and multiple imputation are recommended for detecting RS in incomplete data.
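The variance-distorting behavior of single mean imputation, compared with a minimal multiple-imputation scheme, can be seen in a toy example (hypothetical data, not the study's HRQOL measures; the imputation model is a simple normal fit to the observed values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy HRQOL-like score with about 30% of values missing completely at random.
y = rng.normal(50, 10, size=400)
missing = rng.random(400) < 0.3
y_obs = np.where(missing, np.nan, y)
obs = y_obs[~np.isnan(y_obs)]

# Single mean imputation: every gap filled with the observed mean,
# which shrinks the variance of the completed data.
mean_imp = np.where(np.isnan(y_obs), obs.mean(), y_obs)

# Minimal multiple-imputation sketch: draw each missing value from a
# normal fitted to the observed data, repeat M times, pool estimates
# (Rubin's rules for a simple mean reduce to averaging here).
M = 20
pooled_means, pooled_vars = [], []
for _ in range(M):
    draws = rng.normal(obs.mean(), obs.std(ddof=1),
                       size=int(np.isnan(y_obs).sum()))
    completed = y_obs.copy()
    completed[np.isnan(completed)] = draws
    pooled_means.append(completed.mean())
    pooled_vars.append(completed.var(ddof=1))

print(round(mean_imp.var(ddof=1), 1), round(float(np.mean(pooled_vars)), 1))
```

Mean imputation understates the variance of the completed data, which can distort downstream test statistics; drawing from a fitted distribution across multiple imputations preserves it, consistent with the power differences the study reports across imputation methods.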
Tekinarslan, Erdem; Keskin, Suat; Buldu, İbrahim; Sönmez, Mehmet Giray; Karatag, Tuna; Istanbulluoglu, Mustafa Okan
2015-01-01
Introduction To determine and evaluate the effective radiation exposure during a one-year follow-up of urolithiasis patients following SWL (extracorporeal shock wave lithotripsy) treatment. Material and methods Total effective radiation exposure (ERE) doses for each of the 129 patients (44 kidney stone patients, 41 ureter stone patients, and 44 patients with multiple stone locations) were calculated by adding up the radiation doses of each ionizing radiation session, including imaging (IVU, KUB, CT), throughout a one-year follow-up period following SWL. Results The total mean ERE value was calculated as 15.91 mSv (5.10-27.60) for the kidney stone group, 13.32 mSv (5.10-24.70) for the ureter group, and 27.02 mSv (9.41-54.85) for the multiple stone location group. There was no statistically significant difference between the kidney and ureter groups in terms of ERE dose values (p = 0.221). In the comparison of the kidney and ureter stone groups with the multiple stone location group, however, there was a statistically significant difference (p = 0.000). Conclusions ERE doses should be a factor considered right at the initiation of any diagnostic and/or therapeutic procedure. Especially in the case of multiple stone locations, because of the high exposure to ionizing radiation, imaging modalities with a low dose or no dose should be employed in diagnosis, treatment, and follow-up, with the aim of optimizing diagnosis while minimizing the radiation dose as much as possible. PMID:26568880
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, X; Sun, T; Yin, Y
Purpose: To study the dosimetric impact of intensity-modulated radiotherapy (IMRT), hybrid intensity-modulated radiotherapy (h-IMRT), and volumetric modulated arc therapy (VMAT) for whole-brain radiotherapy (WBRT) with simultaneous integrated boost in patients with multiple brain metastases. Methods: Ten patients with multiple brain metastases were included in this analysis. The prescribed dose was 45 Gy to the whole brain (PTVWBRT) and 55 Gy to individual brain metastases (PTVboost), delivered simultaneously in 25 fractions. Three treatment techniques were designed: a 7-field equally spaced IMRT plan, a hybrid IMRT plan, and VMAT with two 358° arcs. In the hybrid IMRT plan, two fields (90° and 270°) were planned to the whole brain. This was used as a base dose plan; a 5-field IMRT plan was then optimized on top of the two-field plan. The dose distribution in the target, the dose to the organs at risk, and the total MU of the three techniques were compared. Results: For the target dose, conformity, and homogeneity in the PTV, no statistically significant differences were observed among the three techniques. For the maximum dose in the bilateral lenses and the mean dose in the bilateral eyes, the IMRT and h-IMRT plans showed the highest and lowest values, respectively. No statistically significant differences were observed in the doses to the optic nerves and brainstem. For monitor units, the IMRT and VMAT plans showed the highest and lowest values, respectively. Conclusion: For WBRT with simultaneous integrated boost in patients with multiple brain metastases, hybrid IMRT could reduce the doses to the lenses and eyes. It is feasible for patients with brain metastases.
Mathysen, Danny G P; Aclimandos, Wagih; Roelant, Ella; Wouters, Kristien; Creuzot-Garcher, Catherine; Ringens, Peter J; Hawlina, Marko; Tassignon, Marie-José
2013-11-01
To investigate whether the introduction of item-response theory (IRT) analysis, in parallel to the 'traditional' statistical analysis methods available for performance evaluation of multiple T/F items as used in the European Board of Ophthalmology Diploma (EBOD) examination, has proved beneficial, and secondly, to study whether the overall assessment performance of the current written part of EBOD is sufficiently high (KR-20 ≥ 0.90) to be kept as the examination format in future EBOD editions. 'Traditional' analysis methods for individual MCQ item performance comprise P-statistics, Rit-statistics, and item discrimination, while overall reliability is evaluated through KR-20 for multiple T/F items. The additional set of statistical analysis methods for the evaluation of EBOD comprises mainly IRT analysis. These analysis techniques are used to monitor whether the introduction of negative marking for incorrect answers (since EBOD 2010) has a positive influence on the statistical performance of EBOD as a whole and of its individual test items in particular. IRT analysis demonstrated that item performance parameters should not be evaluated individually, but should be related to one another. Before the introduction of negative marking, the overall EBOD reliability (KR-20) was good, though with room for improvement (EBOD 2008: 0.81; EBOD 2009: 0.78). After the introduction of negative marking, the overall reliability of EBOD improved significantly (EBOD 2010: 0.92; EBOD 2011: 0.91; EBOD 2012: 0.91). Although many statistical performance parameters are available to evaluate individual items, our study demonstrates that the overall reliability assessment remains the only crucial parameter allowing comparison. While individual item performance analysis is worthwhile as a secondary analysis, drawing final conclusions from it is more difficult. Performance parameters need to be related to one another, as shown by IRT analysis.
Therefore, IRT analysis has proved beneficial for the statistical analysis of EBOD. Introduction of negative marking has led to a significant increase in the reliability (KR-20 > 0.90), indicating that the current examination format can be kept for future EBOD examinations. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
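The KR-20 reliability coefficient reported above can be computed directly from a binary item-response matrix. A minimal sketch with hypothetical data (this version uses the population variance of total scores; some texts use the sample variance instead):

```python
def kr20(responses):
    """KR-20 for a persons x items matrix of 0/1 scores:
    (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total scores))."""
    n, k = len(responses), len(responses[0])
    # Proportion answering each item correctly, and sum of p*q per item.
    p = [sum(row[j] for row in responses) / n for j in range(k)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    # Population variance of the examinees' total scores.
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq_sum / var_t)

# Four examinees, three true/false items.
print(kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]))  # 0.75
```

On this toy matrix the coefficient is 0.75, below the KR-20 ≥ 0.90 threshold the EBOD board required.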
Poulin, Julie; Chouinard, Sylvie; Pampoulova, Tania; Lecomte, Yves; Stip, Emmanuel; Godbout, Roger
2010-10-30
Patients with schizophrenia may have sleep disorders even when clinically stable under antipsychotic treatment. To better understand this issue, we measured sleep characteristics between 1999 and 2003 in 150 outpatients diagnosed with Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) schizophrenia or schizoaffective disorder and 80 healthy controls using a sleep habits questionnaire. Comparisons between the two groups were performed, and multiple comparisons were Bonferroni corrected. Compared to healthy controls, patients with schizophrenia reported significantly increased sleep latency, time in bed, total sleep time, and frequency of naps during weekdays and weekends, along with normal sleep efficiency, sleep satisfaction, and feeling of restfulness in the morning. In conclusion, sleep-onset insomnia is a major, enduring disorder in middle-aged, non-hospitalized patients with schizophrenia who are otherwise clinically stable under antipsychotic and adjuvant medications. Noteworthy, these patients do not complain of sleep-maintenance insomnia but report increased sleep propensity and normal sleep satisfaction. These results may reflect circadian disturbances in schizophrenia, but objective laboratory investigations are needed to confirm subjective sleep reports. Copyright © 2009 Elsevier Ltd. All rights reserved.
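The Bonferroni correction applied in the study above simply multiplies each raw p-value by the number of comparisons, capped at 1. A minimal sketch with hypothetical p-values:

```python
def bonferroni_adjust(p_values):
    """Bonferroni-adjusted p-values: min(1, p * m) for m comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three hypothetical raw p-values from three sleep-variable comparisons.
print(bonferroni_adjust([0.001, 0.02, 0.4]))
```

An adjusted value is then compared against the usual 0.05 threshold, which is equivalent to testing each raw p-value at 0.05/m.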
Selected 1966-69 interior Alaska wildfire statistics with long-term comparisons.
Richard J. Barney
1971-01-01
This paper presents selected interior Alaska forest and range wildfire statistics for the period 1966-69. Comparisons are made with the decade 1956-65 and the 30-year period 1940-69, which are essentially the total recorded statistical history on wildfires available for Alaska.
ERIC Educational Resources Information Center
Noell, George H.; Gresham, Frank M.
2001-01-01
Describes design logic and potential uses of a variant of the multiple-baseline design. The multiple-baseline multiple-sequence (MBL-MS) consists of multiple-baseline designs that are interlaced with one another and include all possible sequences of treatments. The MBL-MS design appears to be primarily useful for comparison of treatments taking…
Al Quran, Firas A M; Kamal, Mudar S
2006-06-01
Two occlusal splints, the full-arch stabilization splint and the anterior midline point stop (AMPS) device, were evaluated for their efficiency in relieving myogenous temporomandibular disorders (TMD). One hundred and fourteen patients with myogenous TMD were distributed into 3 groups. The first group was treated with the AMPS device, the second with the stabilization splint, and the third group served as the control. Pain intensity was scored using the visual analogue scale before treatment and 1 month and 3 months after treatment. The Statistical Package for the Social Sciences (SPSS, Chicago, Ill) and multiple comparisons tests were used to compare results before and after treatment and to compare the groups. Use of the AMPS device in the first group resulted in a significant improvement after 1 month and 3 months (P < or = .001), with a 56.66% pain reduction. A significant improvement was also noticed in the second group (P = .001), with a 47.71% pain reduction. Although the percentage of pain reduction appeared greater in the first group, the difference was not statistically significant. There was a highly significant difference between the groups treated with both kinds of splints and the control group. It was concluded that both types of occlusal splints are beneficial to patients with myogenous TMD.
Kendall, William L.; Hines, James E.; Nichols, James D.; Grant, Evan H. Campbell
2013-01-01
Occupancy statistical models that account for imperfect detection have proved very useful in several areas of ecology, including species distribution and spatial dynamics, disease ecology, and ecological responses to climate change. These models are based on the collection of multiple samples at each of a number of sites within a given season, during which it is assumed the species is either absent or present and available for detection while each sample is taken. However, for some species, individuals are only present or available for detection seasonally. We present a statistical model that relaxes the closure assumption within a season by permitting staggered entry and exit times for the species of interest at each site. Based on simulation, our open model eliminates bias in occupancy estimators and in some cases increases precision. The power to detect the violation of closure is high if detection probability is reasonably high. In addition to providing more robust estimation of occupancy, this model permits comparison of phenology across sites, species, or years, by modeling variation in arrival or departure probabilities. In a comparison of four species of amphibians in Maryland we found that two toad species arrived at breeding sites later in the season than a salamander and frog species, and departed from sites earlier.
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
Agier, Lydiane; Portengen, Lützen; Chadeau-Hyam, Marc; Basagaña, Xavier; Giorgis-Allemand, Lise; Siroux, Valérie; Robinson, Oliver; Vlaanderen, Jelle; González, Juan R; Nieuwenhuijsen, Mark J; Vineis, Paolo; Vrijheid, Martine; Slama, Rémy; Vermeulen, Roel
2016-12-01
The exposome constitutes a promising framework to improve understanding of the effects of environmental exposures on health by explicitly considering multiple testing and avoiding selective reporting. However, exposome studies are challenged by the simultaneous consideration of many correlated exposures. We compared the performances of linear regression-based statistical methods in assessing exposome-health associations. In a simulation study, we generated 237 exposure covariates with a realistic correlation structure and with a health outcome linearly related to 0 to 25 of these covariates. Statistical methods were compared primarily in terms of false discovery proportion (FDP) and sensitivity. On average over all simulation settings, the elastic net and sparse partial least-squares regression showed a sensitivity of 76% and an FDP of 44%; Graphical Unit Evolutionary Stochastic Search (GUESS) and the deletion/substitution/addition (DSA) algorithm revealed a sensitivity of 81% and an FDP of 34%. The environment-wide association study (EWAS) underperformed these methods in terms of FDP (average FDP, 86%) despite a higher sensitivity. Performances decreased considerably when assuming an exposome exposure matrix with high levels of correlation between covariates. Correlation between exposures is a challenge for exposome research, and the statistical methods investigated in this study were limited in their ability to efficiently differentiate true predictors from correlated covariates in a realistic exposome context. Although GUESS and DSA provided a marginally better balance between sensitivity and FDP, they did not outperform the other multivariate methods across all scenarios and properties examined, and computational complexity and flexibility should also be considered when choosing between these methods. 
Citation: Agier L, Portengen L, Chadeau-Hyam M, Basagaña X, Giorgis-Allemand L, Siroux V, Robinson O, Vlaanderen J, González JR, Nieuwenhuijsen MJ, Vineis P, Vrijheid M, Slama R, Vermeulen R. 2016. A systematic comparison of linear regression-based statistical methods to assess exposome-health associations. Environ Health Perspect 124:1848-1856; http://dx.doi.org/10.1289/EHP172.
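The two performance metrics used in the comparison above, false discovery proportion (FDP) and sensitivity, can be computed from the set of covariates a method selects and the set of true predictors. A minimal sketch with hypothetical covariate names:

```python
def fdp_and_sensitivity(selected, true_predictors):
    """FDP = false selections / all selections;
    sensitivity = true selections / number of true predictors."""
    selected, truth = set(selected), set(true_predictors)
    true_pos = len(selected & truth)
    fdp = (len(selected) - true_pos) / len(selected) if selected else 0.0
    sensitivity = true_pos / len(truth) if truth else 1.0
    return fdp, sensitivity

# A method selecting {x1, x2, x3} when only {x1, x4} truly affect the outcome.
print(fdp_and_sensitivity({"x1", "x2", "x3"}, {"x1", "x4"}))
```

With correlated exposures, methods tend to pick proxies of true predictors, inflating FDP exactly as the simulation study reports.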
Bond, John W; Weart, Jocelyn R
2017-05-01
Recovery, profiling, and speculative searching of trace DNA (not attributable to a body fluid/cell type) over a twelve-month period in a U.S. crime laboratory and a U.K. police force are compared. Results show greater numbers of U.S. firearm-related items submitted for analysis compared with the U.K., where the greatest numbers were submitted from burglary or vehicle offenses. U.S. multiple recovery techniques (double swabbing) occurred mainly during laboratory examination, whereas the majority of U.K. multiple recovery occurred at the scene. No statistical difference was observed for useful profiles from single versus multiple recovery. Database loading of interpretable profiles was most successful for U.K. items related to burglary or vehicle offenses. Database associations (matches) represented 7.0% of all U.S. items and 13.1% of all U.K. items. The U.K. strategy for burglary and vehicle examination demonstrated that careful selection of both items and sampling techniques is crucial to obtaining the observed results. © 2016 American Academy of Forensic Sciences.
Prognostic Value of Serum Free Light Chain in Multiple Myeloma.
El Naggar, Amel A; El-Naggar, Mostafa; Mokhamer, El-Hassan; Avad, Mona W
2015-01-01
The measurement of serum free light chain (sFLC) has been shown to be valuable in screening for the presence of plasma cell dyscrasia as well as for baseline prognosis in newly diagnosed patients. The aim of the present work was to study the prognostic value of sFLC in multiple myeloma (MM) in relation to other serum biomarkers, response to therapy, and survival. Forty-five newly diagnosed patients with MM were included in the study. Patients were divided into responder and non-responder groups according to response to therapy. sFLC and serum amyloid A (SAA) were measured by immunonephelometry. The non-responder group showed a statistically significantly higher kappa/lambda or lambda/kappa ratio and a higher β2-microglobulin level, but a lower albumin level at presentation, compared to the responder group (P < 0.001). However, no statistically significant difference was detected between the two groups regarding SAA or calcium levels. Comparison between sFLC ratios obtained before and after therapy revealed a significant decrease after treatment in the responder group (P = 0.05). Survival was significantly inferior in patients with an FLC ratio of ≥ 2.6 or ≤ 0.56 compared with those with an FLC ratio between 0.56 and 2.6 (P = 0.002).
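The survival cut-offs reported above (FLC ratio ≥ 2.6 or ≤ 0.56) amount to a simple classification rule. A minimal sketch; the group labels are illustrative, not the authors' terminology:

```python
def flc_risk_group(kappa, lam):
    """Classify by the serum free light chain kappa/lambda ratio using the
    abstract's cut-offs: ratios outside (0.56, 2.6) carried inferior survival."""
    ratio = kappa / lam
    return "abnormal-ratio" if ratio >= 2.6 or ratio <= 0.56 else "normal-ratio"

# Hypothetical kappa and lambda concentrations (same units).
print(flc_risk_group(3.9, 1.3))  # ratio 3.0 -> abnormal-ratio
print(flc_risk_group(1.0, 1.0))  # ratio 1.0 -> normal-ratio
```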
GOEAST: a web-based software toolkit for Gene Ontology enrichment analysis.
Zheng, Qi; Wang, Xiu-Jie
2008-07-01
Gene Ontology (GO) analysis has become a commonly used approach for functional studies of large-scale genomic or transcriptomic data. Although many software tools with GO-related analysis functions exist, new tools are still needed to meet the requirements of data generated by newly developed technologies or advanced analysis purposes. Here, we present the Gene Ontology Enrichment Analysis Software Toolkit (GOEAST), an easy-to-use web-based toolkit that identifies statistically overrepresented GO terms within given gene sets. Compared with available GO analysis tools, GOEAST has the following improved features: (i) GOEAST displays enriched GO terms in graphical format according to their relationships in the hierarchical tree of each GO category (biological process, molecular function, and cellular component), and therefore provides a better understanding of the correlations among enriched GO terms; (ii) GOEAST supports analysis of data from various sources (probe or probe set IDs of Affymetrix, Illumina, Agilent, or customized microarrays, as well as different gene identifiers) and multiple species (about 60 prokaryote and eukaryote species); (iii) one unique feature of GOEAST is the ability to cross-compare the GO enrichment status of multiple experiments to identify functional correlations among them. GOEAST also provides rigorous statistical tests to enhance the reliability of analysis results. GOEAST is freely accessible at http://omicslab.genetics.ac.cn/GOEAST/
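GO term overrepresentation of the kind GOEAST reports is typically assessed with a hypergeometric (one-sided Fisher) test. A minimal stdlib sketch of that test, not GOEAST's own implementation:

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance of seeing at
    least k genes annotated to a GO term in a study set of n genes, when
    K of the N background genes carry the annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 5 of 10 background genes annotated; all 5 study-set genes are annotated.
print(enrichment_p(10, 5, 5, 5))  # 1/252, about 0.004
```

A small p-value means the study set contains more genes with the annotation than drawing genes at random from the background would explain.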
Robust detection of multiple sclerosis lesions from intensity-normalized multi-channel MRI
NASA Astrophysics Data System (ADS)
Karpate, Yogesh; Commowick, Olivier; Barillot, Christian
2015-03-01
Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal Magnetic Resonance Images (MRI) provides a spatial analysis of brain tissues that may lead to the discovery of biomarkers of disease evolution. Better understanding of the disease will improve the discovery of pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm to detect white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework consists of two components. First, intensity standardization is conducted to minimize the inter-subject intensity differences arising from variability of the acquisition process and different scanners. The intensity normalization uses parameters obtained from a robust Gaussian Mixture Model (GMM) estimation that is not affected by the presence of MS lesions. The second component compares the multi-channel MRI of each MS patient with an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter, in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving the results of MS lesion detection.
Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre
2016-01-01
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
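A proper scoring rule that is "sensitive to variation in probabilities near 1," as the evaluation above requires, is for instance the logarithmic score. A sketch of that rule, an illustrative choice rather than necessarily the paper's exact metric:

```python
from math import log

def mean_log_score(probs, outcomes):
    """Mean negative log score (lower is better). Confident predictions of
    the wrong outcome are punished without bound, so the score still
    distinguishes outbreak probabilities such as 0.99 and 0.999."""
    return -sum(log(p) if y else log(1.0 - p)
                for p, y in zip(probs, outcomes)) / len(probs)

# Two hypothetical outbreak-week predictions: a calibrated hit (p=0.9,
# outbreak occurred) and an overconfident false alarm (p=0.99, none occurred).
print(mean_log_score([0.9, 0.99], [1, 0]))
```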
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
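The pattern-extraction step described above (slide a template over the training image and record pattern occurrences in a database) can be sketched in one dimension with hypothetical categorical data; real MPS templates are 2D or 3D:

```python
from collections import Counter

def build_pattern_db(training_image, width):
    """Slide a window of `width` cells over a 1D categorical sequence and
    count every observed pattern -- a miniature MPS pattern database."""
    return Counter(tuple(training_image[i:i + width])
                   for i in range(len(training_image) - width + 1))

# Two facies codes (0, 1) in a toy training "image".
db = build_pattern_db([0, 0, 1, 1, 0, 0, 1], width=2)
print(db)  # pattern (0, 0) occurs twice, etc.
```

It is this database that grows quickly with template size, motivating the tree-structured VQ compression the abstract proposes.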
Giambartolomei, Claudia; Vukcevic, Damjan; Schadt, Eric E; Franke, Lude; Hingorani, Aroon D; Wallace, Chris; Plagnol, Vincent
2014-05-01
Genetic association studies, in particular the genome-wide association study (GWAS) design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits, in particular cardiovascular diseases and lipid biomarkers. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by re-analysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our re-analysis supported 26 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, hence highlighting the value of a formal statistical test. In three cases of reported eQTL-lipid pairs (SYPL2, IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with SORT1, GCKR, and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (implemented online at http://coloc.cs.ucl.ac.uk/coloc/). 
Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases as well as the design of drugs to target disease pathways.
Mameli, Giuseppe; Cocco, Eleonora; Frau, Jessica; Arru, Giannina; Caggiu, Elisa; Marrosu, Maria Giovanna; Sechi, Leonardo A
2016-07-07
Elevated B lymphocyte activating factor (BAFF) levels have been reported in multiple sclerosis (MS) patients; moreover, disease-modifying treatments (DMT) have been shown to influence blood BAFF levels in MS patients, although the significance of these changes is still controversial. In addition, BAFF levels are reportedly increased during infectious diseases. In our study, we investigated serum BAFF concentrations in relation to the antibody response against Mycobacterium avium subspecies paratuberculosis (MAP), Epstein-Barr virus (EBV), and their human homologous epitopes in MS patients and in patients affected by other neurological diseases (OND), divided into Inflammatory Neurological Diseases (IND), Non-Inflammatory Neurological Diseases (NIND), and Undetermined Neurological Diseases (UND), in comparison to healthy controls (HCs). Our results confirmed statistically significantly higher BAFF levels in MS and IND patients in comparison to HCs, but not in NIND and UND patients. Interestingly, BAFF levels were inversely proportional to antibody levels against EBV and MAP peptides, and BAFF levels decreased significantly in MS patients after methylprednisolone therapy. These results imply that lower circulating BAFF concentrations were present in MS patients with a humoral response against MAP and EBV. In conclusion, MS patients with no IgGs against EBV and MAP may support the hypothesis that elevated blood BAFF levels are associated with a more stable disease.
Intersubject Differences in False Nonmatch Rates for a Fingerprint-Based Authentication System
NASA Astrophysics Data System (ADS)
Breebaart, Jeroen; Akkermans, Ton; Kelkboom, Emile
2009-12-01
The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as "goats" due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for "goats", since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of "goats" during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.
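The subject-dependent false nonmatch rates discussed above can be estimated by thresholding each subject's genuine comparison scores. A minimal sketch with hypothetical scores and an illustrative threshold:

```python
def per_subject_fnmr(genuine_scores, threshold):
    """Per-subject false nonmatch rate: the fraction of genuine comparison
    scores that fall below the accept threshold. Subjects with rates far
    above the population average are the "goats" of the abstract."""
    return {subject: sum(s < threshold for s in scores) / len(scores)
            for subject, scores in genuine_scores.items()}

# Hypothetical genuine match scores for two subjects.
scores = {"A": [0.9, 0.8, 0.95], "B": [0.4, 0.6, 0.3]}
print(per_subject_fnmr(scores, threshold=0.5))
```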
El-Baradey, Ghada F; Elshmaa, Nagat S
2014-11-01
The aim was to assess the effectiveness of adding either dexamethasone or midazolam, in comparison with epinephrine, to 0.5% bupivacaine in supraclavicular brachial plexus block. This was a prospective, randomized, controlled, observer-blinded study carried out in Tanta University Hospital on 60 patients of both sexes, American Society of Anesthesiologists physical status I and II, aged 18 to 45 years, undergoing elective upper limb surgery. All patients were anesthetized with ultrasound-guided supraclavicular brachial plexus block and randomly divided into three groups (20 patients each): Group E (epinephrine), 30 mL bupivacaine 0.5% with 1:200,000 epinephrine (5 μg/mL); Group D (dexamethasone), 30 mL bupivacaine 0.5% and dexamethasone 8 mg; and Group M (midazolam), 30 mL bupivacaine 0.5% and midazolam 50 μg/kg. The primary outcome measures were onset and duration of sensory and motor block and time to first analgesic request. The Windows version of SPSS 11.0.1 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Data are presented as mean ± standard deviation; analysis of variance (ANOVA) was used to compare the three groups, followed by the Scheffe test. P < 0.05 was considered statistically significant. Onset of sensory and motor block was significantly more rapid (P < 0.05) in Groups D and M in comparison with Group E. Time to administration of rescue analgesic and duration of sensory and motor block showed a significant increase (P < 0.05) in Group D in comparison with Group M, which in turn showed a significant increase (P < 0.05) in comparison with Group E. In comparison with epinephrine and midazolam, the addition of dexamethasone to bupivacaine gave a more rapid onset of block and a longer time to first analgesic request, with fewer side effects.
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Cui, Jonathan J; Basques, Bryce A; Albert, Todd J; Grauer, Jonathan N
2018-04-09
The presence of missing data is a limitation of large datasets, including the National Surgical Quality Improvement Program (NSQIP). In addressing this issue, most studies use complete case analysis, which excludes cases with missing data, thus potentially introducing selection bias. Multiple imputation, a statistically rigorous approach that approximates missing data and preserves sample size, may be an improvement over complete case analysis. The present study aims to evaluate the impact of using multiple imputation in comparison with complete case analysis for assessing the associations between preoperative laboratory values and adverse outcomes following anterior cervical discectomy and fusion (ACDF) procedures. This is a retrospective review of prospectively collected data. Patients undergoing one-level ACDF were identified in NSQIP 2012-2015. Perioperative adverse outcome variables assessed included the occurrence of any adverse event, severe adverse events, and hospital readmission. Missing preoperative albumin and hematocrit values were handled using complete case analysis and multiple imputation. These preoperative laboratory levels were then tested for associations with 30-day postoperative outcomes using logistic regression. A total of 11,999 patients were included. Of this cohort, 63.5% of patients had missing preoperative albumin and 9.9% had missing preoperative hematocrit. When using complete case analysis, only 4,311 patients were studied. The removed patients were significantly younger, healthier, of a common body mass index, and male. Logistic regression analysis failed to identify either preoperative hypoalbuminemia or preoperative anemia as significantly associated with adverse outcomes. When employing multiple imputation, all 11,999 patients were included. Preoperative hypoalbuminemia was significantly associated with the occurrence of any adverse event and severe adverse events. 
Preoperative anemia was significantly associated with the occurrence of any adverse event, severe adverse events, and hospital readmission. Multiple imputation is a rigorous statistical procedure that is being increasingly used to address missing values in large datasets. Using this technique for ACDF avoided the loss of cases that may have affected the representativeness and power of the study and led to different results than complete case analysis. Multiple imputation should be considered for future spine studies. Copyright © 2018 Elsevier Inc. All rights reserved.
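A toy sketch of the multiple-imputation idea above: draw each missing value several times from the observed distribution, analyse each completed dataset, then pool the estimates. This is a hot-deck-style illustration of the principle, not the NSQIP study's actual procedure:

```python
import random
import statistics

def pooled_mean_after_mi(values, m=5, seed=0):
    """Create m completed copies of `values` by drawing each missing entry
    (None) from the observed entries, take the mean of each completed copy,
    and pool the m means by averaging (Rubin's rule for a point estimate)."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    per_dataset_means = []
    for _ in range(m):
        completed = [rng.choice(observed) if v is None else v for v in values]
        per_dataset_means.append(statistics.mean(completed))
    return statistics.mean(per_dataset_means)

# Hypothetical albumin values in g/dL with two missing measurements.
print(pooled_mean_after_mi([3.5, None, 4.1, None, 3.8]))
```

Unlike complete-case analysis, all five records contribute to the estimate, which is why the technique preserved all 11,999 ACDF patients in the study above.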
Multiple outcomes are often measured on each experimental unit in toxicology experiments. These multiple observations typically imply the existence of correlation between endpoints, and a statistical analysis that incorporates it may result in improved inference. When both disc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Ozer, E-mail: oalgan@ouhsc.edu; Giem, Jared; Young, Julie
To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiation therapy using single isocenter (SI)-based or multiple isocenter (MI)-based treatment planning in patients with fewer than 4 brain metastases. In total, 10 patients with magnetic resonance imaging (MRI) demonstrating 2-3 brain metastases were included in this retrospective study, and 2 sets of stereotactic intensity-modulated radiation therapy (IMRT) treatment plans (SI vs MI) were generated. The hippocampus was contoured on SPGR sequences, and doses received by the hippocampus and the brain were calculated and compared between the 2 treatment techniques. A total of 23 lesions in 10 patients were evaluated. The median tumor volume, the right hippocampus volume, and the left hippocampus volume were 3.15, 3.24, and 2.63 mL, respectively. In comparing the 2 treatment plans, there was no difference in the planning target volume (PTV) coverage except in the tail of the dose-volume histogram (DVH) curve. The only statistically significant dosimetric parameter was the V100. All of the other measured dosimetric parameters, including the V95, V99, and D100, were not significantly different between the 2 treatment planning techniques. None of the dosimetric parameters evaluated for the hippocampus revealed any statistically significant difference between the MI and SI plans. The total brain doses were slightly higher in the SI plans, especially in the lower dose region, although this difference was not statistically significant. The use of the SI-based treatment plan resulted in a 35% reduction in beam-on time. The use of SI treatments for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain when compared with MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.
Algan, Ozer; Giem, Jared; Young, Julie; Ali, Imad; Ahmad, Salahuddin; Hossain, Sabbir
2015-01-01
To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiation therapy using single isocenter (SI)-based or multiple isocenter (MI)-based treatment planning in patients with fewer than 4 brain metastases. In total, 10 patients with magnetic resonance imaging (MRI) demonstrating 2-3 brain metastases were included in this retrospective study, and 2 sets of stereotactic intensity-modulated radiation therapy (IMRT) treatment plans (SI vs MI) were generated. The hippocampus was contoured on SPGR sequences, and doses received by the hippocampus and the brain were calculated and compared between the 2 treatment techniques. A total of 23 lesions in 10 patients were evaluated. The median tumor volume, the right hippocampus volume, and the left hippocampus volume were 3.15, 3.24, and 2.63 mL, respectively. In comparing the 2 treatment plans, there was no difference in the planning target volume (PTV) coverage except in the tail of the dose-volume histogram (DVH) curve. The only statistically significant dosimetric parameter was the V100. All of the other measured dosimetric parameters, including the V95, V99, and D100, were not significantly different between the 2 treatment planning techniques. None of the dosimetric parameters evaluated for the hippocampus revealed any statistically significant difference between the MI and SI plans. The total brain doses were slightly higher in the SI plans, especially in the lower dose region, although this difference was not statistically significant. The use of the SI-based treatment plan resulted in a 35% reduction in beam-on time. The use of SI treatments for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain when compared with MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.
Copyright © 2015 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Statistics attack on `quantum private comparison with a malicious third party' and its improvement
NASA Astrophysics Data System (ADS)
Gu, Jun; Ho, Chih-Yung; Hwang, Tzonelih
2018-02-01
Recently, Sun et al. (Quantum Inf Process 14:2125-2133, 2015) proposed a quantum private comparison protocol allowing two participants to compare the equality of their secrets via a malicious third party (TP). They designed an interesting trap comparison method to prevent the TP from learning the final comparison result. However, this study shows that the malicious TP can use a statistics attack to reveal the comparison result. A simple modification is hence proposed to solve this problem.
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference and to assess the in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions; further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of the ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference in accuracy between two scanner groups, 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference from IOS; deviations of IOS and IMPR were of a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Please don't misuse the museum: 'declines' may be statistical
Grant, Evan H. Campbell
2015-01-01
Detecting declines in populations at broad spatial scales takes enormous effort, and long-term data are often sparser than is desired for estimating trends, identifying drivers of population changes, framing conservation decisions, or taking management actions. Museum records and historic data can be available at large scales across multiple decades, and are therefore an attractive source of information on the comparative status of populations. However, apparent changes in populations may be real (e.g., in response to environmental covariates) or may result from variation in our ability to observe the true population response (also possibly related to environmental covariates). The latter is a (statistical) nuisance in understanding the true status of a population. Evaluating statistical hypotheses alongside more interesting ecological ones is important in the appropriate use of museum data. Two statistical considerations are generally applicable to the use of museum records: first, without initial random sampling, comparison with contemporary results cannot provide inference to the entire range of a species; and second, the availability of individuals in a population for detection may itself respond to environmental changes. Changes in the availability of individuals may reduce the proportion of the population that is present and able to be counted on a given survey event, resulting in an apparent decline even when population size is stable.
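The abstract's central caution can be reproduced in a few lines of simulation: hold the true population constant, let only detection (availability) decline over time, and naive counts show a spurious downward trend. The availability values below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n_true = 500                          # true population size, constant over time
years = np.arange(1950, 2010, 10)
# Availability (probability an individual is present and countable)
# drifts downward over time -- an observation artifact, not a decline.
availability = np.linspace(0.6, 0.2, years.size)

counts = rng.binomial(n_true, availability)
# A naive trend fit to the raw counts is strongly negative...
slope = np.polyfit(years, counts, 1)[0]
# ...even though counts corrected for availability are flat.
corrected = counts / availability
print(slope, corrected.round(0))
```

The fix in practice is to model detection explicitly (e.g., occupancy or N-mixture models) rather than to divide by a known availability, which is rarely available outside a simulation.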
De Groote, Sandra L.; Blecic, Deborah D.; Martin, Kristin
2013-01-01
Objective: Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know whether they correlate highly with the other measures. Methods: Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. Results: There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Discussion and Conclusions: Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed, as they correlate highly with the other measures. PMID:23646026
Perception of midline deviations in smile esthetics by laypersons
Ferreira, Jamille Barros; da Silva, Licínio Esmeraldo; Caetano, Márcia Tereza de Oliveira; da Motta, Andrea Fonseca Jardim; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson
2016-01-01
ABSTRACT Objective: To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. Methods: An album with 12 randomly distributed frontal view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal view smiling photograph was modified to create from 1 mm to 5 mm deviations in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Results: Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Conclusions: Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation. PMID:28125140
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for the overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
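For the simplest special case of such power calculations, a two-group comparison of a single outcome, power for a two-sided t-test follows directly from the noncentral t distribution. This is not the high-dimensional method of the abstract, just the univariate building block; the effect size and sample size below are made-up inputs:

```python
from scipy import stats

def two_sample_t_power(effect_size, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test (equal n, equal variance)."""
    df = 2 * n_per_group - 2
    ncp = effect_size * (n_per_group / 2) ** 0.5   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(reject) under the alternative: probability mass beyond +/- t_crit.
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Cohen's d = 0.5 with 64 subjects per group gives roughly 80% power.
power = two_sample_t_power(effect_size=0.5, n_per_group=64)
print(round(power, 3))
```

Sample size planning then amounts to inverting this function: increase `n_per_group` until the returned power crosses the target.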
Comparison of time-dependent changes in the surface hardness of different composite resins
Ozcan, Suat; Yikilgan, Ihsan; Uctasli, Mine Betul; Bala, Oya; Kurklu, Zeliha Gonca Bek
2013-01-01
Objective: The aim of this study was to evaluate the change over time in the surface hardness of a silorane-based composite resin (Filtek Silorane) and to compare the results with the surface hardness of two methacrylate-based resins (Filtek Supreme and Majesty Posterior). Materials and Methods: From each composite material, 18 wheel-shaped samples (5-mm diameter and 2-mm depth) were prepared. Top and bottom surface hardness of these samples was measured using a Vickers hardness tester. The samples were then stored at 37°C and 100% humidity. After 24 h and 7, 30 and 90 days, the top and bottom surface hardness of the samples was measured. In each measurement, the ratio between the hardness of the top and bottom surfaces was recorded as the hardness ratio. Statistical analysis was performed by one-way analysis of variance, with multiple comparisons by Tukey's test and pairwise comparisons by t-test, at a significance level of P = 0.05. Results: The highest hardness values were obtained from both surfaces of Majesty Posterior and the lowest from Filtek Silorane. Both the top and bottom surface hardness of the methacrylate-based composite resins was high, and there was a statistically significant difference between the top and bottom hardness values only for the silorane-based composite, Filtek Silorane (P < 0.05). The hardness values of all test groups increased after 24 h (P < 0.05). Conclusion: Although the silorane-based composite resin Filtek Silorane showed an adequate hardness ratio, the use of an incremental technique during application is more important for it than for methacrylate-based composites. PMID:24966724
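The analysis pipeline described, one-way ANOVA followed by multiple comparisons, looks roughly like this in Python. The hardness readings are fabricated, and Bonferroni-corrected t-tests are substituted here for the authors' Tukey test as a widely available stand-in:

```python
from itertools import combinations
from scipy import stats

# Fabricated Vickers hardness readings for three materials.
groups = {
    "Majesty Posterior": [92, 95, 90, 94, 93, 91],
    "Filtek Supreme":    [81, 84, 80, 83, 82, 85],
    "Filtek Silorane":   [60, 63, 58, 62, 61, 59],
}

# Omnibus test first: is there any difference among the group means?
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.1f}, p={p_anova:.2g}")

# Then pairwise t-tests with a Bonferroni correction for 3 comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p * len(pairs))
    print(f"{a} vs {b}: adjusted p={p_adj:.2g}")
```

Tukey's HSD (available as `pairwise_tukeyhsd` in statsmodels) is less conservative than Bonferroni when all pairwise contrasts are of interest; the structure of the analysis is the same.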
Hong, Hye Jeong; Kim, Jin Sung; Seo, Wan Seok; Koo, Bon Hoon; Bai, Dai Seg; Jeong, Jin Young
2010-01-01
Objective We investigated executive functions (EFs), as evaluated by the Wisconsin Card Sorting Test (WCST) and other EF measures, in lower-grade (LG) versus higher-grade (HG) elementary-school-age children with attention deficit hyperactivity disorder (ADHD). Methods We classified a sample of 112 ADHD children into 4 groups (of 28 each) based on age (LG vs. HG) and WCST performance [lower vs. higher performance on the WCST, defined by the number of completed categories (CC)]. Participants in each group were matched according to age, gender, ADHD subtype, and intelligence. We used the Wechsler Intelligence Scale for Children 3rd edition to test intelligence and the Computerized Neurocognitive Function Test-IV, which included the WCST, to test EF. Results Comparisons of EF scores in LG ADHD children showed statistically significant differences in backward digit span performance, some verbal learning scores, including all memory scores, and Stroop test scores. However, comparisons of EF scores in HG ADHD children did not show any statistically significant differences. Correlation analyses of the CC and EF variables and stepwise multiple regression analysis in LG ADHD children showed that a combination of the backward forms of the Digit span test and Visual span test in lower-performance ADHD participants significantly predicted the number of CC (R2=0.273, p<0.001). Conclusion This study suggests that the design of any battery of neuropsychological tests for measuring EF in ADHD children should first consider age before interpreting developmental variations and neuropsychological test results. Researchers should consider the dynamics of relationships within EF, as measured by neuropsychological tests. PMID:20927306
Multiple-Solution Problems in a Statistics Classroom: An Example
ERIC Educational Resources Information Center
Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing
2017-01-01
The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact…
2013-01-01
Background As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. Results We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; such methods can be categorized by hypothesis setting: (1) HS_A: DE genes with non-zero effect sizes in all studies, (2) HS_B: DE genes with non-zero effect sizes in one or more studies and (3) HS_r: DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. Conclusions The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS_A, HS_B, and HS_r). Evaluation in real data and results from the MDS and entropy analyses provided an insightful and practical guideline for the choice of the most suitable method in a given application.
All source files for simulation and real data are available on the author’s publication website. PMID:24359104
Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C
2013-12-21
As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.
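At the core of many of the compared methods is a p-value combination rule applied gene by gene across studies. A minimal sketch using SciPy's built-in combiners, with made-up per-study p-values for one hypothetical gene (which combiner suits which hypothesis setting is the subject of the paper above; the comments here are only rough intuition):

```python
from scipy import stats

# Hypothetical per-study p-values for one gene across 5 microarray studies.
p_values = [0.01, 0.04, 0.30, 0.02, 0.08]

# Fisher's method: sums -2*log(p), so a few strong signals can drive
# a small combined p-value even if other studies are null.
stat_f, p_fisher = stats.combine_pvalues(p_values, method="fisher")

# Stouffer's method: averages z-scores, behaving more like pooled evidence.
stat_s, p_stouffer = stats.combine_pvalues(p_values, method="stouffer")

print(p_fisher, p_stouffer)
```

Either combiner would then be applied to every gene, with the resulting combined p-values fed into a false discovery rate procedure.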
How to Compare Parametric and Nonparametric Person-Fit Statistics Using Real Data
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
Person-fit assessment (PFA) is concerned with uncovering atypical test performance as reflected in the pattern of scores on individual items on a test. Existing person-fit statistics (PFSs) include both parametric and nonparametric statistics. Comparison of PFSs has been a popular research topic in PFA, but almost all comparisons have employed…
Intracalibration of particle detectors on a three-axis stabilized geostationary platform
NASA Astrophysics Data System (ADS)
Rowland, W.; Weigel, R. S.
2012-11-01
We describe an algorithm for intracalibration of measurements from plasma or energetic particle detectors on a three-axis stabilized platform. Modeling and forecasting of Earth's radiation belt environment require data from particle instruments, and these data depend on measurements which have an inherent calibration uncertainty. Pre-launch calibration is typically performed, but on-orbit changes in the instrument often necessitate adjustment of calibration parameters to mitigate the effect of these changes on the measurements. On-orbit calibration practices for particle detectors aboard spin-stabilized spacecraft are well established. Three-axis stabilized platforms, however, pose unique challenges even when comparisons are being performed between multiple telescopes measuring the same energy ranges aboard the same satellite. This algorithm identifies time intervals when different telescopes are measuring particles with the same pitch angles. These measurements are used to compute scale factors which can be multiplied by the pre-launch geometric factor to correct for any changes. The approach is first tested using measurements from the GOES-13 MAGED particle detectors over a 5-month period in 2010. We find statistically significant variations, generally on the order of 5% or less. These results do not appear to depend on Poisson statistics or on whether a dead time correction was performed. When applied to data from a 5-month interval in 2011, one telescope shows a 10% shift from the 2010 scale factors. This technique has potential for operational use to help maintain relative calibration between multiple telescopes aboard a single satellite. It should also be extensible to inter-calibration between multiple satellites.
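Once matched-pitch-angle intervals have been identified, the scale-factor step reduces to comparing fluxes that should agree. A toy version with invented fluxes and a deliberately introduced 10% gain drift in one telescope (this is an illustration of the idea, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented fluxes seen by two telescopes at times when they sample
# the same pitch angle, so on average they should agree.
true_flux = rng.lognormal(mean=5.0, sigma=0.3, size=200)
tel_a = rng.poisson(true_flux)            # reference telescope counts
tel_b = rng.poisson(0.9 * true_flux)      # telescope with a 10% gain drift

# Scale factor by which to multiply tel_b's geometric factor so it
# matches tel_a over the interval; the median of ratios is robust
# against occasional outlier intervals.
scale = np.median(tel_a / np.maximum(tel_b, 1))
print(round(scale, 2))
```

The recovered factor should sit near 1/0.9 ≈ 1.11; tracking it over successive intervals is what reveals slow on-orbit drifts like the 10% shift reported above.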
Noufal, Ahammed; George, Antony; Jose, Maji; Khader, Mohasin Abdul; Jayapalan, Cheriyanthal Sisupalan
2014-01-01
Tobacco in any form (smoking or chewing), arecanut chewing, and alcohol are considered to be the major extrinsic etiological factors for potentially malignant disorders of the oral cavity and for squamous cell carcinoma, the most common oral malignancy in India. An increase in nuclear diameter (ND) and nucleus-cell ratio (NCR) with a reduction in cell diameter (CD) are early cytological indicators of dysplastic change. The authors sought to identify cytomorphometric changes in ND, CD, and NCR of oral buccal cells in tobacco and arecanut chewers who chewed with or without betel leaf. Participants represented 3 groups. Group I consisted of 30 individuals who chewed tobacco and arecanut with betel leaf (BQT chewers). Group II consisted of 30 individuals who chewed tobacco and arecanut without betel leaf (Gutka chewers). Group III comprised 30 apparently healthy nonabusers. Cytological smears were prepared and stained with modified-Papanicolaou stain. Comparisons between Groups I and II and Groups II and III showed that ND was increased, with P values of .054 and .008, respectively, whereas a comparison of Groups I and III showed no statistical significance. Comparisons between Groups I and II and Groups II and III showed that CD was statistically reduced, with P values of .037 and <.001, respectively, whereas a comparison of Groups I and III showed no statistical significance. Comparisons between Groups I and II and Groups II and III showed that NCR was statistically increased, with P values of <.001, whereas a comparison of Groups I and III showed no statistical significance. CD, ND, and NCR showed statistically significant changes in Group II in comparison with Group I, which could indicate a greater and earlier risk of carcinoma for Gutka chewers than for BQT chewers.
Cellular response of pulp fibroblast to single or multiple photobiomodulation applications
NASA Astrophysics Data System (ADS)
Fernandes, Amanda; Lourenço Neto, Natalino; Teixeira Marques, Nadia Carolina; Lourenço Ribeiro Vitor, Luciana; Tavares Oliveira Prado, Mariel; Cardoso Oliveira, Rodrigo; Moreira Machado, Maria Aparecida Andrade; Marchini Oliveira, Thais
2018-06-01
This study aimed to evaluate in vitro the effects of single or multiple photobiomodulation (PBM) applications on the viability and proliferation of pulp fibroblasts. Pulp fibroblasts from human deciduous teeth were obtained from a biorepository, plated into 96-well plates, and irradiated according to the experimental groups. At 24 h, 48 h, and 72 h after irradiation, cell viability and proliferation were assessed through MTT and Crystal Violet assays, respectively. The intragroup comparison revealed statistically significant differences for 2.5 J/cm² (3×), with viability increasing at 72 h over 48 h (p = 0.027). The intergroup analysis showed greater viability for the multiple PBM applications of 2.5 J/cm² (3×) over the single application of 7.5 J/cm² (1×) at 72 h. The application of 5 J/cm² (1×) exhibited greater proliferation than the applications of 7.5 J/cm² (1×), 2.5 J/cm² (2×) and 2.5 J/cm² (3×). Single and multiple PBM applications demonstrated different stimulatory effects on pulp fibroblasts. The results show that the group submitted to multiple irradiations presented significantly higher cell viability than the groups with single irradiation at 72 h. However, photobiomodulation therapy with single irradiations was more effective for cell proliferation at 24 h.
MultiSETTER: web server for multiple RNA structure comparison.
Čech, Petr; Hoksza, David; Svozil, Daniel
2015-08-12
Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on sequence or on secondary structure motifs.
Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study
Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A
2010-01-01
Objectives To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births. To explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305
Accounting for multiple births in neonatal and perinatal trials: systematic review and case study.
Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A
2010-02-01
To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births and to explore the sensitivity of an actual trial to several analytic approaches to multiples. A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The Nitric Oxide to Prevent Chronic Lung Disease (NO CLD) trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using nonclustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. In the systematic review, most studies did not describe the random assignment of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (P < .01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. Copyright 2010 Mosby, Inc. All rights reserved.
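Of the clustered analyses mentioned, multiple outputation is the easiest to sketch: repeatedly keep one randomly chosen infant per cluster, so the retained observations are independent, estimate on each reduced dataset, and average across resamples. The data below are simulated with a shared within-cluster effect, not drawn from the NO CLD trial:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated trial: 100 mothers; ~20% have twins whose binary outcomes
# are correlated through a shared cluster-level effect.
cluster_ids, outcomes = [], []
for cid in range(100):
    n_infants = 2 if rng.random() < 0.2 else 1
    shared = rng.normal(0, 1)                 # cluster effect shared by siblings
    for _ in range(n_infants):
        cluster_ids.append(cid)
        outcomes.append((shared + rng.normal(0, 1)) > 0)
cluster_ids = np.array(cluster_ids)
outcomes = np.array(outcomes, dtype=float)

# Multiple outputation: resample one infant per cluster, many times.
estimates = []
for _ in range(500):
    keep = [rng.choice(np.flatnonzero(cluster_ids == cid))
            for cid in np.unique(cluster_ids)]
    estimates.append(outcomes[np.array(keep)].mean())

print(round(float(np.mean(estimates)), 3))   # pooled outcome rate
```

In a real analysis each resample would fit the full regression model (and the between-resample variance would enter the standard error); averaging a simple proportion keeps the sketch short.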
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
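The abstract describes robust regression on order statistics (ROS) only at a high level; the authors' S/R package is not reproduced here. A simplified single-detection-limit sketch of the idea, with invented concentrations, using only NumPy and the standard library:

```python
import numpy as np
from statistics import NormalDist

# Hypothetical dataset: six detected concentrations plus four observations
# censored at a single detection limit of 1.0 ("<1.0").
detected = np.array([1.2, 1.5, 2.3, 3.1, 4.8, 7.0])
n_censored = 4
n = detected.size + n_censored

# Weibull plotting positions; censored values occupy the lowest ranks
# (a simplification of the full multi-limit ROS ranking scheme).
pp = np.arange(1, n + 1) / (n + 1)
z = np.array([NormalDist().inv_cdf(p) for p in pp])

# Fit log-concentration against normal quantiles on the detected values only.
slope, intercept = np.polyfit(z[n_censored:], np.log(detected), 1)

# Impute the censored observations from the fitted line, then summarize.
imputed = np.exp(intercept + slope * z[:n_censored])
robust_mean = float(np.mean(np.concatenate([imputed, detected])))
```

The imputed values fall below the detection limit because the fitted lognormal line is evaluated at the lowest plotting positions, which is what makes ROS summary statistics "robust" relative to substituting a constant for every "less than".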
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data in cases where analysis of variance followed by multiple comparisons, or adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
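The workflow the program implements (an omnibus Kruskal-Wallis test, then pairwise comparisons with a Bonferroni adjustment) can be sketched in Python rather than BASIC; the group data here are invented:

```python
from itertools import combinations

from scipy import stats

# Hypothetical measurements from three independent groups.
groups = {
    "control": [4.1, 5.0, 4.7, 5.3, 4.9],
    "dose_1":  [5.8, 6.1, 5.5, 6.4, 6.0],
    "dose_2":  [7.2, 6.9, 7.8, 7.4, 7.1],
}

# Omnibus Kruskal-Wallis test across all groups.
H, p_omnibus = stats.kruskal(*groups.values())

# Pairwise rank tests, Bonferroni-adjusted by the number of comparisons.
pairs = list(combinations(groups, 2))
adj = {}
for a, b in pairs:
    _, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    adj[(a, b)] = min(1.0, p * len(pairs))
```

Running the pairwise tests only after a significant omnibus result, and multiplying each pairwise p-value by the number of comparisons, is exactly the discipline the abstract contrasts with unadjusted multiple t tests.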
An adaptive two-stage dose-response design method for establishing proof of concept.
Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R
2013-01-01
We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
Multiple scaling behaviour and nonlinear traits in music scores
Larralde, Hernán; Martínez-Mekler, Gustavo; Müller, Markus
2017-01-01
We present a statistical analysis of music scores from different composers using detrended fluctuation analysis (DFA). We find different fluctuation profiles that correspond to distinct autocorrelation structures of the musical pieces. Further, we reveal evidence for the presence of nonlinear autocorrelations by estimating the DFA of the magnitude series, a result validated by a corresponding study of appropriate surrogate data. The amount and the character of nonlinear correlations vary from one composer to another. Finally, we performed a simple experiment in order to evaluate the pleasantness of the musical surrogate pieces in comparison with the original music and find that nonlinear correlations could play an important role in the aesthetic perception of a musical piece. PMID:29308256
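Detrended fluctuation analysis itself is compact enough to sketch. This minimal first-order DFA is run on synthetic white noise (not music data) to illustrate the fluctuation exponent the authors estimate; uncorrelated noise should give an exponent near 0.5:

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: return the scaling exponent alpha from a log-log fit
    of the detrended fluctuation function against window size."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        sq = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return float(alpha)

rng = np.random.default_rng(1)
alpha_white = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
```

The magnitude-series analysis mentioned in the abstract simply applies the same routine to `np.abs(np.diff(x))`, where nonlinear correlations leave a signature that linear surrogates destroy.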
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo)random number generators (RNGs) applied to a practical application: ground-state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, with less computational cost to generate the sequence. We also tested the notoriously flawed linear congruential generator (LCG) RANDU for comparison. (c) 2010 Wiley Periodicals, Inc.
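RANDU's notoriety comes from a provable lattice defect: because its multiplier is 2^16 + 3, we have 65539^2 ≡ 6·65539 − 9 (mod 2^31), so every three consecutive outputs satisfy x_{n+2} = 6·x_{n+1} − 9·x_n (mod 2^31), confining all triples to a handful of planes in 3-D. A quick check:

```python
def randu(seed, n):
    """RANDU linear congruential generator: x_{n+1} = 65539 * x_n mod 2^31."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(1, 1000)
# Every consecutive triple satisfies x_{n+2} - 6*x_{n+1} + 9*x_n = 0 (mod 2^31).
defect = [(xs[i + 2] - 6 * xs[i + 1] + 9 * xs[i]) % 2**31
          for i in range(len(xs) - 2)]
```

This exact three-term recurrence is why RANDU serves as the cautionary baseline when benchmarking better generators such as MRG8, MT19937, and RANLUX.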
NASA Astrophysics Data System (ADS)
Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus
2016-04-01
We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
Multiple scaling behaviour and nonlinear traits in music scores
NASA Astrophysics Data System (ADS)
González-Espinoza, Alfredo; Larralde, Hernán; Martínez-Mekler, Gustavo; Müller, Markus
2017-12-01
We present a statistical analysis of music scores from different composers using detrended fluctuation analysis (DFA). We find different fluctuation profiles that correspond to distinct autocorrelation structures of the musical pieces. Further, we reveal evidence for the presence of nonlinear autocorrelations by estimating the DFA of the magnitude series, a result validated by a corresponding study of appropriate surrogate data. The amount and the character of nonlinear correlations vary from one composer to another. Finally, we performed a simple experiment in order to evaluate the pleasantness of the musical surrogate pieces in comparison with the original music and find that nonlinear correlations could play an important role in the aesthetic perception of a musical piece.
Benchmarking and performance analysis of the CM-2. [SIMD computer
NASA Technical Reports Server (NTRS)
Myers, David W.; Adams, George B., II
1988-01-01
A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as to gain insight into what performance criteria are needed when evaluating parallel processing machines.
Contribution of botanical origin and sugar composition of honeys on the crystallization phenomenon.
Escuredo, Olga; Dobre, Irina; Fernández-González, María; Seijo, M Carmen
2014-04-15
The present work provides information regarding the statistical relationships among the palynological characteristics, sugars (fructose, glucose, sucrose, melezitose and maltose), moisture content and sugar ratios (F+G, F/G and G/W) of 136 different honey types (including bramble, chestnut, eucalyptus, heather, acacia, lime, rape, sunflower and honeydew). Results of the statistical analyses (multiple comparison Bonferroni test, Spearman rank correlations and principal components) revealed the strong influence of the botanical origin on the sugar ratios (F+G, F/G and G/W). Brassica napus and Helianthus annuus pollen were the variables situated near the F+G and G/W ratios, while Castanea sativa, Rubus and Eucalyptus pollen were located further away, as shown in the principal component analysis. The F/G ratio of sunflower, rape and lime honeys was lower than that found for the chestnut, eucalyptus, heather, acacia and honeydew honeys (>1.4). A lower F/G ratio and lower water content were associated with faster crystallization of the honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
Seven Pervasive Statistical Flaws in Cognitive Training Interventions
Moreau, David; Kirk, Ian J.; Waldie, Karen E.
2016-01-01
The prospect of enhancing cognition is undoubtedly among the most exciting research questions currently bridging psychology, neuroscience, and evidence-based medicine. Yet, convincing claims in this line of work stem from designs that are prone to several shortcomings, thus threatening the credibility of training-induced cognitive enhancement. Here, we present seven pervasive statistical flaws in intervention designs: (i) lack of power; (ii) sampling error; (iii) continuous variable splits; (iv) erroneous interpretations of correlated gain scores; (v) single transfer assessments; (vi) multiple comparisons; and (vii) publication bias. Each flaw is illustrated with a Monte Carlo simulation to present its underlying mechanisms, gauge its magnitude, and discuss potential remedies. Although not restricted to training studies, these flaws are typically exacerbated in such designs, due to ubiquitous practices in data collection or data analysis. The article reviews these practices, so as to avoid common pitfalls when designing or analyzing an intervention. More generally, it is also intended as a reference for anyone interested in evaluating claims of cognitive enhancement. PMID:27148010
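Flaw (vi), multiple comparisons, is easy to reproduce with a small Monte Carlo in the spirit of the article's simulations (the parameters here are arbitrary): testing seven null-true outcomes at α = 0.05 inflates the chance of at least one false positive to roughly 1 − 0.95^7 ≈ 0.30.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_tests = 2000, 7
crit = 1.96                        # two-sided z critical value for alpha = 0.05

hits = 0
for _ in range(n_sims):
    # Seven independent, null-true outcome measures per simulated study.
    z = rng.standard_normal(n_tests)
    if np.any(np.abs(z) > crit):   # "significant" on at least one outcome?
        hits += 1

familywise_rate = hits / n_sims    # expect roughly 1 - 0.95**7, about 0.30
```

The simulated familywise error rate lands far above the nominal 5%, which is the core of the argument against reporting whichever of several transfer measures happens to reach significance.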
Kistemann, Thomas; Zimmer, Sonja; Vågsholm, Ivar; Andersson, Yvonne
2004-01-01
This article describes the spatial and temporal distribution of verotoxin-producing Escherichia coli among humans (EHEC) and cattle (VTEC) in Sweden, in order to evaluate relationships between the incidence of EHEC in humans, prevalence of VTEC O157 in livestock and agricultural structure by an ecological study. The spatial patterns of the distribution of human infections were described and compared with spatial patterns of occurrence in cattle, using a Geographic Information System (GIS). The findings implicate a concentration of human infection and cattle prevalence in the southwest of Sweden. The use of probability mapping confirmed unusual patterns of infection rates. The comparison of human and cattle infection indicated a spatial and statistical association. The correlation between variables of the agricultural structure and human EHEC incidence was high, indicating a significant statistical association of cattle and farm density with human infection. The explained variation of a multiple linear regression model was 0.56. PMID:15188718
Development of a statistical oil spill model for risk assessment.
Guo, Weijun
2017-11-01
To gain a better understanding of the impacts from potential risk sources, we developed an oil spill model using a probabilistic method, which simulates numerous oil spill trajectories under varying environmental conditions. The statistical results were quantified from hypothetical oil spills under multiple scenarios, including the probability of an area being affected, mean oil slick thickness, and the duration of water-surface exposure to floating oil. These three sub-indices, together with marine area vulnerability, are merged to compute a composite index characterizing the spatial distribution of risk. The integral of the index can be used to identify the overall risk from an emission source. The developed model has been successfully applied to the comparison and selection of an appropriate oil port construction location adjacent to a marine protected area for Phoca largha in China. The results highlight the importance of screening candidate sites before construction, since risk estimates for two adjacent potential sources may differ significantly depending on hydrodynamic conditions and eco-environmental sensitivity. Copyright © 2017. Published by Elsevier Ltd.
Calibrating genomic and allelic coverage bias in single-cell sequencing.
Zhang, Cheng-Zhong; Adalsteinsson, Viktor A; Francis, Joshua; Cornils, Hauke; Jung, Joonil; Maire, Cecile; Ligon, Keith L; Meyerson, Matthew; Love, J Christopher
2015-04-16
Artifacts introduced in whole-genome amplification (WGA) make it difficult to derive accurate genomic information from single-cell genomes and require different analytical strategies from bulk genome analysis. Here, we describe statistical methods to quantitatively assess the amplification bias resulting from whole-genome amplification of single-cell genomic DNA. Analysis of single-cell DNA libraries generated by different technologies revealed universal features of the genome coverage bias predominantly generated at the amplicon level (1-10 kb). The magnitude of coverage bias can be accurately calibrated from low-pass sequencing (∼0.1 × ) to predict the depth-of-coverage yield of single-cell DNA libraries sequenced at arbitrary depths. We further provide a benchmark comparison of single-cell libraries generated by multi-strand displacement amplification (MDA) and multiple annealing and looping-based amplification cycles (MALBAC). Finally, we develop statistical models to calibrate allelic bias in single-cell whole-genome amplification and demonstrate a census-based strategy for efficient and accurate variant detection from low-input biopsy samples.
Calibrating genomic and allelic coverage bias in single-cell sequencing
Francis, Joshua; Cornils, Hauke; Jung, Joonil; Maire, Cecile; Ligon, Keith L.; Meyerson, Matthew; Love, J. Christopher
2016-01-01
Artifacts introduced in whole-genome amplification (WGA) make it difficult to derive accurate genomic information from single-cell genomes and require different analytical strategies from bulk genome analysis. Here, we describe statistical methods to quantitatively assess the amplification bias resulting from whole-genome amplification of single-cell genomic DNA. Analysis of single-cell DNA libraries generated by different technologies revealed universal features of the genome coverage bias predominantly generated at the amplicon level (1–10 kb). The magnitude of coverage bias can be accurately calibrated from low-pass sequencing (~0.1 ×) to predict the depth-of-coverage yield of single-cell DNA libraries sequenced at arbitrary depths. We further provide a benchmark comparison of single-cell libraries generated by multi-strand displacement amplification (MDA) and multiple annealing and looping-based amplification cycles (MALBAC). Finally, we develop statistical models to calibrate allelic bias in single-cell whole-genome amplification and demonstrate a census-based strategy for efficient and accurate variant detection from low-input biopsy samples. PMID:25879913
Chen, Qiong; Yang, Hailan; Feng, Yongliang; Zhang, Ping; Wu, Weiwei; Li, Shuzhen; Thompson, Brian; Wang, Xin; Peng, Tingting; Wang, Fang; Xie, Bingjie; Guo, Pengge; Li, Mei; Wang, Ying; Zhao, Nan; Wang, Suping; Zhang, Yawei
2018-03-01
Gestational diabetes mellitus is a growing public health concern due to its large disease burden; however, the underlying pathophysiology remains unclear. Therefore, we examined the relationship between 107 single-nucleotide polymorphisms in insulin signalling pathway genes and gestational diabetes mellitus risk using a nested case-control study. The SOS1 rs7598922 GA and AA genotypes were statistically significantly associated with reduced gestational diabetes mellitus risk (p-trend = 0.0006) compared with the GG genotype. At the gene level, SOS1 was statistically significantly associated with gestational diabetes mellitus risk after adjusting for multiple comparisons. Moreover, the AGGA and GGGG haplotypes in the SOS1 gene were associated with reduced risk of gestational diabetes mellitus. Our study provides evidence for an association between the SOS1 gene and risk of gestational diabetes mellitus; however, its role in the pathogenesis of gestational diabetes mellitus will need to be verified by further studies.
New powerful statistics for alignment-free sequence comparison under a pattern transfer model.
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S; Sun, Fengzhu
2011-09-07
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2* and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2* and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of a certain length. We show that the new statistics are much more powerful than the corresponding original statistics, and that their power tends to 1 as the sequence length tends to infinity under the pattern transfer model. Copyright © 2011 Elsevier Ltd. All rights reserved.
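The basic D2 statistic underlying these variants is simply the inner product of the two sequences' k-tuple count vectors; a minimal sketch (the local-pair summation that defines the new statistics is not implemented here):

```python
from collections import Counter

def d2(seq_a, seq_b, k=3):
    """D2: the number of matching k-tuple pairs between two sequences,
    i.e. the inner product of their k-word count vectors."""
    ca = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
    cb = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

score = d2("ACGTACGTGG", "ACGTCCGTAA", k=3)
```

The centered and standardized variants replace the raw counts with count deviations from a background word model, which is what distinguishes D2* and D2s from the plain inner product above.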
New Powerful Statistics for Alignment-free Sequence Comparison Under a Pattern Transfer Model
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S.; Sun, Fengzhu
2011-01-01
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2∗ and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2∗ and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. PMID:21723298
Paulsson, Anna K.; Holmes, Jordan A.; Peiffer, Ann M.; Miller, Lance D.; Liu, Wennuan; Xu, Jianfeng; Hinson, William H.; Lesser, Glenn J.; Laxton, Adrian W.; Tatter, Stephen B.; Debinski, Waldemar
2014-01-01
We investigate the differences in molecular signature and clinical outcomes between multiple lesion glioblastoma (GBM) and single focus GBM in the modern treatment era. Between August 2000 and May 2010, 161 patients with GBM were treated with modern radiotherapy techniques. Of this group, 33 were considered to have multiple lesion GBM (25 multifocal and 8 multicentric). Patterns of failure, time to progression and overall survival were compared based on whether the tumor was considered a single focus or multiple lesion GBM. Genomic groupings and methylation status were also investigated as a possible predictor of multifocality in a cohort of 41 patients with available tissue for analysis. There was no statistically significant difference in overall survival (p < 0.3) between the multiple lesion tumors (8.2 months) and single focus GBM (11 months). Progression free survival was superior in the single focus tumors (7.1 months) as compared to multi-focal (5.6 months, p = 0.02). For patients with single focus, multifocal and multicentric GBM, 81, 76 and 88 % of treatment failures occurred in the 60 Gy volume (p < 0.5), while 54, 72, and 38 % of treatment failures occurred in the 46 Gy volume (p < 0.4). Out of field failures were rare in both single focus and multiple foci GBM (7 vs 3 %). Genomic groupings and methylation status were not found to predict for multifocality. Patterns of failure, survival and genomic signatures for multiple lesion GBM do not appreciably differ when compared to single focus tumors. PMID:24990827
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2013-01-01
Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis.
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2014-01-01
Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis. PMID:23702553
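The logical-combination idea described above is straightforward to sketch: threshold each index, combine the binary calls with AND or OR, and read off sensitivity and specificity. The data below are simulated, not from the AD studies:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
labels = np.r_[np.ones(n), np.zeros(n)].astype(bool)   # 500 cases, 500 controls

# Two hypothetical indices, each modestly shifted upward in the positive class.
idx1 = np.r_[rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)]
idx2 = np.r_[rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)]

def sens_spec(pred, labels):
    """Sensitivity = fraction of cases flagged; specificity = fraction of
    controls not flagged."""
    return float(np.mean(pred[labels])), float(np.mean(~pred[~labels]))

t1 = t2 = 0.5
sens_and, spec_and = sens_spec((idx1 > t1) & (idx2 > t2), labels)  # "AND" rule
sens_or, spec_or = sens_spec((idx1 > t1) | (idx2 > t2), labels)    # "OR" rule
```

Because every AND-positive is also OR-positive, the AND rule necessarily trades sensitivity for specificity and the OR rule does the opposite; sweeping the thresholds traces out the multiV-ROC surface.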
Eikenberry, Barbara C. Scudder; Bell, Amanda H.; Olds, Hayley T.; Burns, Daniel J.
2016-07-25
Recent data are lacking to assess whether impairments still exist at four of Wisconsin’s largest Lake Michigan harbors that were designated as Areas of Concern (AOCs) in the late 1980s due to sediment contamination and multiple Beneficial Use Impairments (BUIs), such as those affecting benthos (macroinvertebrates) and plankton (zooplankton and phytoplankton) communities. During three seasonal sampling events (“seasons”) in May through August 2012, the U.S. Geological Survey collected sediment benthos and water plankton at the four AOCs as well as six less-degraded non-AOCs along the western Lake Michigan shoreline to assess whether AOC communities were degraded in comparison to non-AOC communities. The four AOCs are the Lower Menominee River, the Lower Green Bay and Fox River, the Sheboygan River, and the Milwaukee Estuary. Due to their size and complexity, multiple locations or “subsites” were sampled within the Lower Green Bay and Fox River AOC (Lower Green Bay, the Fox River near Allouez, and the Fox River near De Pere) and within the Milwaukee Estuary AOC (the Milwaukee River, the Menomonee River, and the Milwaukee Harbor) and single locations were sampled at the other AOCs and non-AOCs. The six non-AOCs are the Escanaba River in Michigan, and the Oconto River, Ahnapee River, Kewaunee River, Manitowoc River, and Root River in Wisconsin. Benthos samples were collected by using Hester-Dendy artificial substrates deployed for 30 days and by using a dredge sampler; zooplankton were collected by net and phytoplankton by whole-water sampler. Except for the Lower Green Bay and Milwaukee Harbor locations, communities at each AOC were compared to all non-AOCs as a group and to paired non-AOCs using taxa relative abundances and metrics, including richness, diversity, and an Index of Biotic Integrity (IBI, for Hester-Dendy samples only). Benthos samples collected during one or more seasons were rated as degraded for at least one metric at all AOCs. 
In the Milwaukee Estuary, benthos richness was lower in the Milwaukee River subsite spring and summer samples and in the Menomonee River subsite spring sample relative to the paired non-AOCs. Benthos diversity and IBIs at the Menomonee River subsite and IBIs at the Milwaukee River subsite and Sheboygan River were significantly lower than at all non-AOCs as a group across all seasons and therefore were rated as degraded. In addition, IBIs at the Lower Menominee River were significantly lower than those at the paired non-AOCs during all seasons and were therefore rated degraded. Benthos at both Fox River subsites and the Milwaukee River subsite were significantly different from their paired non-AOCs during all three seasons, based on a comparison of the relative abundances of taxa using multivariate testing. Metrics for plankton at AOCs were not significantly lower than those at the paired or group non-AOCs during all seasons; however, zooplankton richness in spring at the Sheboygan River and in fall at the Menomonee River subsite was rated as degraded in comparison to paired non-AOCs. Also, zooplankton richness in fall at the Fox River near Allouez subsite and in spring at the Milwaukee River subsite was rated degraded overall because values were lower than at all non-AOCs as a group and lower than at the paired non-AOCs. Zooplankton diversity in fall at the Fox River near Allouez subsite and the Lower Menominee River was rated degraded in comparison to paired non-AOC comparison sites. Zooplankton communities at the Fox River near Allouez subsite were significantly different from the paired non-AOCs when multivariate comparisons were made without rotifers other than A. priodonta. Overall, benthos and zooplankton BUIs remained at the AOCs in 2012 but no AOCs with a phytoplankton BUI were rated degraded in comparison to non-AOCs. 
The use of multiple ecological measures (structural and functional) and multiple statistical analyses (biological metrics and multivariate statistics) provided assessments that defined the 2012 status of communities relative to less-impaired non-AOCs in the Great Lakes area.
Kinematic and Hydrometeor Data Products from Scanning Radars during MC3E
Matthews, Alyssa; Dolan, Brenda; Rutledge, Steven
2016-02-29
Recently the Radar Meteorology Group at Colorado State University has completed major case studies of some top cases from MC3E, including 25 April, 20 May, and 23 May 2011. A discussion of the analysis methods as well as radar quality-control methods is included. For each case, a brief overview is first provided. Then, multiple-Doppler analyses (using available X-SAPR and C-SAPR data) are presented, including statistics on vertical air motions subdivided by convective and stratiform precipitation. Mean profiles and CFADs of vertical motion are included to facilitate comparison with ASR model simulations. Retrieved vertical motion has also been verified with vertically pointing profiler data. Finally, for each case, hydrometeor types derived from polarimetric radar observations are included. The latter can be used to provide comparisons to model-generated hydrometeor fields. Instructions for accessing all the data fields are also included. The web page can be found at: http://radarmet.atmos.colostate.edu/mc3e/research/
Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies
Liu, Zhonghua; Lin, Xihong
2017-01-01
We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
Multiple phenotype association tests using summary statistics in genome-wide association studies.
Liu, Zhonghua; Lin, Xihong
2018-03-01
We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.
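As a hedged illustration of testing multiple phenotypes from summary statistics: the sketch below uses a plain Wald-type omnibus statistic z'R⁻¹z, a simpler stand-in for the paper's common-mean and variance-component tests, with invented z-scores and an invented correlation matrix (in practice R is estimated from the summary statistics themselves):

```python
import numpy as np
from scipy import stats

# Hypothetical per-phenotype GWAS z-scores for one variant across four traits.
z = np.array([2.1, 1.8, 2.4, 0.5])

# Hypothetical between-phenotype correlation matrix.
R = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])

# Under the global null, z ~ N(0, R), so the quadratic form z' R^{-1} z
# follows a chi-square distribution with df = number of phenotypes.
T = float(z @ np.linalg.solve(R, z))
p_joint = float(stats.chi2.sf(T, df=len(z)))
```

Like the paper's tests, this requires only the per-trait z-scores and the between-phenotype correlation, never individual-level genotype data.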
Piccioni, Chiara; Di Carlo, Stefano; Capogreco, Mario
2017-01-01
The aim of this study was to investigate a specific airborne particle abrasion pretreatment on dentin and its effects on the microtensile bond strengths of four commercial total-etch adhesives. Midcoronal occlusal dentin of extracted human molars was used. Teeth were randomly assigned to 4 groups according to the adhesive system used: OptiBond FL (FL), OptiBond Solo Plus (SO), Prime & Bond (PB), and Riva Bond LC (RB). Specimens from each group were further divided into two subgroups: control specimens were treated with adhesive procedures; abraded specimens were pretreated with airborne particle abrasion using 50 μm Al2O3 before adhesion. After bonding procedures, composite crowns were incrementally built up. Specimens were sectioned perpendicular to the adhesive interface to produce multiple beams, which were tested under tension until failure. Data were statistically analysed. Failure mode analysis was performed. Overall comparison showed a significant increase in bond strength (p < 0.001) between abraded and non-abraded specimens, independently of brand. Intrabrand comparison showed a statistical increase when abraded specimens were tested compared to non-abraded ones, with the exception of PB, which did not show such a difference. Distribution of failure mode was relatively uniform among all subgroups. Surface treatment by airborne particle abrasion with Al2O3 particles can increase the bond strength of total-etch adhesives. PMID:29392128
ERIC Educational Resources Information Center
Downing, Steven M.; Maatsch, Jack L.
To test the effect of clinically relevant multiple-choice item content on the validity of statistical discriminations of physicians' clinical competence, data were collected from a field test of the Emergency Medicine Examination, test items for the certification of specialists in emergency medicine. Two 91-item multiple-choice subscales were…
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator, with a significantly more expensive optical encoder used for comparison.
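The PCA step used here as a pseudo-linear filter can be sketched with a plain SVD. This is an illustrative reduction under assumed inputs (the sensor matrix shape and component count are hypothetical, not the paper's configuration); the reduced scores would then feed the downstream ANN regressor.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project multi-sensor readings onto their leading principal components.

    X : (n_samples, n_sensors) concurrent field measurements
    Returns an (n_samples, n_components) score matrix capturing the
    directions of greatest variance in the sensor output space.
    """
    Xc = X - X.mean(axis=0)             # centre each sensor channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T     # scores on the top components
```

Reducing, say, 9 sensor channels to a handful of components keeps the field-position mapping computationally cheap while discarding mostly noise.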
Sensing multiple ligands with single receptor
NASA Astrophysics Data System (ADS)
Singh, Vijay; Nemenman, Ilya
2015-03-01
Cells use surface receptors to measure concentrations of external ligand molecules. Limits on the accuracy of such sensing are well-known for the scenario where concentration of one molecular species is being determined by one receptor [Endres]. However, in more realistic scenarios, a cognate (high-affinity) ligand competes with many non-cognate (low-affinity) ligands for binding to the receptor. We analyze effects of this competition on the accuracy of sensing. We show that maximum-likelihood statistical inference allows determination of concentrations of multiple ligands, cognate and non-cognate, by the same receptor concurrently. While it is unclear if traditional biochemical circuitry downstream of the receptor can implement such inference exactly, we show that an approximate inference can be performed by coupling the receptor to a kinetic proofreading cascade. We characterize the accuracy of such kinetic proofreading sensing in comparison to the exact maximum-likelihood approach. We acknowledge the support from the James S. McDonnell Foundation and the Human Frontier Science Program.
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To address the problems that chaotic behavior degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by Devaney's definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multi-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are used to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
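For context, the classical (unblocked) Arnold cat map permutation that this paper generalizes can be sketched as follows. This is an illustration of the standard map only, not the paper's block Cat map; the periodicity and small key space of this classical version are exactly the weaknesses the proposed scheme targets.

```python
import numpy as np

def cat_map_permute(img, iterations=1):
    """Permute pixels of a square image with the Arnold cat map.

    The classical map sends (x, y) -> (x + y, x + 2y) mod N. It is a
    bijection on the NxN pixel grid, so it shuffles positions losslessly,
    but it is periodic, so repeated application eventually restores the
    original image.
    """
    N = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        nx, ny = (x + y) % N, (x + 2 * y) % N
        new = np.empty_like(out)
        new[nx, ny] = out[x, y]   # bijective scatter: every target hit once
        out = new
    return out
```

A permutation stage like this only relocates pixel values; a substitution stage (keyed by another chaotic map, as in the paper's permutation-substitution design) is still needed to change them.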
Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate
Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.
2015-01-01
Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
Werling, Donna M; Brand, Harrison; An, Joon-Yong; Stone, Matthew R; Zhu, Lingxue; Glessner, Joseph T; Collins, Ryan L; Dong, Shan; Layer, Ryan M; Markenscoff-Papadimitriou, Eirene; Farrell, Andrew; Schwartz, Grace B; Wang, Harold Z; Currall, Benjamin B; Zhao, Xuefang; Dea, Jeanselle; Duhn, Clif; Erdman, Carolyn A; Gilson, Michael C; Yadav, Rachita; Handsaker, Robert E; Kashin, Seva; Klei, Lambertus; Mandell, Jeffrey D; Nowakowski, Tomasz J; Liu, Yuwen; Pochareddy, Sirisha; Smith, Louw; Walker, Michael F; Waterman, Matthew J; He, Xin; Kriegstein, Arnold R; Rubenstein, John L; Sestan, Nenad; McCarroll, Steven A; Neale, Benjamin M; Coon, Hilary; Willsey, A Jeremy; Buxbaum, Joseph D; Daly, Mark J; State, Matthew W; Quinlan, Aaron R; Marth, Gabor T; Roeder, Kathryn; Devlin, Bernie; Talkowski, Michael E; Sanders, Stephan J
2018-05-01
Genomic association studies of common or rare protein-coding variation have established robust statistical approaches to account for multiple testing. Here we present a comparable framework to evaluate rare and de novo noncoding single-nucleotide variants, insertion/deletions, and all classes of structural variation from whole-genome sequencing (WGS). Integrating genomic annotations at the level of nucleotides, genes, and regulatory regions, we define 51,801 annotation categories. Analyses of 519 autism spectrum disorder families did not identify association with any categories after correction for 4,123 effective tests. Without appropriate correction, biologically plausible associations are observed in both cases and controls. Despite excluding previously identified gene-disrupting mutations, coding regions still exhibited the strongest associations. Thus, in autism, the contribution of de novo noncoding variation is probably modest in comparison to that of de novo coding variants. Robust results from future WGS studies will require large cohorts and comprehensive analytical strategies that consider the substantial multiple-testing burden.
Uzoka, Faith-Michael Emeka; Obot, Okure; Barker, Ken; Osuji, J
2011-07-01
The task of medical diagnosis is a complex one, considering the level of vagueness and uncertainty involved, especially when the disease has multiple symptoms. A number of researchers have utilized the fuzzy analytic hierarchy process (fuzzy-AHP) methodology to handle imprecise data in medical diagnosis and therapy. Fuzzy logic is able to handle vagueness and unstructuredness in decision making, while the AHP has the ability to carry out pairwise comparison of decision elements in order to determine their importance in the decision process. This study presents a case comparison of the fuzzy and AHP methods in the development of a medical diagnosis system involving basic symptom elicitation and analysis. The results of the study indicate a non-statistically-significant relative superiority of the fuzzy technology over the AHP technology. Data collected from 30 malaria patients were diagnosed using AHP and fuzzy logic independently of one another. The results were compared and found to covary strongly; the fuzzy logic diagnosis results also covaried somewhat more strongly with the conventional diagnosis results than did those of AHP. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Ghivari, Sheetal B; Kubasad, Girish C; Deshpande, Preethi
2012-01-01
Aim: To evaluate the bacteria extruded apically during root canal preparation using two hand and rotary instrumentation techniques. Materials and Methods: Eighty freshly extracted mandibular premolars were mounted in a bacteria collection apparatus. Root canals were contaminated with a pure culture of Enterococcus faecalis (ATCC 29212) and dried at 37°C for 24 h. Extruded bacteria were collected and incubated in brain heart infusion agar for 24 h at 36°C, and the colony-forming units (CFU) were counted. Statistical Analysis: Mean numbers of colony-forming units were compared by one-way ANOVA, with comparisons between groups made by the Dunnett D multiple-comparison test. Results: The step-back technique extruded the highest number of bacteria in comparison to the other hand and rotary Ni-Ti systems. Conclusion: Within the limitations of this study, all hand and rotary instrumentation techniques extruded bacteria; the step-back technique extruded the most bacteria and the K-3 system the least. Further in vivo research in this direction could provide more insight into the associated biologic factors and focus on the bacterial species that play a major role in post-instrumentation flare-ups. PMID:22368332
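The statistical workflow described (an omnibus one-way ANOVA before group-wise multiple comparisons) can be sketched in a few lines. The CFU values below are invented for illustration and are not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical CFU counts for three instrumentation techniques.
# The omnibus ANOVA asks whether any group mean differs at all.
step_back = [120, 135, 128, 140, 132]
rotary_a  = [ 80,  75,  90,  85,  78]
k3        = [ 60,  55,  70,  65,  58]

F, p = f_oneway(step_back, rotary_a, k3)
# Only if this p-value is small does a post-hoc multiple-comparison test
# (e.g. Dunnett's, against a chosen reference group) become meaningful.
```

Running the post-hoc test only after a significant omnibus result is what keeps the family-wise error rate of the pairwise comparisons under control.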
Demonstrating microbial co-occurrence pattern analyses within and between ecosystems
Williams, Ryan J.; Howe, Adina; Hofmockel, Kirsten S.
2014-01-01
Co-occurrence patterns are used in ecology to explore interactions between organisms and environmental effects on coexistence within biological communities. Analysis of co-occurrence patterns among microbial communities has ranged from simple pairwise comparisons between all community members to direct hypothesis testing between focal species. However, co-occurrence patterns are rarely studied across multiple ecosystems or multiple scales of biological organization within the same study. Here we outline an approach to produce co-occurrence analyses that are focused at three different scales: co-occurrence patterns between ecosystems at the community scale, modules of co-occurring microorganisms within communities, and co-occurring pairs within modules that are nested within microbial communities. To demonstrate our co-occurrence analysis approach, we gathered publicly available 16S rRNA amplicon datasets to compare and contrast microbial co-occurrence at different taxonomic levels across different ecosystems. We found differences in community composition and co-occurrence that reflect environmental filtering at the community scale and consistent pairwise occurrences that may be used to infer ecological traits about poorly understood microbial taxa. However, we also found that conclusions derived from applying network statistics to microbial relationships can vary depending on the taxonomic level chosen and criteria used to build co-occurrence networks. We present our statistical analysis and code for public use in analysis of co-occurrence patterns across microbial communities. PMID:25101065
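A minimal sketch of the pairwise co-occurrence networks described above. The correlation and p-value thresholds here are arbitrary placeholders, which is precisely the paper's caveat: network statistics change with the criteria used to build the network. Real pipelines would also correct the many pairwise p-values for multiple testing.

```python
import numpy as np
from scipy.stats import spearmanr

def cooccurrence_edges(abundance, rho_min=0.6, p_max=0.05):
    """Build co-occurrence edges from a samples-x-taxa abundance table.

    An edge joins two taxa whose abundances are strongly, positively
    rank-correlated across samples.
    Returns a list of (taxon_i, taxon_j, rho) tuples.
    """
    n_taxa = abundance.shape[1]
    edges = []
    for i in range(n_taxa):
        for j in range(i + 1, n_taxa):
            rho, p = spearmanr(abundance[:, i], abundance[:, j])
            if rho >= rho_min and p <= p_max:
                edges.append((i, j, rho))
    return edges
```

Modules of co-occurring taxa, and community-scale comparisons between ecosystems, would then be derived from the graph these edges define.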
Using Comparison of Multiple Strategies in the Mathematics Classroom: Lessons Learned and Next Steps
ERIC Educational Resources Information Center
Durkin, Kelley; Star, Jon R.; Rittle-Johnson, Bethany
2017-01-01
Comparison is a fundamental cognitive process that can support learning in a variety of domains, including mathematics. The current paper aims to summarize empirical findings that support recommendations on using comparison of multiple strategies in mathematics classrooms. We report the results of our classroom-based research on using comparison…
Corella, Dolores; Sorlí, Jose V; González, José I; Ortega, Carolina; Fitó, Montserrat; Bulló, Monica; Martínez-González, Miguel Angel; Ros, Emilio; Arós, Fernando; Lapetra, José; Gómez-Gracia, Enrique; Serra-Majem, Lluís; Ruiz-Gutierrez, Valentina; Fiol, Miquel; Coltell, Oscar; Vinyoles, Ernest; Pintó, Xavier; Martí, Amelia; Saiz, Carmen; Ordovás, José M; Estruch, Ramón
2014-01-06
The Fas apoptotic pathway has been implicated in type 2 diabetes and cardiovascular disease. Although a polymorphism (rs7138803; G > A) near the Fas apoptotic inhibitory molecule 2 (FAIM2) locus has been related to obesity, its association with other cardiovascular risk factors and disease remains uncertain. We analyzed the association between the FAIM2-rs7138803 polymorphism and obesity, blood pressure and heart rate in 7,161 participants (48.3% with type 2 diabetes) in the PREDIMED study at baseline. We also explored gene-diet interactions with adherence to the Mediterranean diet (MedDiet) and examined the effects of the polymorphism on cardiovascular disease incidence by diabetes status after a median 4.8-year dietary intervention (MedDiet versus control group) follow-up. We replicated the association between the FAIM2-rs7138803 polymorphism and greater obesity risk (OR: 1.08; 95% CI: 1.01-1.16; P = 0.011; per A allele). Moreover, we detected novel associations of this polymorphism with higher diastolic blood pressure (DBP) and heart rate at baseline (B = 1.07; 95% CI: 0.97-1.28 bpm in AA versus G-carriers for the whole population), which remained statistically significant even after adjustment for body mass index (P = 0.012) and correction for multiple comparisons. This association was greater, and statistically significant, in type 2 diabetic subjects (B = 1.44; 95% CI: 0.23-2.56 bpm; P = 0.010 for AA versus G-carriers). Likewise, these findings were also observed longitudinally over the 5-year follow-up. Nevertheless, we found no statistically significant gene-diet interactions with MedDiet for this trait. On analyzing myocardial infarction risk, we detected a nominally significant (P = 0.041) association in type 2 diabetic subjects (HR: 1.86; 95% CI: 1.03-3.37 for AA versus G-carriers), although this association did not remain statistically significant following correction for multiple comparisons.
We confirmed the FAIM2-rs7138803 relationship with obesity and identified novel and consistent associations with heart rate in particular in type 2 diabetic subjects. Furthermore, our results suggest a possible association of this polymorphism with higher myocardial infarction risk in type-2 diabetic subjects, although this result needs to be replicated as it could represent a false positive.
Lauer, K; Firnhaber, W
1984-10-01
In order to discover possible exogenous variables associated with a higher multiple sclerosis risk, the distribution of cases with definite and probable multiple sclerosis ascertained in the course of a micro-epidemiologic study in Southern Hesse was evaluated and compared with some environmental factors. The prevalence in 1980, the prevalence of cases with disease-onset within the region according to locality of onset and the rate of native Southern Hesse patients according to childhood residence all showed a similar geographical distribution, with the highest values in the south-eastern, mountainous part of the region. This district has a lower annual mean temperature, more annual snow-days and a higher annual precipitation compared to the remaining area. A statistical comparison revealed no association with industrial or agricultural activities, with a particular type of land use, with cattle, pig- or horse-breeding, or with sanitary or housing standards. On the other hand, a slight association with the soil type could be demonstrated, with higher rates on loam and clay subsoils when compared to predominantly sandy regions. Whether this finding has any significance or not remains to be clarified.
Jones, B T; McMahon, J
1996-01-01
Within social learning theory, positive alcohol expectancies represent motivation to drink and negative expectancies, motivation to restrain. It is also recognized that a subjective evaluation of expectancies ought to moderate their impact, although the evidence for this in social drinkers is problematic. This paper addresses the speculation that the moderating effect will be more evident in clinical populations. This study shows that (i) both expectancy and value reliably, independently and equally predict clients' abstinence survivorship following discharge from a treatment programme (and that this is almost entirely confined to the negative rather than positive terms). When (ii) expectancy evaluations are processed against expectancy through multiplicative composites (i.e. expectancy x value), their predictive power is only equivalent to either expectancy or value on its own. However (iii) when the multiplicative composite is assessed following the statistical guidelines advocated by Evans (1991) (i.e. within the same model as its constituents, expectancy and value) the increase in outcome variance explained by its inclusion is negligible and casts doubt upon its use in alcohol research. This does not appear to apply to value, however, and its possible role in treatment is discussed.
Cruza, Norberto Sotelo; Fierros, Luis E
2006-01-01
The present study was done at the internal medicine service of the Hospital Infantil in the State of Sonora, Mexico. We addressed the question of the use of conceptual schemes and mind maps and their impact on the teaching-learning-evaluation process among medical residents: to analyze the effects of conceptual schemes and mind maps as teaching and evaluation tools and to compare them with multiple-choice exams among pediatric residents. Twenty-two residents (RI, RII, RIII) on service rotation during six months were assessed initially, followed by a lecture on a medical subject. Conceptual schemes and mind maps were then introduced as a teaching-learning-evaluation instrument. Comprehension impact was assessed and compared with a standard multiple-choice evaluation. The statistical package JMP (version 5, SAS Institute, 2004) was used. We noted that when we used conceptual schemes and mind mapping, learning improvement was noticeable among the three groups of residents (P < 0.001), and that they constitute a better evaluation tool than multiple-choice exams (P < 0.0005). Based on our experience, we recommend the use of this educational technique for medical residents in training.
Multiple traumatic brain injury and concussive symptoms among deployed military personnel.
Bryan, Craig J
2013-01-01
To identify if concussive symptoms occur with greater frequency among military personnel with multiple lifetime TBIs and if a history of TBI increases risk for subsequent TBI. One hundred and sixty-one military personnel referred to a TBI clinic for evaluation and treatment of suspected head injury at a military clinic in Iraq. Military patients completed standardized self-report measures of concussion, depression and post-traumatic stress symptoms; clinical interview; and physical examination. Group comparisons were made according to number of lifetime TBIs and logistic regression was utilized to determine the association of past TBIs on current TBI. Patients with one or more previous TBIs were more likely to report concussion symptoms immediately following a recent injury and during the evaluation. Although differences between single and multiple TBI groups were observed, these did not reach the level of statistical significance. A history of any TBI increased the likelihood of current TBI diagnosis, but this relationship was no longer significant when adjusting for injury mechanism, depression and post-traumatic stress symptoms. Among deployed military personnel, the relationship of previous TBI with recent TBI and concussive symptoms may be largely explained by the presence of psychological symptoms.
AGARWAL, SANDEEP K.; GOURH, PRAVITT; SHETE, SANJAY; PAZ, GENE; DIVECHA, DIPAL; REVEILLE, JOHN D.; ASSASSI, SHERVIN; TAN, FILEMON K.; MAYES, MAUREEN D.; ARNETT, FRANK C.
2010-01-01
Objective IL23R has been identified as a susceptibility gene for the development of multiple autoimmune diseases. We investigated the possible association of IL23R with systemic sclerosis (SSc), an autoimmune disease that leads to the development of cutaneous and visceral fibrosis. Methods We tested 9 single-nucleotide polymorphisms (SNPs) in IL23R for association with SSc in a cohort of 1402 SSc cases and 1038 controls. The IL23R SNPs tested were previously identified as showing associations with inflammatory bowel disease. Results Case-control comparisons revealed no statistically significant differences between patients and healthy controls for any of the IL23R polymorphisms. Analyses of subsets of SSc patients showed that rs11209026 (Arg381Gln variant) was associated with anti-topoisomerase I antibody (ATA)-positive SSc (p = 0.001) and that the rs11465804 SNP was associated with diffuse and ATA-positive SSc (p = 0.0001 and p = 0.0026, respectively). These associations remained significant after accounting for multiple comparisons using the false discovery rate method. The wild-type genotype at both rs11209026 and rs11465804 showed significant protection against the presence of pulmonary hypertension (PHT) (p = 3×10−5 and p = 1×10−5, respectively). Conclusion Polymorphisms in IL23R are associated with susceptibility to ATA-positive SSc and are protective against the development of PHT in patients with SSc. PMID:19918037
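The false-discovery-rate correction referenced above is typically the Benjamini-Hochberg step-up procedure, which can be sketched as follows (a minimal illustration, not the authors' code):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of discoveries controlling the expected
    false discovery rate at level q.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # q*i/m for ranks i=1..m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # largest rank passing
        reject[order[: k + 1]] = True              # reject all smaller p's
    return reject
```

Unlike Bonferroni, which divides the threshold by the full test count, BH adapts the threshold to the rank of each p-value, giving more power when many tests are non-null.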
Northern Hemisphere winter storm track trends since 1959 derived from multiple reanalysis datasets
NASA Astrophysics Data System (ADS)
Chang, Edmund K. M.; Yau, Albert M. W.
2016-09-01
In this study, a comprehensive comparison of Northern Hemisphere winter storm track trend since 1959 derived from multiple reanalysis datasets and rawinsonde observations has been conducted. In addition, trends in terms of variance and cyclone track statistics have been compared. Previous studies, based largely on the National Center for Environmental Prediction-National Center for Atmospheric Research Reanalysis (NNR), have suggested that both the Pacific and Atlantic storm tracks have significantly intensified between the 1950s and 1990s. Comparison with trends derived from rawinsonde observations suggest that the trends derived from NNR are significantly biased high, while those from the European Center for Medium Range Weather Forecasts 40-year Reanalysis and the Japanese 55-year Reanalysis are much less biased but still too high. Those from the two twentieth century reanalysis datasets are most consistent with observations but may exhibit slight biases of opposite signs. Between 1959 and 2010, Pacific storm track activity has likely increased by 10 % or more, while Atlantic storm track activity has likely increased by <10 %. Our analysis suggests that trends in Pacific and Atlantic basin wide storm track activity prior to the 1950s derived from the two twentieth century reanalysis datasets are unlikely to be reliable due to changes in density of surface observations. Nevertheless, these datasets may provide useful information on interannual variability, especially over the Atlantic.
Antanasijević, Davor; Pocajt, Viktor; Povrenović, Dragan; Perić-Grujić, Aleksandra; Ristić, Mirjana
2013-12-01
The aims of this study are to create an artificial neural network (ANN) model using non-specific water quality parameters and to examine the accuracy of three different ANN architectures, a General Regression Neural Network (GRNN), a Backpropagation Neural Network (BPNN) and a Recurrent Neural Network (RNN), for the prediction of dissolved oxygen (DO) concentration in the Danube River. The neural network model was developed using measured data collected from the Bezdan monitoring station on the Danube River. The input variables used for the ANN model are water flow, temperature, pH and electrical conductivity. The model was trained and validated using available data from 2004 to 2008 and tested using data from 2009. The order of performance of the created architectures, based on their comparison with the test data, is RNN > GRNN > BPNN. The ANN results are compared with a multiple linear regression (MLR) model using multiple statistical indicators. The comparison of the RNN model with the MLR model indicates that the RNN model performs much better, since all predictions of the RNN model for the test data were within an error of less than ±10%. In the case of the MLR, only 55% of predictions were within an error of less than ±10%. The developed RNN model can be used as a tool for the prediction of DO in river waters.
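The ±10% criterion used to compare the RNN and MLR models reduces to a one-line relative-error check (an illustrative sketch; the function name and sample values are hypothetical, not the study's data):

```python
import numpy as np

def fraction_within(observed, predicted, tol=0.10):
    """Share of predictions whose relative error is within +/- tol
    of the observed value."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rel_err = np.abs(predicted - observed) / np.abs(observed)
    return np.mean(rel_err <= tol)
```

A model for which this fraction equals 1.0 on held-out data meets the "all predictions within ±10%" standard reported for the RNN.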
Bristow, P; Tivers, M; Packer, R; Brockman, D; Ortiz, V; Newson, K; Lipscomb, V
2017-08-01
To report the long-term bile acid stimulation test results for dogs that have undergone complete suture ligation of a single congenital extrahepatic portosystemic shunt. Data were collected from the hospital records of all dogs that had undergone complete suture ligation of a single congenital extrahepatic portosystemic shunt. Owners were invited to return to the referral centre or their local veterinarian for repeat serum bile acid measurement. Dogs diagnosed with idiopathic epilepsy and undergoing bile acid stimulation tests were used as a comparison population. Fifty-one study dogs were included, with a mean follow-up time of 62 months. Forty-eight dogs had no evidence of multiple acquired shunts and showed a significant reduction in pre- and post-prandial serum bile acid concentrations at long-term follow-up compared with pre-operative measurements. Pre- and post-prandial serum bile acids were statistically significantly greater for dogs that had undergone full ligation (with no evidence of multiple acquired shunts) at all time points compared with the control dogs (P<0·001 for all comparisons). The results suggest that, in dogs treated with complete suture ligation, mild increases in serum bile acids are not clinically relevant if there are no physical examination abnormalities, a normal body condition score and no relapse of clinical signs. © 2017 British Small Animal Veterinary Association.
Knopik, Valerie S.; Marceau, Kristine; Palmer, Rohan H. C.; Smith, Taylor F.; Heath, Andrew C.
2016-01-01
Maternal smoking during pregnancy (SDP) is a significant public health concern with adverse consequences to the health and well-being of the fetus. There is considerable debate about the best method of assessing SDP, including birth/medical records, timeline follow-back approaches, multiple reporters, and biological verification (e.g., cotinine). This is particularly salient for genetically-informed approaches where it is not always possible or practical to do a prospective study starting during the prenatal period when concurrent biological specimen samples can be collected with ease. In a sample of families (N = 173) specifically selected for sibling pairs discordant for prenatal smoking exposure, we: (1) compare rates of agreement across different types of report—maternal report of SDP, paternal report of maternal SDP, and SDP contained on birth records from the Department of Vital Statistics; (2) examine whether SDP is predictive of birth weight outcomes using our best SDP report as identified via step (1); and (3) use a sibling-comparison approach that controls for genetic and familial influences that siblings share in order to assess the effects of SDP on birth weight. Results show high agreement between reporters and support the utility of retrospective report of SDP. Further, we replicate a causal association between SDP and birth weight, wherein SDP results in reduced birth weight even when accounting for genetic and familial confounding factors via a sibling comparison approach. PMID:26494459
Brain responses to verbal stimuli among multiple sclerosis patients with pseudobulbar affect.
Haiman, Guy; Pratt, Hillel; Miller, Ariel
2008-08-15
To characterize the brain activity and associated cortical structures involved in pseudobulbar affect (PBA), a condition characterized by uncontrollable episodes of emotional lability in patients with multiple sclerosis (MS). Behavioral responses and event related potentials (ERP) in response to subjectively significant and neutral verbal stimuli were recorded from 33 subjects in 3 groups: 1) MS patients with PBA (MS+PBA); 2) MS patients without PBA (MS); 3) Healthy control subjects (HC). Statistical non-parametric mapping comparisons of ERP source current density distributions between groups were conducted separately for subjectively significant and for neutral stimuli. Behavioral responses showed more impulsive performance in patients with PBA. As expected, almost all ERP waveform comparisons between the MS groups and controls were significant. Source analysis indicated significantly distinct activation in MS+PBA in the vicinity of the somatosensory and motor areas in response to neutral stimuli, and at pre-motor and supplementary motor areas in response to subjectively significant stimuli. Both subjectively significant and neutral stimuli evoked higher current density in MS+PBA compared to both other groups. PBA of MS patients involves cortical structures related to sensory-motor and emotional processing, in addition to overactive involvement of motor cortical areas in response to neutral stimuli. These results may suggest that a 'disinhibition' of a "gate control"-type mechanism for emotional expression may lead to the lower emotional expression threshold of pseudobulbar affect.
Goudriaan, Marije; Van den Hauwe, Marleen; Simon-Martinez, Cristina; Huenaerts, Catherine; Molenaers, Guy; Goemans, Nathalie; Desloovere, Kaat
2018-04-30
Prolonged ambulation is considered important in children with Duchenne muscular dystrophy (DMD). However, previous studies analyzing DMD gait were sensitive to false-positive outcomes, caused by uncorrected multiple comparisons, regional focus bias, and inter-component covariance bias. Also, while muscle weakness is often suggested to be the main cause of the altered gait pattern in DMD, this was never verified. Our research question was twofold: 1) are we able to confirm the sagittal kinematic and kinetic gait alterations described in a previous review with statistical non-parametric mapping (SnPM)? And 2) are these gait deviations related to lower-limb weakness? We compared gait kinematics and kinetics of 15 children with DMD and 15 typically developing (TD) children (5-17 years) with a two-sample Hotelling's T² test and post-hoc two-tailed, two-sample t-tests. We used canonical correlation analyses to study the relationship between weakness and altered gait parameters. For all analyses, the α-level was corrected for multiple comparisons, resulting in α = 0.005. We found only one of the previously reported kinematic deviations: the children with DMD had an increased knee flexion angle during swing (p = 0.0006). Observed gait deviations that were not reported in the review were an increased hip flexion angle during stance (p = 0.0009) and swing (p = 0.0001), altered combined knee and ankle torques (p = 0.0002), and decreased power absorption during stance (p = 0.0001). No relationships between weakness and these gait deviations were found. We were not able to replicate the gait deviations in DMD previously reported in the literature; thus, DMD gait remains undefined. Further, weakness does not seem to be linearly related to altered gait features. The progressive nature of the disease requires larger study populations and longitudinal analyses to gain more insight into DMD gait and its underlying causes. Copyright © 2018 Elsevier B.V. All rights reserved.
Assessing the significance of pedobarographic signals using random field theory.
Pataky, Todd C
2008-08-07
Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
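The "overly conservative (albeit valid)" behaviour of the pixel-wise Bonferroni correction described above can be illustrated numerically: treating every pixel as an independent test drives the critical z-value up with image size. This sketch shows only the Bonferroni side; the RFT relaxation depends on field smoothness and search-region geometry and is not reproduced here.

```python
# Sketch: Bonferroni pixel-wise critical z-values grow with the number of
# pixels because spatial correlation between neighbours is ignored.
from scipy.stats import norm

def bonferroni_z(alpha, n_pixels):
    """One-tailed critical z for a family-wise error of alpha over n pixels."""
    return norm.ppf(1.0 - alpha / n_pixels)

for n in (1, 100, 10_000):
    print(n, round(bonferroni_z(0.05, n), 2))
```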
Early postnatal hyperglycaemia is a risk factor for treatment-demanding retinopathy of prematurity.
Slidsborg, Carina; Jensen, Louise Bering; Rasmussen, Steen Christian; Fledelius, Hans Callø; Greisen, Gorm; Cour, Morten de la
2018-01-01
To investigate whether neonatal hyperglycaemia in the first postnatal week is associated with treatment-demanding retinopathy of prematurity (ROP). This is a Danish national, retrospective, case-control study of premature infants (birth period 2003-2006). Three national registers were searched, and data were linked through a unique civil registration number. The study sample consisted of 106 cases each matched with two comparison infants. Matching criteria were gestational age (GA) at birth, ROP not registered and born at the same neonatal intensive care unit. Potential 'new' risk factors were analysed in a multivariate logistic regression model, while adjusted for previously recognised risk factors (ie, GA at birth, small for gestational age, multiple birth and male sex). Hospital records of 310 preterm infants (106 treated; 204 comparison infants) were available. Nutrition in terms of energy (kcal/kg/week) and protein (g/kg/week) given to the preterm infants during the first postnatal week did not differ significantly between the study groups (Mann-Whitney U test; p=0.165/p=0.163). Early postnatal weight gain between the two study groups was borderline significant (t-test; p=0.047). Hyperglycaemic events (indexed value) were statistically significantly different between the two study groups (Mann-Whitney U test; p<0.001). Hyperglycaemia was a statistically independent risk factor (OR: 1.022; 95% CI 1.002 to 1.042; p=0.031). An independent association was found between the occurrence of hyperglycaemic events during the first postnatal week and later development of treatment-demanding ROP, when adjusted for known risk factors. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Two-year drinking water carcinogenicity study of methyl tertiary-butyl ether (MTBE) in Wistar rats.
Dodd, Darol; Willson, Gabrielle; Parkinson, Horace; Bermudez, Edilberto
2013-07-01
Methyl tertiary-butyl ether (MTBE) was used as a gasoline additive to reduce tailpipe emissions, but its use has since been discontinued. There remains a concern that drinking water sources have been contaminated with MTBE. A two-year drinking water carcinogenicity study of MTBE was conducted in Wistar rats (males, 0, 0.5, 3, 7.5 mg ml(-1); and females, 0, 0.5, 3, and 15 mg ml(-1)). Body weights were unaffected and water consumption was reduced in MTBE-exposed males and females. Wet weights of male kidneys were increased at the end of two years of exposure to 7.5 mg ml(-1) MTBE. Chronic progressive nephropathy was observed in males and females, was more severe in males, and was exacerbated in the high MTBE exposure groups. Brain was the only tissue with a statistically significant finding of neoplasms. One astrocytoma (1/50) was found in a female rat (15 mg ml(-1)). The incidence of brain astrocytomas in male rats was 1/50, 1/50, 1/50 and 4/50 for the 0, 0.5, 3 and 7.5 mg ml(-1) exposure groups, respectively. This was a marginally significant statistical trend, but not statistically significant when pairwise comparisons were made or when multiple comparisons were taken into account. The incidence of astrocytoma fell within historical control ranges for Wistar rats, and the brain has not been identified as a target organ following chronic administration of MTBE, ethyl tert-butyl ether, or tertiary butyl alcohol (in drinking water) to mice and rats. We conclude that the astrocytomas observed in this study are not associated with exposure to MTBE. Copyright © 2011 John Wiley & Sons, Ltd.
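The abstract does not name the pairwise test used on the astrocytoma incidences. Fisher's exact test is a conventional choice for sparse 2x2 tumour-incidence tables, and as a hypothetical re-check it likewise finds no significance for the high-dose group (4/50) against concurrent controls (1/50).

```python
# Sketch: pairwise comparison of high-dose astrocytoma incidence against
# controls with Fisher's exact test (an assumed choice; the abstract does
# not specify the test actually used).
from scipy.stats import fisher_exact

table = [[4, 46],   # high dose: tumours, no tumours
         [1, 49]]   # control:   tumours, no tumours
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))  # well above 0.05
```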
ERIC Educational Resources Information Center
Delaval, Marine; Michinov, Nicolas; Le Bohec, Olivier; Le Hénaff, Benjamin
2017-01-01
The aim of this study was to examine how social or temporal-self comparison feedback, delivered in real-time in a web-based training environment, could influence the academic performance of students in a statistics examination. First-year psychology students were given the opportunity to train for a statistics examination during a semester by…
Mittal, Rakesh; Singla, Meenu G; Garg, Ashima; Dhawan, Anu
2015-12-01
Apical extrusion of irrigants and debris is an inherent limitation associated with cleaning and shaping of root canals and has been studied extensively because of its clinical relevance as a cause of flare-ups. Many factors affect the amount of extruded intracanal materials. The purpose of this study was to assess the bacterial extrusion by using manual instrumentation, a multiple-file continuous rotary system (ProTaper), and a single-file continuous rotary system (One Shape). Forty-two human mandibular premolars were inoculated with Enterococcus faecalis by using a bacterial extrusion model. The teeth were divided into 3 experimental groups (n = 12) and 1 control group (n = 6). The root canals of experimental groups were instrumented according to the manufacturers' instructions by using manual technique, ProTaper rotary system, or One Shape rotary system. Sterilized saline was used as an irrigant, and bacterial extrusion was quantified as colony-forming units/milliliter. The results obtained were statistically analyzed by using one-way analysis of variance for intergroup comparison and post hoc Tukey test for pair-wise comparison. The level for accepting statistical significance was set at P < .05. All the instrumentation techniques resulted in bacterial extrusion, with the manual step-back technique exhibiting significantly more bacterial extrusion than the engine-driven systems. Of the 2 engine-driven systems, ProTaper rotary extruded significantly more bacteria than One Shape rotary system (P < .05). The engine-driven nickel-titanium systems were associated with less apical extrusion. The instrument design may play a role in the amount of extrusion. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
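The analysis pipeline described above (one-way ANOVA for the intergroup comparison, then post-hoc Tukey) can be sketched as follows. The log-CFU/mL values are invented for illustration; the post-hoc step (`scipy.stats.tukey_hsd`, SciPy >= 1.8) would follow the same pattern.

```python
# Sketch: one-way ANOVA across three instrumentation groups on invented
# log-CFU/mL extrusion values (illustrative data only).
from scipy.stats import f_oneway

manual   = [5.2, 5.8, 6.1, 5.5, 6.0, 5.7]   # hand instrumentation
protaper = [4.1, 4.5, 4.3, 4.8, 4.2, 4.6]   # multiple-file rotary
oneshape = [3.2, 3.6, 3.1, 3.5, 3.4, 3.3]   # single-file rotary

f_stat, p_anova = f_oneway(manual, protaper, oneshape)
print(p_anova < 0.05)  # group means clearly differ on these toy data
```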
Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E
2003-01-01
Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogenous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPTs and 46,322 surgical cases with only one CPT from a large teaching hospital to determine whether the distribution of dual-procedure surgery times more closely fits a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
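The lognormal-versus-normal comparison above rests on Shapiro-Wilk tests before and after a log transform. A minimal sketch, on simulated durations rather than the authors' data: right-skewed lognormal times fail the normality test on the raw scale but typically pass it after taking logs.

```python
# Sketch: Shapiro-Wilk on raw vs log-transformed surgical durations.
# Durations are simulated lognormal values (illustrative only).
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
durations = rng.lognormal(mean=np.log(120), sigma=0.5, size=300)  # minutes

w_raw, p_raw = shapiro(durations)
w_log, p_log = shapiro(np.log(durations))
print(p_raw < 0.05, p_log > p_raw)
```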
NASA Astrophysics Data System (ADS)
Greenberg, Ariela Caren
Differential item functioning (DIF) and differential distractor functioning (DDF) are methods used to screen for item bias (Camilli & Shepard, 1994; Penfield, 2008). Using an applied empirical example, this mixed-methods study examined the congruency and relationship of DIF and DDF methods in screening multiple-choice items. Data for Study I were drawn from item responses of 271 female and 236 male low-income children on a preschool science assessment. Item analyses employed a common statistical approach of the Mantel-Haenszel log-odds ratio (MH-LOR) to detect DIF in dichotomously scored items (Holland & Thayer, 1988), and extended the approach to identify DDF (Penfield, 2008). Findings demonstrated that using MH-LOR to detect DIF and DDF supported the theoretical relationship that the magnitude and form of DIF are dependent on the DDF effects, and demonstrated the advantages of studying DIF and DDF in multiple-choice items. A total of 4 items with DIF and DDF and 5 items with only DDF were detected. Study II incorporated an item content review, an important but often overlooked and under-published step of DIF and DDF studies (Camilli & Shepard). Interviews with 25 female and 22 male low-income preschool children and an expert review helped to interpret the DIF and DDF results and their comparison, and determined that a content review process of studied items can reveal reasons for potential item bias that are often congruent with the statistical results. Patterns emerged and are discussed in detail. The quantitative and qualitative analyses were conducted in an applied framework of examining the validity of the preschool science assessment scores for evaluating science programs serving low-income children; however, the techniques can be generalized for use with measures across various disciplines of research.
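The Mantel-Haenszel log-odds ratio (MH-LOR) used above pools 2x2 tables across ability strata into one common odds ratio. A minimal sketch with invented counts (the strata and numbers are illustrative, not the study's data):

```python
# Sketch: Mantel-Haenszel common odds ratio and its log (MH-LOR).
# Each stratum: (a, b, c, d) = (focal correct, focal incorrect,
#                               reference correct, reference incorrect)
import math

strata = [
    (30, 20, 40, 10),
    (25, 25, 35, 15),
    (15, 35, 25, 25),
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den
mh_lor = math.log(or_mh)
print(round(mh_lor, 3))  # -0.887: item favours the reference group here
```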
Kwakkenbos, Linda; Willems, Linda M; Baron, Murray; Hudson, Marie; Cella, David; van den Ende, Cornelia H M; Thombs, Brett D
2014-01-01
The Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) is commonly used to assess fatigue in rheumatic diseases, and has shown to discriminate better across levels of the fatigue spectrum than other commonly used measures. The aim of this study was to assess the cross-language measurement equivalence of the English, French, and Dutch versions of the FACIT-F in systemic sclerosis (SSc) patients. The FACIT-F was completed by 871 English-speaking Canadian, 238 French-speaking Canadian and 230 Dutch SSc patients. Confirmatory factor analysis was used to assess the factor structure in the three samples. The Multiple-Indicator Multiple-Cause (MIMIC) model was utilized to assess differential item functioning (DIF), comparing English versus French and versus Dutch patient responses separately. A unidimensional factor model showed good fit in all samples. Comparing French versus English patients, statistically significant, but small-magnitude DIF was found for 3 of 13 items. French patients had 0.04 of a standard deviation (SD) lower latent fatigue scores than English patients and there was an increase of only 0.03 SD after accounting for DIF. For the Dutch versus English comparison, 4 items showed small, but statistically significant, DIF. Dutch patients had 0.20 SD lower latent fatigue scores than English patients. After correcting for DIF, there was a reduction of 0.16 SD in this difference. There was statistically significant DIF in several items, but the overall effect on fatigue scores was minimal. English, French and Dutch versions of the FACIT-F can be reasonably treated as having equivalent scoring metrics.
Ida, Satoshi; Murata, Kazuya; Ishihara, Yuki; Imataka, Kanako; Kaneko, Ryutaro; Fujiwara, Ryoko; Takahashi, Hiroka
2017-01-01
To comparatively investigate whether dynapenia and sarcopenia, as defined by the Asian Working Group for Sarcopenia (AWGS), are associated with fear of falling in elderly patients with diabetes. The subjects were outpatients with diabetes who were at least 65 years of age when they visited our hospital. Sarcopenia was evaluated based on the AWGS definition. The cutoff values for the appendicular skeletal mass index (multi-frequency bioelectrical impedance method), grip strength, and walking speed were, respectively, 7.0 kg/m² for men and 5.7 kg/m² for women, 26 kg for men and 18 kg for women, and ≤0.8 m/s for both men and women. Those with grip strength of less than or equal to the cutoff value were considered to have dynapenia. Fear of falling was assessed by a self-administered questionnaire survey with the Fall Efficacy Scale (FES) Japanese version. A multiple regression analysis was conducted using the FES score as a dependent variable and dynapenia or sarcopenia and moderators as explanatory variables. A total of 202 patients (male, n=127; female, n=75) were analyzed in this study. The FES scores of the patients with and without sarcopenia did not differ to a statistically significant extent in either male or female patients. The multiple regression analysis revealed a statistically significant association between dynapenia and the FES score in men (P=0.028). In elderly outpatients with diabetes, no association was found between sarcopenia and the fear of falling in either men or women. In contrast, a statistically significant association was found between dynapenia and fear of falling in men. This suggests the importance of paying attention to the fear of falling when examining elderly male diabetes patients with dynapenia.
El Batawi, H Y
2015-02-01
To investigate the possible effect of intraoperative analgesia, namely diclofenac sodium compared to acetaminophen, on post-recovery pain perception in children undergoing painful dental procedures under general anaesthesia. A double-blind randomised clinical trial. A sample of 180 consecutive cases of children undergoing full dental rehabilitation under general anaesthesia in a private hospital in Saudi Arabia during 2013 was divided into three groups (60 children each) according to the analgesic used prior to extubation. Group A children had a diclofenac sodium suppository, Group B children received an acetaminophen suppository, and Group C served as the control group. Using a validated Arabic version of the Wong-Baker FACES pain assessment scale, patients were asked to choose the face that best suited the pain they were suffering. Data were collected and recorded for statistical analysis. Student's t test was used for comparison of sample means. A preliminary F test to compare sample variances was carried out to determine the appropriate t test variant to be used. A "p" value less than 0.05 was considered significant. More than 93% of children had post-operative pain in varying degrees. High statistical significance was observed between children in groups A and B compared to the control group C, with the latter scoring higher pain perception. Diclofenac showed higher potency in multiple painful procedures, while the statistical difference was not significant in children with three or fewer painful dental procedures. Diclofenac sodium is more potent than acetaminophen, especially for multiple pain-provoking or traumatic procedures. A timely use of NSAID analgesia just before extubation helps provide adequate coverage during recovery. Peri-operative analgesia is to be recommended as an essential treatment adjunct for child dental rehabilitation under general anaesthesia.
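The "preliminary F test, then choose the t-test variant" workflow described above can be sketched directly. The pain scores below are invented for illustration; the F test compares the variance ratio against an F distribution, and its outcome selects Student's (equal-variance) or Welch's t-test.

```python
# Sketch: preliminary F test on sample variances, then the matching t-test.
# Scores are invented illustrative values on a 0-10 faces scale.
import numpy as np
from scipy import stats

group_a = np.array([2, 2, 4, 2, 0, 2, 4, 2, 2, 0], dtype=float)  # analgesic
group_c = np.array([6, 8, 6, 10, 8, 6, 4, 8, 6, 8], dtype=float) # control

# Preliminary F test: two-sided p for the ratio of sample variances.
f_stat = group_a.var(ddof=1) / group_c.var(ddof=1)
df = len(group_a) - 1, len(group_c) - 1
p_var = 2 * min(stats.f.cdf(f_stat, *df), stats.f.sf(f_stat, *df))

# Equal variances not rejected -> Student's t; otherwise -> Welch's t.
equal_var = p_var >= 0.05
t_stat, p_val = stats.ttest_ind(group_a, group_c, equal_var=equal_var)
print(p_val < 0.05)
```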
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
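Of the rational division methods named above, Kennard-Stone is the most commonly sketched: it seeds the selection with the two most distant points, then repeatedly adds the candidate whose nearest already-selected neighbour is farthest (a max-min criterion). A minimal implementation on toy descriptor vectors:

```python
# Sketch: Kennard-Stone rational selection of a training subset.
import numpy as np

def kennard_stone(X, n_train):
    """Return indices of n_train rows of X chosen by Kennard-Stone."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]              # seed: the two farthest points
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        # For each remaining point, distance to its closest selected point.
        min_d = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [2.5, 2.5]])
print(kennard_stone(X, 3))  # [0, 3, 4]: two extremes, then the midpoint
```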
Kwakkenbos, Linda; Willems, Linda M.; Baron, Murray; Hudson, Marie; Cella, David; van den Ende, Cornelia H. M.; Thombs, Brett D.
2014-01-01
Objective The Functional Assessment of Chronic Illness Therapy- Fatigue (FACIT-F) is commonly used to assess fatigue in rheumatic diseases, and has shown to discriminate better across levels of the fatigue spectrum than other commonly used measures. The aim of this study was to assess the cross-language measurement equivalence of the English, French, and Dutch versions of the FACIT-F in systemic sclerosis (SSc) patients. Methods The FACIT-F was completed by 871 English-speaking Canadian, 238 French-speaking Canadian and 230 Dutch SSc patients. Confirmatory factor analysis was used to assess the factor structure in the three samples. The Multiple-Indicator Multiple-Cause (MIMIC) model was utilized to assess differential item functioning (DIF), comparing English versus French and versus Dutch patient responses separately. Results A unidimensional factor model showed good fit in all samples. Comparing French versus English patients, statistically significant, but small-magnitude DIF was found for 3 of 13 items. French patients had 0.04 of a standard deviation (SD) lower latent fatigue scores than English patients and there was an increase of only 0.03 SD after accounting for DIF. For the Dutch versus English comparison, 4 items showed small, but statistically significant, DIF. Dutch patients had 0.20 SD lower latent fatigue scores than English patients. After correcting for DIF, there was a reduction of 0.16 SD in this difference. Conclusions There was statistically significant DIF in several items, but the overall effect on fatigue scores was minimal. English, French and Dutch versions of the FACIT-F can be reasonably treated as having equivalent scoring metrics. PMID:24638101
Multiple commodities in statistical microeconomics: Model and market
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provides strong support for the statistical microeconomic description of commodity prices. The case of multiple commodities is studied here, and a parsimonious generalization of the single-commodity model is made. Market data shows that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.
Estimating and comparing microbial diversity in the presence of sequencing errors
Chiu, Chun-Huo
2016-01-01
Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
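The Hill numbers adopted above follow a standard definition that is easy to compute directly. This generic sketch implements that definition only, not the authors' estimator (which additionally replaces the spurious singleton count): q = 0 gives richness, q = 1 the exponential of Shannon entropy, and q = 2 the inverse Simpson index.

```python
# Sketch: Hill number of order q for a taxa abundance vector.
import numpy as np

def hill_number(abundances, q):
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()                    # relative abundances
    if q == 1:                                # limit case: exp(Shannon)
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

counts = [50, 25, 15, 10]
for q in (0, 1, 2):
    print(q, round(hill_number(counts, q), 3))
```

A diversity profile is just this function evaluated over a grid of q values; for a perfectly even community, every Hill number equals the species richness.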
Is My Network Module Preserved and Reproducible?
Langfelder, Peter; Luo, Rui; Oldham, Michael C.; Horvath, Steve
2011-01-01
In many applications, one is interested in determining which of the properties of a network module change across conditions. For example, to validate the existence of a module, it is desirable to show that it is reproducible (or preserved) in an independent test network. Here we study several types of network preservation statistics that do not require a module assignment in the test network. We distinguish network preservation statistics by the type of the underlying network. Some preservation statistics are defined for a general network (defined by an adjacency matrix) while others are only defined for a correlation network (constructed on the basis of pairwise correlations between numeric variables). Our applications show that the correlation structure facilitates the definition of particularly powerful module preservation statistics. We illustrate that evaluating module preservation is in general different from evaluating cluster preservation. We find that it is advantageous to aggregate multiple preservation statistics into summary preservation statistics. We illustrate the use of these methods in six gene co-expression network applications including 1) preservation of cholesterol biosynthesis pathway in mouse tissues, 2) comparison of human and chimpanzee brain networks, 3) preservation of selected KEGG pathways between human and chimpanzee brain networks, 4) sex differences in human cortical networks, 5) sex differences in mouse liver networks. While we find no evidence for sex specific modules in human cortical networks, we find that several human cortical modules are less preserved in chimpanzees. In particular, apoptosis genes are differentially co-expressed between humans and chimpanzees. Our simulation studies and applications show that module preservation statistics are useful for studying differences between the modular structure of networks. 
Data, R software and accompanying tutorials can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/ModulePreservation. PMID:21283776
Please don't misuse the museum: 'declines' may be statistical.
Campbell Grant, Evan H
2015-03-01
Detecting declines in populations at broad spatial scales takes enormous effort, and long-term data are often more sparse than is desired for estimating trends, identifying drivers for population changes, framing conservation decisions, or taking management actions. Museum records and historic data can be available at large scales across multiple decades, and are therefore an attractive source of information on the comparative status of populations. However, changes in populations may be real (e.g. in response to environmental covariates) or may result from variation in our ability to observe the true population response (also possibly related to environmental covariates). This is a (statistical) nuisance in understanding the true status of a population. Evaluating statistical hypotheses alongside more interesting ecological ones is important in the appropriate use of museum data. Two statistical considerations are generally applicable to the use of museum records: first, without initial random sampling, comparison with contemporary results cannot provide inference to the entire range of a species; and second, only some individuals in a population may be available for detection, and this availability may respond to environmental changes. Changes in the availability of individuals may reduce the proportion of the population that is present and able to be counted on a given survey event, resulting in an apparent decline even when population size is stable. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics.
Lyon, Yana A; Riggs, Dylan; Fornelli, Luca; Compton, Philip D; Julian, Ryan R
2018-01-01
Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view.
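The "three backbone dissociations or fewer" threshold above has a simple counting core: cutting a linear chain at c sites yields c + 1 fragments, of which exactly 2 retain a terminus, so the terminal fraction is 2/(c + 1), which stays at or above 50% while c <= 3. This is a fragment-count simplification of the paper's ion-current statistics, sketched by brute-force enumeration:

```python
# Sketch: fraction of terminal fragments after c backbone cleavages,
# averaged over all choices of cleavage sites (a fragment-count
# simplification of the paper's ion-current argument).
from itertools import combinations

def terminal_fraction(n_sites, n_cuts):
    total = terminal = 0
    for cuts in combinations(range(n_sites), n_cuts):
        total += n_cuts + 1        # fragments produced by these cuts
        terminal += 2              # only the two end pieces keep a terminus
    return terminal / total

for c in (1, 2, 3, 4):
    print(c, round(terminal_fraction(20, c), 3))  # 1.0, 0.667, 0.5, 0.4
```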
The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics
NASA Astrophysics Data System (ADS)
Lyon, Yana A.; Riggs, Dylan; Fornelli, Luca; Compton, Philip D.; Julian, Ryan R.
2018-01-01
Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view.
Network Meta-Analysis Using R: A Review of Currently Available Automated Packages
Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph
2014-01-01
Network meta-analysis (NMA) – a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined provide users with nearly all functionality that might be desired when conducting a NMA. PMID:25541687
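The R packages above (gemtc, pcnetmeta, netmeta) automate far more than this, but the core computation behind any fixed-effect pooling step can be sketched in a few lines. A minimal, hypothetical Python illustration of inverse-variance weighting (the effect sizes and variances below are invented, not from any cited trial):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's
    effect by the reciprocal of its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Hypothetical log odds ratio estimates from three trials of A vs. B
effects = [-0.30, -0.10, -0.25]
variances = [0.04, 0.09, 0.06]
est, var = pooled_effect(effects, variances)
ci = (est - 1.96 * math.sqrt(var), est + 1.96 * math.sqrt(var))
```

By construction, the pooled variance is smaller than any single study's variance, which is why pooling, and the network extension of it, increases precision.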
Kubsik, Anna; Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Katarzyna; Jankowska, Agnieszka; Woldańska-Okońska, Marta
2014-07-01
Multiple sclerosis is a chronic, inflammatory, demyelinating disease of the central nervous system characterized by diverse symptomatology. It most often affects young people and gradually leads to disability, so new therapies are sought to alleviate the neurological deficits caused by the disease. One alternative method of therapy is high-tone power therapy. This article compares high-tone power therapy with kinesitherapy in the rehabilitation of patients with multiple sclerosis. The aim of this study was to evaluate the effectiveness of high-tone power therapy and kinesitherapy exercises on the functional status of patients with multiple sclerosis. The study involved 20 patients with multiple sclerosis, of both sexes, treated at the Department of Rehabilitation and Physical Medicine in Lodz. Patients were randomly divided into two groups. In group I, high-tone power therapy was applied for 60 minutes, while in group II kinesitherapy exercises were used. Treatment time for both groups was 15 days. Functional status was assessed with the Expanded Disability Status Scale of Kurtzke (EDSS) and the Barthel ADL Index. Quality of life was assessed with the MSQOL-54 questionnaire, gait and balance with the Tinetti scale, pain with the VAS and Laitinen scales, and changes in muscle tone with the Ashworth scale. Both group I and group II improved on the scales administered before and after therapy. Group I, which received high-tone power therapy, showed statistically significant improvement in 9 of 10 tested parameters, while group II, which performed kinesitherapy exercises, improved in 6 of 10 tested parameters. Comparison of the two groups against each other showed no statistically significant differences. High-tone power therapy has a beneficial effect on the functional status of patients with multiple sclerosis.
The number of parameters that improved supports the use of this therapy in the comprehensive rehabilitation of patients with multiple sclerosis. Kinesitherapy exercises also have a favorable impact on the functional status of patients with MS and remain essential in their rehabilitation. No adverse effects were observed in either group.
[Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].
Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta
2014-01-01
Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons combine estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons, and these can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a user-friendly spreadsheet for indirect and mixed comparisons aimed at clinical researchers who are interested in systematic reviews but unfamiliar with more advanced statistical packages. The proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology, extending the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
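Bucher's method itself is simple enough to sketch: an adjusted indirect estimate of A versus B through a common comparator C subtracts the two direct effects, and their variances add. A hedged Python illustration with hypothetical numbers (the spreadsheet described above performs this same arithmetic):

```python
import math

def bucher_indirect(d_ac, var_ac, d_bc, var_bc):
    """Bucher adjusted indirect comparison of A vs. B through a
    common comparator C: effects subtract, variances add."""
    d_ab = d_ac - d_bc
    var_ab = var_ac + var_bc
    se = math.sqrt(var_ab)
    return d_ab, (d_ab - 1.96 * se, d_ab + 1.96 * se)

# Hypothetical log hazard ratios versus a shared placebo arm C
d_ab, ci = bucher_indirect(d_ac=-0.50, var_ac=0.02, d_bc=-0.20, var_bc=0.03)
```

Because the variances add rather than pool, indirect estimates are always less precise than direct ones of the same size, which is why mixed comparisons that also use direct evidence gain power.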
Statistical detection of EEG synchrony using empirical bayesian inference.
Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven
2015-01-01
There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, the high dimensionality of PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer a severe loss of power because they fail to exploit the complex dependence structure between hypotheses that vary in the spectral, temporal, and spatial dimensions. Previously, we showed that hierarchical FDR and optimal discovery procedures could be effectively applied to PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) to PLV synchrony analysis, computing FDR as the posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's empirical Bayes approach to PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR, and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both the simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
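For contrast with the locFDR approach, the standard FDR baseline the authors compare against, the Benjamini-Hochberg step-up procedure, can be sketched in a few lines. The p-values below are hypothetical, chosen only to show the step-up rule in action:

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up FDR procedure: reject every hypothesis whose
    p-value is at or below the largest sorted p_(k) that satisfies
    p_(k) <= k * q / m."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    threshold = 0.0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            threshold = pvals[i]  # largest p meeting the criterion so far
    return [p <= threshold for p in pvals]

# Hypothetical p-values from m = 8 synchrony tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
rejected = benjamini_hochberg(pvals, q=0.05)
```

On these numbers only the two smallest p-values survive, illustrating the conservatism (loss of power under dependence) that motivates the hierarchical and empirical Bayes alternatives discussed in the abstract.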
Evaluating Transportation by Comparing Several Uses of Rotary Endodontic Files.
Elemam, Ranya F; Capelas, J A; Vaz, Mário A P; Viriato, Nuno; Pereira, M L; Azevedo, A; West, John
2015-12-01
To evaluate the effect of repeated use of ProTaper Next (PTN; Dentsply Maillefer, Ballaigues, Switzerland) instruments on the shaping ability of root canals, using SolidWorks (2014, Dassault Systèmes) software. Thirty-six root canals in clear resin blocks (Dentsply Maillefer) were allocated into six experimental groups (n = 36). Six new sets of PTN instruments were each used six times to shape the resin blocks. A #15 K-file was inserted to the working length (WL), followed by a ProGlider (PG) to create a glide path. PTN instruments were then used sequentially in a crown-down technique to reach size 30/07 apically. Macroscopic photographs of the blocks were taken before and after instrumentation and layered in Paint Shop Pro 9 (Jasc Software), and canal transportation was measured using SolidWorks 2014. The data were analyzed with SPSS version 22. A multivariate general linear model (GLM) was applied, the Bonferroni correction was used for multiple comparisons, and statistical significance was set at 0.05. There was no difference in canal transportation after six uses of the PTN files; in addition, the PTN files maintained the original canal anatomy, especially at the apical level, where the lowest total mean value of canal center displacement was seen (3 mm level; 0.019 ± 0.017). ProTaper Next files can be used to prepare single and multiple canals in a single furcated tooth. The ProTaper Next nickel-titanium (NiTi) file system is a safe instrument that respects the canal shape, allows practitioners to treat difficult cases with good results, and carries a low risk of separation.
Malignant testicular tumour incidence and mortality trends
Wojtyła-Buciora, Paulina; Więckowska, Barbara; Krzywinska-Wiewiorowska, Małgorzata; Gromadecka-Sutkiewicz, Małgorzata
2016-01-01
Aim of the study In Poland testicular tumours are the most frequent cancer among men aged 20–44 years. Since the 1980s and 1990s, testicular tumour incidence has varied geographically, with an increased risk of mortality in Wielkopolska Province highlighted at the turn of the 1980s and 1990s. The aim of the study was a comparative analysis of the tendencies in incidence and death rates due to malignant testicular tumours observed among men in Poland and in Wielkopolska Province. Material and methods Data from the National Cancer Registry were used for the calculations. The incidence/mortality rates among men due to malignant testicular cancer, as well as the tendencies in the incidence/death ratio observed in Poland and Wielkopolska, were established based on a regression equation. The analysis was extended by adopting a multiple linear regression model. A p-value < 0.05 was arbitrarily adopted as the criterion of statistical significance, and for multiple comparisons it was modified according to the Bonferroni adjustment to a value of p < 0.0028. Calculations were performed with the PQStat v1.4.8 package. Results The incidence of malignant testicular neoplasms observed among men in Poland and in Wielkopolska Province showed a significant rising tendency. The multiple linear regression model confirmed that the year variable is a strong incidence forecast factor only within the territory of Poland. A corresponding analysis of mortality rates among men in Poland and in Wielkopolska Province did not show any statistically significant correlations. Conclusions Late diagnosis of Polish patients calls for appropriate educational activities that would facilitate earlier reporting by patients, thus increasing their chances of recovery. Introducing preventive examinations in regions with an increased risk of testicular tumour may allow earlier diagnosis. PMID:27095941
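The Bonferroni adjustment used in the study above is a single division of the familywise alpha by the number of comparisons. A minimal sketch; the count of 18 comparisons is inferred from the reported 0.0028 threshold, not stated in the abstract:

```python
def bonferroni_threshold(alpha, n_comparisons):
    """Bonferroni adjustment: divide the familywise significance
    level alpha by the number of comparisons made."""
    return alpha / n_comparisons

# 18 comparisons is an assumption; 0.05 / 18 rounds to the 0.0028
# per-comparison threshold reported in the abstract.
adjusted = bonferroni_threshold(0.05, 18)
```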
The Advanced Composition Explorer Shock Database and Application to Particle Acceleration Theory
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2015-01-01
The theory of particle acceleration via diffusive shock acceleration (DSA) has been studied in depth by Gosling et al. (1981), van Nes et al. (1984), Mason (2000), Desai et al. (2003), Zank et al. (2006), among many others. Recently, Parker and Zank (2012, 2014) and Parker et al. (2014) using the Advanced Composition Explorer (ACE) shock database at 1 AU explored two questions: does the upstream distribution alone have enough particles to account for the accelerated downstream distribution and can the slope of the downstream accelerated spectrum be explained using DSA? As was shown in this research, diffusive shock acceleration can account for a large population of the shocks. However, Parker and Zank (2012, 2014) and Parker et al. (2014) used a subset of the larger ACE database. Recently, work has successfully been completed that allows for the entire ACE database to be considered in a larger statistical analysis. We explain DSA as it applies to single and multiple shocks and the shock criteria used in this statistical analysis. We calculate the expected injection energy via diffusive shock acceleration given upstream parameters defined from the ACE Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM) data to construct the theoretical upstream distribution. We show the comparison of shock strength derived from diffusive shock acceleration theory to observations in the 50 keV to 5 MeV range from an instrument on ACE. Parameters such as shock velocity, shock obliquity, particle number, and time between shocks are considered. This study is further divided into single and multiple shock categories, with an additional emphasis on forward-forward multiple shock pairs. Finally with regard to forward-forward shock pairs, results comparing injection energies of the first shock, second shock, and second shock with previous energetic population will be given.
Radiation Risks of Leukemia, Lymphoma and Multiple Myeloma Incidence in the Mayak Cohort: 1948–2004
Kuznetsova, Irina S.; Labutina, Elena V.; Hunter, Nezahat
2016-01-01
The incidence of all types of lymphatic and hematopoietic cancers, including Hodgkin's lymphoma, non-Hodgkin's lymphoma, multiple myeloma, acute and chronic myeloid leukemia (AML and CML, respectively), chronic lymphocytic leukemia (CLL), and other forms of leukemia, has been studied in a cohort of 22,373 workers employed at the Mayak Production Association (PA) main facilities during 536,126 person-years of follow-up, from the start of employment between 1948 and 1982 to the end of 2004. Risk assessment was performed for both external gamma radiation and internal alpha exposure of red bone marrow due to incorporated Pu-239, using the Mayak Workers Dosimetry System 2008 and taking non-radiation factors into account. The incidence of leukemia excluding CLL showed a non-linear dose-response relationship for external gamma exposure, with exponential effect modifiers based on time since exposure and age at exposure. Among the major subtypes of leukemia, the excess risk of AML was highest within the first 2–5 years after external exposure (ERR per Gy: 38.40; 90% CI: 13.92–121.4) and decreased substantially thereafter, but the risks remained statistically significant (ERR per Gy: 2.63; 90% CI: 0.07–12.55). In comparison, excess CML first occurred 5 years after exposure and decreased about 10 years after exposure, although the association was not statistically significant (ERR per Gy: 1.39; 90% CI: -0.22–7.32). The study found no evidence of an association between leukemia and occupational exposure to internal plutonium (ERR per Gy: 2.13; 90% CI: <0–9.45). There was also no indication of any relationship with either external gamma or internal plutonium radiation exposure for the incidence of Hodgkin's lymphoma, non-Hodgkin's lymphoma, or multiple myeloma. PMID:27631102
Predicting Flood Hazards in Systems with Multiple Flooding Mechanisms
NASA Astrophysics Data System (ADS)
Luke, A.; Schubert, J.; Cheng, L.; AghaKouchak, A.; Sanders, B. F.
2014-12-01
Delineating flood zones in systems that are susceptible to flooding from a single mechanism (riverine flooding) is a relatively well-defined procedure with specific guidance from agencies such as FEMA and USACE. However, there is little guidance on delineating flood zones in systems susceptible to flooding from multiple mechanisms, such as storm surge, waves, tidal influence, and riverine flooding. In this study, a new flood mapping method that accounts for multiple extremes occurring simultaneously is developed and exemplified. The study site is the Tijuana River Estuary (TRE), located in Southern California adjacent to the U.S./Mexico border. TRE is an intertidal coastal estuary that receives freshwater flows from the Tijuana River. Extreme discharge from the Tijuana River is the primary driver of flooding within TRE; however, tide level and storm surge also play a significant role in flooding extent and depth. A comparison between measured flows in the Tijuana River and ocean levels revealed a correlation between extreme discharge and ocean height. Using a novel statistical method based upon extreme value theory, ocean heights were predicted conditioned upon extreme discharge occurring within the Tijuana River. This statistical technique could also be applied to other systems in which different factors are identified as the primary drivers of flooding, such as significant wave height conditioned upon tide level. Using the predicted ocean levels conditioned upon varying return levels of discharge as forcing parameters for the 2D hydraulic model BreZo, the 100-, 50-, 20-, and 10-year floodplains were delineated. The results will then be compared to floodplains delineated using the standard methods recommended by FEMA for riverine zones with a downstream ocean boundary.
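The full extreme-value model is beyond the scope of an abstract, but the idea of conditioning one flood driver on another can be illustrated crudely with an empirical conditional quantile: the distribution of ocean level restricted to days of extreme discharge. The data below are invented, and this sketch is a stand-in for, not an implementation of, the extreme-value method described above:

```python
def conditional_quantile(x, y, x_threshold, q):
    """Empirical q-quantile of y restricted to observations where the
    conditioning variable x exceeds a threshold."""
    restricted = sorted(yv for xv, yv in zip(x, y) if xv > x_threshold)
    if not restricted:
        raise ValueError("no observations exceed the threshold")
    idx = min(len(restricted) - 1, int(q * len(restricted)))
    return restricted[idx]

# Hypothetical paired daily discharge (m^3/s) and ocean level (m)
discharge = [5, 8, 120, 300, 15, 450, 40, 260]
ocean = [0.9, 1.0, 1.4, 1.7, 1.1, 1.9, 1.2, 1.5]
level_90 = conditional_quantile(discharge, ocean, x_threshold=100, q=0.9)
```

Even in this toy sample the conditional quantile exceeds the unconditional one, mirroring the correlation between extreme discharge and ocean height reported in the study.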
Gommoll, Carl; Durgam, Suresh; Mathews, Maju; Forero, Giovanna; Nunez, Rene; Tang, Xiongwen; Thase, Michael E
2015-01-01
Background Vilazodone, a selective serotonin reuptake inhibitor and 5-HT1A receptor partial agonist, is approved for treating major depressive disorder in adults. This study (NCT01629966 ClinicalTrials.gov) evaluated the efficacy and safety of vilazodone in adults with generalized anxiety disorder (GAD). Methods A multicenter, double-blind, parallel-group, placebo-controlled, fixed-dose study in patients with GAD randomized (1:1:1) to placebo (n = 223), or vilazodone 20 mg/day (n = 230) or 40 mg/day (n = 227). Primary and secondary efficacy parameters were total score change from baseline to week 8 on the Hamilton Rating Scale for Anxiety (HAMA) and Sheehan Disability Scale (SDS), respectively, analyzed using a predefined mixed-effect model for repeated measures (MMRM). Safety outcomes were presented by descriptive statistics. Results The least squares mean difference (95% confidence interval) in HAMA total score change from baseline (MMRM) was statistically significant for vilazodone 40 mg/day versus placebo (–1.80 [–3.26, –0.34]; P = .0312 [adjusted for multiple comparisons]), but not for vilazodone 20 mg/day versus placebo. Mean change from baseline in SDS total score was not significantly different for either dose of vilazodone versus placebo when adjusted for multiplicity; significant improvement versus placebo was noted for vilazodone 40 mg/day without adjustment for multiplicity (P = .0349). The incidence of adverse events was similar for vilazodone 20 and 40 mg/day (∼71%) and slightly lower for placebo (62%). Nausea, diarrhea, dizziness, vomiting, and fatigue were reported in ≥5% of patients in either vilazodone group and at least twice the rate of placebo. Conclusions Vilazodone was effective in treating anxiety symptoms of GAD. No new safety concerns were identified. PMID:25891440
Detecting disease-predisposing variants: the haplotype method.
Valdes, A M; Thomson, G
1997-01-01
For many HLA-associated diseases, multiple alleles (and, in some cases, multiple loci) have been suggested as the causative agents. The haplotype method for identifying disease-predisposing amino acids in a genetic region is a stratification analysis. We show that, for each haplotype combination containing all the amino acid sites involved in the disease process, the relative frequencies of amino acid variants at sites not involved in disease but in linkage disequilibrium with the disease-predisposing sites are expected to be the same in patients and controls. The haplotype method is robust to mode of inheritance and penetrance of the disease and can be used to determine unequivocally whether all amino acid sites involved in the disease have not been identified. Using a resampling technique, we developed a statistical test that takes account of the nonindependence of the sites sampled. Further, when multiple sites in the genetic region are involved in disease, the test statistic gives a closer fit to the null expectation when some, compared with none, of the true predisposing factors are included in the haplotype analysis. Although the haplotype method cannot distinguish between very highly correlated sites in one population, ethnic comparisons may help identify the true predisposing factors. The haplotype method was applied to insulin-dependent diabetes mellitus (IDDM) HLA class II DQA1-DQB1 data from Caucasian, African, and Japanese populations. Our results indicate that the combination DQA1#52 (Arg predisposing) DQB1#57 (Asp protective), which has been proposed as an important IDDM agent, does not include all the predisposing elements. With rheumatoid arthritis HLA class II DRB1 data, the results were consistent with the shared-epitope hypothesis. PMID:9042931
Theodosiou, Theodosios; Efstathiou, Georgios; Papanikolaou, Nikolas; Kyrpides, Nikos C; Bagos, Pantelis G; Iliopoulos, Ioannis; Pavlopoulos, Georgios A
2017-07-14
Nowadays, due to the technological advances of high-throughput techniques, Systems Biology has seen tremendous growth in data generation. Network analysis, which examines a biological system at a higher level to better understand its topology and the relationships between its components, is therefore of great importance. Gene expression, signal transduction, protein/chemical interactions, and biomedical literature co-occurrences are a few examples captured in biological network representations, where nodes represent bioentities and edges represent the connections between them. Today, many tools for network visualization and analysis are available. Nevertheless, most of them are standalone applications that often (i) burden users with long computation times, depending on the network's size, and (ii) focus on handling, editing, and exploring a network interactively. While such functionality is of great importance, limited effort has been made towards comparing the topological analyses of multiple networks. Network Analysis Provider (NAP) is a comprehensive web tool to automate network profiling and intra-/inter-network topology comparison. It is designed to bridge the gap between network analysis, statistics, graph theory, and, partially, visualization in a user-friendly way. It is freely available and aims to become a very appealing tool for the broader community. It hosts a wide range of topological analysis methods, such as node and edge rankings. A few of its powerful characteristics are its ability to enable easy profile comparisons across multiple networks, to find their intersection, and to provide users with simplified, high-quality plots of any of the offered topological characteristics against any other within the same network. It is written in R and Shiny, is based on the igraph library, and is able to handle medium-scale weighted/unweighted, directed/undirected, and bipartite graphs.
NAP is available at http://bioinformatics.med.uoc.gr/NAP .
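NAP itself is written in R and Shiny on top of igraph, but the kind of per-network topological profile it compares can be sketched in a few lines of standard-library Python. The two networks below are hypothetical:

```python
from collections import defaultdict

def topology_profile(edges):
    """Tiny topological profile of an undirected graph: node count,
    edge count, and mean degree (a NAP-style per-network summary)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n, m = len(adj), len(edges)
    return {"nodes": n, "edges": m, "mean_degree": 2 * m / n if n else 0.0}

net_a = [("a", "b"), ("b", "c"), ("c", "a")]  # hypothetical network 1
net_b = [("x", "y"), ("y", "z")]              # hypothetical network 2
profiles = [topology_profile(net) for net in (net_a, net_b)]
```

Computing the same summary statistics for every network is what makes side-by-side profile comparison (and plotting one characteristic against another) straightforward.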
Jelínek, Tomáš; Maisnar, Vladimír; Pour, Luděk; Špička, Ivan; Minařík, Jiří; Gregora, Evžen; Kessler, Petr; Sýkora, Michal; Fraňková, Hana; Adamová, Dagmar; Wróbel, Marek; Mikula, Peter; Jarkovský, Jiří; Diels, Joris; Gatopoulou, Xenia; Veselá, Šárka; Besson, Hervé; Brožová, Lucie; Ito, Tetsuro; Hájek, Roman
2018-05-01
We conducted an adjusted comparison of progression-free survival (PFS) and overall survival (OS) for daratumumab monotherapy versus standard of care, as observed in a real-world historical cohort of heavily pretreated multiple myeloma patients from the Czech Republic. Using longitudinal chart data from the Registry of Monoclonal Gammopathies (RMG) of the Czech Myeloma Group, patient-level data from the RMG were pooled with the pivotal daratumumab monotherapy studies (GEN501 and SIRIUS; 16 mg/kg). From the RMG database, we identified 972 treatment lines in 463 patients previously treated with both a proteasome inhibitor and an immunomodulatory drug. Treatment initiation dates for RMG patients were between March 2006 and March 2015. The most frequently used treatment regimens were lenalidomide-based regimens (33.4%), chemotherapy (18.1%), bortezomib-based regimens (13.6%), thalidomide-based regimens (8.0%), and bortezomib plus thalidomide (5.3%). Few patients were treated with carfilzomib-based regimens (2.5%) or pomalidomide-based regimens (2.4%). Median observed PFS for daratumumab and the RMG cohort was 4.0 and 5.8 months (unadjusted hazard ratio [HR], 1.14; 95% confidence interval [CI], 0.94-1.39), respectively, and unadjusted median OS was 20.1 and 11.9 months (unadjusted HR, 0.61; 95% CI, 0.48-0.78), respectively. Statistical adjustments for differences in baseline characteristics were made using patient-level data. The adjusted HRs (95% CI) for PFS and OS for daratumumab versus the RMG cohort were 0.79 (0.56-1.12; p = .192) and 0.33 (0.21-0.52; p < .001), respectively. Adjusted comparisons between trial data and historical cohorts can provide useful insights to clinicians and reimbursement decision makers on relative treatment efficacies in the absence of head-to-head comparison studies for daratumumab monotherapy.
Leonardi, Michael J; McGory, Marcia L; Ko, Clifford Y
2007-09-01
To explore hospital comparison Web sites for general surgery based on: (1) a systematic Internet search, (2) Web site quality evaluation, and (3) exploration of possible areas of improvement. A systematic Internet search was performed to identify hospital quality comparison Web sites in September 2006. Publicly available Web sites were rated on accessibility, data/statistical transparency, appropriateness, and timeliness. A sample search was performed to determine ranking consistency. Six national hospital comparison Web sites were identified: 1 government (Hospital Compare [Centers for Medicare and Medicaid Services]), 2 nonprofit (Quality Check [Joint Commission on Accreditation of Healthcare Organizations] and Hospital Quality and Safety Survey Results [Leapfrog Group]), and 3 proprietary sites (names withheld). For accessibility and data transparency, the government and nonprofit Web sites were best. For appropriateness, the proprietary Web sites were best, comparing multiple surgical procedures using a combination of process, structure, and outcome measures. However, none of these sites explicitly defined terms such as complications. Two proprietary sites allowed patients to choose ranking criteria. Most data on these sites were 2 years old or older. A sample search of 3 surgical procedures at 4 hospitals demonstrated significant inconsistencies. Patients undergoing surgery are increasingly using the Internet to compare hospital quality. However, a review of available hospital comparison Web sites shows suboptimal measures of quality and inconsistent results. This may be partially because of a lack of complete and timely data. Surgeons should be involved with quality comparison Web sites to ensure appropriate methods and criteria.
An analysis of 1986 drug procurement practices in hospitals within the United States.
Smolarek, R T; Powell, M F; Solomon, D K; Boike, S C
1989-09-01
The purpose of this study was to statistically answer a set of predefined objectives concerning pharmaceutical procurement. The key indicators were assumed to be cost per patient day and turnover rate. Of the 5,911 surveys mailed, 709 surveys were returned for a 12% response rate. The following statements were based on attempts to answer the six predetermined objectives. Pharmaceutical purchasing is controlled by pharmacy departments to the extent that comparisons to pharmaceutical purchasing by materials management departments were not possible. Prime vendor purchasing is the procurement method of choice. Competitive bidding through a group process is so popular that a valid comparison to nongroup bidding could not be accomplished with the results of this survey. Certain variables of group purchasing, such as group age, contract adherence, and volume commitment, do not appear to be correlated with purchasing outcomes in this study. When comparing government to private hospitals, the private sector seems to have an advantage in managing turnover rates. Cost per patient day results were less conclusive. When single and multiple hospital systems were compared for purchasing outcomes, the results were not fully conclusive, although multiple hospital systems had a significantly higher turnover rate. Finally, a comparison based on the use, or lack of use, of prime vendor arrangements demonstrated interesting results. The duration of contract did not significantly affect the purchasing outcomes. Other hospital variables, such as size, type, ownership, and organization, demonstrated notable trends. Examining hospitals based on case mix and mission appears to be most important. Also, the ability to relate purchasing outcomes to formulary management strategies needs further study before conclusive statements can be adopted.
NASA Astrophysics Data System (ADS)
Shiroishi, Mark S.; Gupta, Vikash; Bigjahan, Bavrina; Cen, Steven Y.; Rashid, Faisal; Hwang, Darryl H.; Lerner, Alexander; Boyko, Orest B.; Liu, Chia-Shang Jason; Law, Meng; Thompson, Paul M.; Jahanshad, Neda
2017-11-01
Background: Increases in cancer survival have made understanding the basis of cancer-related cognitive impairment (CRCI) more important. CRCI neuroimaging studies have traditionally used dedicated research brain MRIs in breast cancer survivors with small sample sizes; little is known about other non-CNS cancers. However, there is a wealth of unused data from clinically-indicated MRIs that could be used to study CRCI. Objective: Evaluate brain cortical structural differences in those with non-CNS cancers using clinically-indicated MRIs. Design: Cross-sectional. Patients: Adult non-CNS cancer and non-cancer control (C) patients who underwent clinically-indicated MRIs. Methods: Brain cortical surface area and thickness were measured using 3D T1-weighted images. An age-adjusted linear regression model was used, and the Benjamini and Hochberg false discovery rate (FDR) procedure corrected for multiple comparisons. Group comparisons were: cancer cases with chemotherapy (Ch+), cancer cases without chemotherapy (Ch-), and a subgroup of lung cancer (LCa) cases with and without chemotherapy vs. C. Results: Sixty-four subjects were analyzed: 22 Ch+, 23 Ch-, and 19 C patients. Subgroup analysis of 16 LCa cases was also performed. Statistically significant decreases in either cortical surface area or thickness were found in multiple ROIs, primarily within the frontal and temporal lobes, for all comparisons. Limitations: Several limitations were apparent, including a small sample size that precluded adjustment for other covariates. Conclusions: Our preliminary results suggest that various types of non-CNS cancers, both with and without chemotherapy, may result in brain structural abnormalities. Also, there is a wealth of untapped clinical MRIs that could be used for future CRCI studies.
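The Benjamini-Hochberg step-up rule used above for FDR control is simple enough to sketch directly (a minimal pure-Python version for illustration; a real analysis would use an established implementation such as the one in statsmodels):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR control: reject the hypotheses with
    the k_max smallest p-values, where k_max is the largest k such that
    p_(k) <= (k / m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * q:
            k_max = rank
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected
```

Note that because the rule scans for the largest qualifying rank, a hypothesis can be rejected even when its own p-value exceeds its per-rank threshold, provided a larger p-value below it qualifies.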
Kim, Jae-Hun; Ha, Tae Lin; Im, Geun Ho; Yang, Jehoon; Seo, Sang Won; Chung, Julius Juhyun; Chae, Sun Young; Lee, In Su; Lee, Jung Hee
2014-03-05
In this study, we have shown the potential of a voxel-based analysis for imaging amyloid plaques and its utility in monitoring therapeutic response in Alzheimer's disease (AD) mice using manganese oxide nanoparticles conjugated with an antibody of the Aβ1-40 peptide (HMON-abAβ40). T1-weighted MR brain images of a drug-treated AD group (n=7), a nontreated AD group (n=7), and a wild-type group (n=7) were acquired using a 7.0 T MRI system before (D-1), 24 h after (D+1), and 72 h after (D+3) injection of the HMON-abAβ40 contrast agent. For the treatment of AD mice, DAPT was injected intramuscularly into AD transgenic mice (50 mg/kg of body weight). For voxel-based analysis, the skull-stripped mouse brain images were spatially normalized, and voxel intensities were corrected to reduce intensity differences across scans from different mice. Statistical analysis showed higher normalized MR signal intensity in the frontal cortex and hippocampus of AD mice over wild-type mice on D+1 and D+3 (P<0.01, uncorrected for multiple comparisons). After the treatment of AD mice, the normalized MR signal intensity in the frontal cortex and hippocampus decreased significantly in comparison with nontreated AD mice on D+1 and D+3 (P<0.01, uncorrected for multiple comparisons). These results were confirmed by histological analysis using thioflavin staining. This unique strategy allows us to detect brain regions that are subject to amyloid plaque deposition and has the potential for human applications in monitoring therapeutic response for drug development in AD.
Signal transduction molecules in gliomas of all grades.
Ermoian, Ralph P; Kaprealian, Tania; Lamborn, Kathleen R; Yang, Xiaodong; Jelluma, Nannette; Arvold, Nils D; Zeidman, Ruth; Berger, Mitchel S; Stokoe, David; Haas-Kogan, Daphne A
2009-01-01
To interrogate grade II, III, and IV gliomas and characterize the critical effectors within the PI3-kinase pathway upstream and downstream of mTOR. Experimental design: Tissues from 87 patients who were treated at UCSF between 1990 and 2004 were analyzed. Twenty-eight grade II, 17 grade III, and 26 grade IV gliomas, and 16 non-tumor brain specimens were analyzed. Protein levels were assessed by immunoblots; RNA levels were determined by polymerase chain reaction amplification. To address the multiple comparisons, an overall analysis was first done comparing the four groups using Spearman's correlation coefficient. Only if this analysis was statistically significant were individual pairwise comparisons done. Multiple comparison analyses revealed a significant correlation with grade for all variables examined, except phosphorylated-S6. Expression of phosphorylated-4E-BP1, phosphorylated-PKB/Akt, PTEN, TSC1, and TSC2 correlated with grade (P < 0.01 for all). We extended our analyses to ask whether decreases in TSC protein levels were due to changes in mRNA levels or to post-transcriptional alterations. We found significantly lower levels of TSC1 and TSC2 mRNA in GBMs than in grade II gliomas or non-tumor brain (P < 0.01). Expression levels of critical signaling molecules upstream and downstream of mTOR differ between non-tumor brain and gliomas of any grade. The single variable whose expression did not differ between non-tumor brain and gliomas was phosphorylated-S6, suggesting that other protein kinases, in addition to mTOR, contribute significantly to S6 phosphorylation. mTOR provides a rational therapeutic target in gliomas of all grades, and clinical benefit may emerge as mTOR inhibitors are combined with additional agents.
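The gatekeeping screen above rests on Spearman's rank correlation between expression and grade. A minimal version of the coefficient (assuming no tied values for simplicity; a production implementation would assign average ranks to ties) might look like:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (illustrative sketch only)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly monotone increasing relationship yields +1 and a monotone decreasing one yields -1, regardless of the raw scale of either variable.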
Paddock, Michael T; Bailitz, John; Horowitz, Russ; Khishfe, Basem; Cosby, Karen; Sergel, Michelle J
2015-03-01
Pre-hospital focused assessment with sonography in trauma (FAST) has been effectively used to improve patient care in multiple mass casualty events throughout the world. Although requisite FAST knowledge may now be learned remotely by disaster response team members, traditional live instructor and model hands-on FAST skills training remains logistically challenging. The objective of this pilot study was to compare the effectiveness of a novel portable ultrasound (US) simulator with traditional FAST skills training for a deployed mixed-provider disaster response team. We randomized participants into one of three training groups stratified by provider role: Group A, traditional skills training; Group B, US simulator skills training; and Group C, traditional skills training plus US simulator skills training. After skills training, we measured participants' FAST image acquisition and interpretation skills using a standardized direct observation tool (SDOT) with healthy models and review of FAST patient images. Pre- and post-course US and FAST knowledge were also assessed using a previously validated multiple-choice evaluation. We used the ANOVA procedure to determine the statistical significance of differences between the means of each group's skills scores. Paired sample t-tests were used to determine the statistical significance of pre- and post-course mean knowledge scores within groups. We enrolled 36 participants, 12 randomized to each training group. Randomization resulted in similar distribution of participants between training groups with respect to provider role, age, sex, and prior US training. For the FAST SDOT image acquisition and interpretation mean skills scores, there was no statistically significant difference between training groups.
For US and FAST mean knowledge scores, there was a statistically significant improvement between pre- and post-course scores within each group, but again there was not a statistically significant difference between training groups. This pilot study of a deployed mixed-provider disaster response team suggests that a novel portable US simulator may provide equivalent skills training in comparison to traditional live instructor and model training. Further studies with a larger sample size and other measures of short- and long-term clinical performance are warranted.
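The between-group comparison of mean skills scores uses a one-way ANOVA; the F statistic at the core of that procedure can be computed directly from the group scores (an illustrative sketch, not the study's code):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

An F near zero (identical group means) is consistent with the study's finding of no significant difference between training groups; the p-value then comes from the F distribution with (k-1, N-k) degrees of freedom.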
Consensus building for interlaboratory studies, key comparisons, and meta-analysis
NASA Astrophysics Data System (ADS)
Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza
2017-06-01
Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. 
Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian procedure; and the Linear Pool. These three procedures have been implemented and made widely accessible in a Web-based application (NIST Consensus Builder). We illustrate principles, statistical models, and data reduction procedures in four examples: (i) the measurement of the Newtonian constant of gravitation; (ii) the measurement of the half-lives of radioactive isotopes of caesium and strontium; (iii) the comparison of two alternative treatments for carotid artery stenosis; and (iv) a key comparison where the measurand was the calibration factor of a radio-frequency power sensor.
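Of the three procedures, DerSimonian-Laird is compact enough to sketch: the between-laboratory "dark uncertainty" variance τ² is estimated from Cochran's Q and then used to re-weight the measured values (illustrative only; the NIST Consensus Builder is the reference implementation):

```python
def dersimonian_laird(y, v):
    """Random-effects consensus value for measured values y with stated
    variances v. Returns (consensus, standard error, tau-squared)."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw   # fixed-effect mean
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)  # 'dark uncertainty' variance
    w_star = [1.0 / (vi + tau2) for vi in v]  # re-weight with tau2 added
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return mu, se, tau2
```

When the measured values agree within their stated uncertainties, Q falls below k-1, τ² is truncated to zero, and the estimate reduces to the ordinary inverse-variance weighted mean.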
Pounds, Stan; Cheng, Cheng; Cao, Xueyuan; Crews, Kristine R; Plunkett, William; Gandhi, Varsha; Rubnitz, Jeffrey; Ribeiro, Raul C; Downing, James R; Lamba, Jatinder
2009-08-15
In some applications, prior biological knowledge can be used to define a specific pattern of association of multiple endpoint variables with a genomic variable that is biologically most interesting. However, to our knowledge, there is no statistical procedure designed to detect specific patterns of association with multiple endpoint variables. Projection onto the most interesting statistical evidence (PROMISE) is proposed as a general procedure to identify genomic variables that exhibit a specific biologically interesting pattern of association with multiple endpoint variables. Biological knowledge of the endpoint variables is used to define a vector that represents the biologically most interesting values for statistics that characterize the associations of the endpoint variables with a genomic variable. A test statistic is defined as the dot-product of the vector of the observed association statistics and the vector of the most interesting values of the association statistics. By definition, this test statistic is proportional to the length of the projection of the observed vector of correlations onto the vector of most interesting associations. Statistical significance is determined via permutation. In simulation studies and an example application, PROMISE shows greater statistical power to identify genes with the interesting pattern of associations than classical multivariate procedures, individual endpoint analyses or listing genes that have the pattern of interest and are significant in more than one individual endpoint analysis. Documented R routines are freely available from www.stjuderesearch.org/depts/biostats and will soon be available as a Bioconductor package from www.bioconductor.org.
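The PROMISE statistic is a dot product between the observed association statistics and the pre-specified vector of biologically most interesting values, with significance assessed by permutation. A sketch using Pearson correlation as the per-endpoint association statistic (the procedure admits other choices; this is not the authors' exact code) is:

```python
import random


def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den


def promise(genomic, endpoints, pattern, n_perm=1000, seed=1):
    """Dot product of observed associations with the 'interesting'
    pattern; p-value by permuting the genomic variable."""
    def stat(g):
        return sum(p * pearson(g, e) for p, e in zip(pattern, endpoints))

    observed = stat(genomic)
    rng = random.Random(seed)
    g = list(genomic)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(g)
        if stat(g) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)
```

With a pattern such as [1, -1], a gene scores highly only when it is positively associated with the first endpoint and negatively with the second, which is exactly the "specific pattern of association" the abstract describes.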
Li, Zhiguang; Kwekel, Joshua C; Chen, Tao
2012-01-01
Functional comparison across microarray platforms is used to assess the comparability or similarity of the biological relevance associated with the gene expression data generated by multiple microarray platforms. Comparisons at the functional level are very important considering that the ultimate purpose of microarray technology is to determine the biological meaning behind the gene expression changes under a specific condition, not just to generate a list of genes. Herein, we present a method named percentage of overlapping functions (POF) and illustrate how it is used to perform the functional comparison of microarray data generated across multiple platforms. This method facilitates the determination of functional differences or similarities in microarray data generated from multiple array platforms across all the functions that are presented on these platforms. This method can also be used to compare the functional differences or similarities between experiments, projects, or laboratories.
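A minimal reading of the POF idea, measuring the overlap of the function sets detected on two platforms, could look like the following (the exact denominator, here the union of the two sets rather than a reference platform's set, is an assumption of this sketch):

```python
def percentage_overlapping_functions(funcs_a, funcs_b):
    """Percent of the union of detected functions that both
    platforms share (one plausible POF definition)."""
    a, b = set(funcs_a), set(funcs_b)
    union = a | b
    if not union:
        return 0.0
    return 100.0 * len(a & b) / len(union)
```

Computed over all functions represented on the platforms, such a percentage gives a single comparability score per platform pair, experiment, or laboratory.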
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature-stringent statistical standards, use of large samples, and replication of findings-are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. 
Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
Reconstruction of three-dimensional porous media using a single thin section
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2012-06-01
The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or their measurement is difficult. An important example of such stochastic systems is porous media for which the reconstruction technique must accurately represent their morphology—the connectivity and geometry—as well as their flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. 
To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-points connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
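One ingredient of the reconstruction above, the Shannon entropy of the digitized thin-section image, has a direct implementation; for a two-phase (pore/solid) image it measures how evenly the phases are represented:

```python
import math


def shannon_entropy(image):
    """Shannon entropy (bits) of the phase labels in a digitized image,
    used as a scalar heterogeneity measure of the sample."""
    flat = [v for row in image for v in row]
    n = len(flat)
    counts = {}
    for v in flat:
        counts[v] = counts.get(v, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A 50/50 two-phase image attains the maximum of 1 bit, while a homogeneous image has zero entropy; intermediate values summarize how strongly one phase dominates.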
Using LabView for real-time monitoring and tracking of multiple biological objects
NASA Astrophysics Data System (ADS)
Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika
2017-04-01
Real-time monitoring and tracking of the movement dynamics of various biological objects is an important and widely researched topic today. Features of the objects, the conditions of their visualization, and model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW project trackers are considered in this article. The projects allow templates for training and retraining the system to be changed quickly, and they adapt to the speed of objects and to the statistical characteristics of noise in images. New functions for comparing images or their features and descriptors, as well as pre-processing methods, are discussed. Experiments carried out to test the trackers on real video files are presented and analyzed.
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online.
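The authors' methods are in the AssayCorrector package; as a rough illustration of what removing multiplicative spatial bias means, one can alternately divide plate rows and columns by their medians, a multiplicative analogue of median polish (a hypothetical sketch, not the published algorithm):

```python
import statistics


def remove_multiplicative_bias(plate, n_iter=10):
    """Alternately divide rows and columns of a plate by their medians,
    so a plate whose readouts factor as row_bias * col_bias is driven
    toward a flat surface of ones."""
    data = [row[:] for row in plate]
    for _ in range(n_iter):
        for row in data:
            m = statistics.median(row)
            if m:
                for j in range(len(row)):
                    row[j] /= m
        for j in range(len(data[0])):
            m = statistics.median(row[j] for row in data)
            if m:
                for row in data:
                    row[j] /= m
    return data
```

On a plate constructed as an exact outer product of row and column bias factors, the procedure converges to all ones, leaving only the (multiplicative) deviations from the spatial trend in real data.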
Multi-pulse multi-delay (MPMD) multiple access modulation for UWB
Dowla, Farid U.; Nekoogar, Faranak
2007-03-20
A new modulation scheme in UWB communications is introduced. This modulation technique utilizes multiple orthogonal transmitted-reference pulses for UWB channelization. The proposed UWB receiver samples the second order statistical function at both zero and non-zero lags and matches the samples to stored second order statistical functions, thus sampling and matching the shape of second order statistical functions rather than just the shape of the received pulses.
Multiple-solution problems in a statistics classroom: an example
NASA Astrophysics Data System (ADS)
Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing
2017-11-01
The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass distribution for the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics, and the individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
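One of the four approaches, the probability generating function, corresponds computationally to polynomial convolution. A sketch (using the Sicherman dice as an illustrative non-traditional pair, since the article's specific dice set is not reproduced here):

```python
def sum_pmf(dice):
    """Exact PMF of the sum of face values: convolve the dice one at a
    time, i.e. multiply their probability generating functions."""
    pmf = {0: 1.0}
    for faces in dice:
        p = 1.0 / len(faces)
        new = {}
        for total, prob in pmf.items():
            for face in faces:
                new[total + face] = new.get(total + face, 0.0) + prob * p
        pmf = new
    return pmf


standard = [1, 2, 3, 4, 5, 6]
sicherman = [[1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8]]
```

The Sicherman pair is a classic multiple-solution hook: despite its non-traditional faces, it reproduces the two-standard-dice sum distribution exactly, which the convolution confirms numerically.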
Voillet, Valentin; Besse, Philippe; Liaubet, Laurence; San Cristobal, Magali; González, Ignacio
2016-10-03
In omics data integration studies, it is common, for a variety of reasons, for some individuals to not be present in all data tables. Missing row values are challenging to deal with because most statistical methods cannot be directly applied to incomplete datasets. To overcome this issue, we propose a multiple imputation (MI) approach in a multivariate framework. In this study, we focus on multiple factor analysis (MFA) as a tool to compare and integrate multiple layers of information. MI involves filling the missing rows with plausible values, resulting in M completed datasets. MFA is then applied to each completed dataset to produce M different configurations (the matrices of coordinates of individuals). Finally, the M configurations are combined to yield a single consensus solution. We assessed the performance of our method, named MI-MFA, on two real omics datasets. Incomplete artificial datasets with different patterns of missingness were created from these data. The MI-MFA results were compared with two other approaches, i.e., regularized iterative MFA (RI-MFA) and mean variable imputation (MVI-MFA). For each configuration resulting from these three strategies, the suitability of the solution was determined against the true MFA configuration obtained from the original data, and a comprehensive graphical comparison showing how the MI-, RI-, or MVI-MFA configurations diverge from the true configuration was produced. Two approaches, i.e., confidence ellipses and convex hulls, to visualize and assess the uncertainty due to missing values were also described. We showed how the areas of ellipses and convex hulls increased with the number of missing individuals. Free, easy-to-use code is provided to implement the MI-MFA method in the R statistical environment. We believe that MI-MFA provides a useful and attractive method for estimating the coordinates of individuals on the first MFA components despite missing rows.
MI-MFA configurations were close to the true configuration even when many individuals were missing in several data tables. This method takes into account the uncertainty of MI-MFA configurations induced by the missing rows, thereby allowing the reliability of the results to be evaluated.
A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.
Elliott, Alan C; Hynan, Linda S
2011-04-01
The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests.
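The omnibus statistic the macro builds on is straightforward to compute. A pure-Python version of the Kruskal-Wallis H (using average ranks for tied values, but omitting the tie-correction factor a full implementation such as SAS NPAR1WAY applies) is:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H: rank all observations jointly, then compare
    per-group rank sums. Ties get average ranks; no tie correction."""
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[data[k][1]] += avg_rank
        i = j
    return (12.0 / (n * (n + 1))
            * sum(rs * rs / len(g) for rs, g in zip(rank_sums, groups))
            - 3.0 * (n + 1))
```

Under the null hypothesis, H is approximately chi-squared with (number of groups - 1) degrees of freedom; a significant H is what triggers the post hoc pairwise comparisons the macro provides.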
The limits of protein sequence comparison?
Pearson, William R; Sierk, Michael L
2010-01-01
Modern sequence alignment algorithms are used routinely to identify homologous proteins, proteins that share a common ancestor. Homologous proteins always share similar structures and often have similar functions. Over the past 20 years, sequence comparison has become both more sensitive, largely because of profile-based methods, and more reliable, because of more accurate statistical estimates. As sequence and structure databases become larger, and comparison methods become more powerful, reliable statistical estimates will become even more important for distinguishing similarities that are due to homology from those that are due to analogy (convergence). The newest sequence alignment methods are more sensitive than older methods, but more accurate statistical estimates are needed for their full power to be realized. PMID:15919194
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches to classify data of multiple types is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
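Consensus theory spans several pooling rules; the simplest, a reliability-weighted linear opinion pool over the sources' class-probability vectors, can be sketched as follows (illustrative only; the research investigates richer reliability measures and combination schemes):

```python
def linear_opinion_pool(source_probs, reliabilities):
    """Combine per-source class-probability vectors into a consensus
    vector, weighting each source by its reliability."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    n_classes = len(source_probs[0])
    combined = [sum(w * probs[c] for w, probs in zip(weights, source_probs))
                for c in range(n_classes)]
    s = sum(combined)
    return [c / s for c in combined]  # renormalize to a distribution
```

A less reliable source (e.g. a noisier sensor) simply receives a smaller weight, which is exactly the mechanism the abstract notes is missing from most conventional statistical classifiers.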
RooStatsCms: A tool for analysis modelling, combination and statistical studies
NASA Astrophysics Data System (ADS)
Piparo, D.; Schott, G.; Quast, G.
2010-04-01
RooStatsCms is an object oriented statistical framework based on the RooFit technology. Its scope is to allow the modelling, statistical analysis and combination of multiple search channels for new phenomena in High Energy Physics. It provides a variety of methods described in literature implemented as classes, whose design is oriented to the execution of multiple CPU intensive jobs on batch systems or on the Grid.
Statistical Methods in Integrative Genomics
Richardson, Sylvia; Tseng, George C.; Sun, Wei
2016-01-01
Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531
Kobayashi, Katsuhiro; Jacobs, Julia; Gotman, Jean
2013-01-01
Objective: A novel type of statistical time-frequency analysis was developed to elucidate changes of high-frequency EEG activity associated with epileptic spikes. Methods: The method uses the Gabor transform and detects changes of power relative to background activity using t-statistics that are controlled by the false discovery rate (FDR) to correct the type I error of multiple testing. The analysis was applied to EEGs recorded at 2000 Hz from three patients with mesial temporal lobe epilepsy. Results: Spike-related increases of high-frequency oscillations (HFOs) were clearly shown in the FDR-controlled t-spectra; the increase was most dramatic in spikes recorded from the hippocampus when the hippocampus was the seizure onset zone (SOZ). Depression of fast activity was observed immediately after the spikes, most consistently in discharges from the hippocampal SOZ. It corresponded to the slow-wave part of spike-and-slow-wave complexes, but was noted even in spikes without apparent slow waves. In one patient, a gradual increase of power above 200 Hz preceded spikes. Conclusions: FDR-controlled t-spectra clearly detected the spike-related changes of HFOs that were unclear in standard power spectra. Significance: We developed a promising tool to study the HFOs that may be closely linked to the pathophysiology of epileptogenesis. PMID:19394892
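FDR control over many simultaneous t-tests is commonly implemented with the Benjamini-Hochberg step-up procedure; the abstract does not say which FDR procedure was used, so the sketch below, with made-up p-values, is illustrative only.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Mark which p-values are rejected at FDR level q
    (Benjamini-Hochberg step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * q ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    # ... and reject the k smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject

# Hypothetical per-test p-values (e.g. one t-test per time-frequency bin):
flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60])
# flags -> [True, True, False, False, False]
```

Note that 0.039 and 0.041 would pass an unadjusted 0.05 criterion but fail the step-up thresholds, which is precisely the type I error control the abstract describes.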
P.V., Ravichandra; Vemisetty, Harikumar; K., Deepthi; Reddy S, Jayaprada; D., Ramkiran; Krishna M., Jaya Nagendra; Malathi, Gita
2014-01-01
Aim: The purpose of this investigation was to evaluate the marginal adaptation of three root-end filling materials: glass ionomer cement, mineral trioxide aggregate and Biodentine™. Methodology: Thirty human single-rooted teeth were resected 3 mm from the apex. Root-end cavities were then prepared using an ultrasonic tip and filled with one of the following materials: glass ionomer cement (GIC), mineral trioxide aggregate (MTA) or the bioactive cement Biodentine™. The apical portions of the roots were then sectioned to obtain three 1 mm thick transversal sections. Confocal laser scanning microscopy (CLSM) was used to determine the area of gaps and the adaptation of the root-end filling materials to the dentin. A post hoc multiple comparison test was used for statistical data analysis. Results: Statistical analysis showed the lowest marginal gaps (11143.42±967.75 μm²) and good marginal adaptation with Biodentine™, followed by MTA (22300.97±3068.88 μm²), and the highest marginal gaps with GIC (33388.17±12155.90 μm²); the differences were statistically significant (p<0.0001). Conclusion: The new root-end filling material Biodentine™ showed better marginal adaptation than commonly used root-end filling materials. PMID:24783148
Variance estimates and confidence intervals for the Kappa measure of classification accuracy
M. A. Kalkhan; R. M. Reich; R. L. Czaplewski
1997-01-01
The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...
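For reference, the Kappa statistic itself is computed from the accuracy-assessment error (confusion) matrix as kappa = (p_o - p_e)/(1 - p_e): observed agreement minus chance agreement, over the maximum possible non-chance agreement. A minimal sketch with a toy two-class matrix:

```python
def cohens_kappa(confusion):
    """kappa = (p_o - p_e) / (1 - p_e) for a square confusion matrix
    (rows: reference classes, columns: classified classes)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(k)) / n           # observed agreement
    p_e = sum(                                                 # chance agreement
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(k)
    ) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Toy two-class accuracy assessment: 45 + 40 correct out of 100 sample points.
m = [[45, 5],
     [10, 40]]
kappa = cohens_kappa(m)  # p_o = 0.85, p_e = 0.50 -> kappa = 0.70
```

Comparing two classifiers or sampling designs then reduces to comparing their kappa estimates, which is why variance estimates and confidence intervals for kappa matter.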
Pounds, Stan; Cheng, Cheng; Cao, Xueyuan; Crews, Kristine R.; Plunkett, William; Gandhi, Varsha; Rubnitz, Jeffrey; Ribeiro, Raul C.; Downing, James R.; Lamba, Jatinder
2009-01-01
Motivation: In some applications, prior biological knowledge can be used to define a specific pattern of association of multiple endpoint variables with a genomic variable that is biologically most interesting. However, to our knowledge, there is no statistical procedure designed to detect specific patterns of association with multiple endpoint variables. Results: Projection onto the most interesting statistical evidence (PROMISE) is proposed as a general procedure to identify genomic variables that exhibit a specific biologically interesting pattern of association with multiple endpoint variables. Biological knowledge of the endpoint variables is used to define a vector that represents the biologically most interesting values for statistics that characterize the associations of the endpoint variables with a genomic variable. A test statistic is defined as the dot-product of the vector of the observed association statistics and the vector of the most interesting values of the association statistics. By definition, this test statistic is proportional to the length of the projection of the observed vector of correlations onto the vector of most interesting associations. Statistical significance is determined via permutation. In simulation studies and an example application, PROMISE shows greater statistical power to identify genes with the interesting pattern of associations than classical multivariate procedures, individual endpoint analyses or listing genes that have the pattern of interest and are significant in more than one individual endpoint analysis. Availability: Documented R routines are freely available from www.stjuderesearch.org/depts/biostats and will soon be available as a Bioconductor package from www.bioconductor.org. Contact: stanley.pounds@stjude.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19528086
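The projection idea can be sketched roughly as follows: per-endpoint association statistics (plain Pearson correlations here, an assumption; the paper's choice of association statistic may differ) are dotted with the vector of most interesting values, and significance comes from permuting the genomic variable across subjects. This is an illustrative sketch, not the PROMISE implementation.

```python
import random

def promise_stat(assoc, interest):
    """Dot-product projection of observed association statistics onto
    the biologically 'most interesting' pattern vector."""
    return sum(a * b for a, b in zip(assoc, interest))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def promise_pvalue(genomic, endpoints, interest, n_perm=200, seed=0):
    """Permutation test: shuffle subjects of the genomic variable and
    recompute the projection statistic."""
    rng = random.Random(seed)
    obs = promise_stat([pearson(genomic, e) for e in endpoints], interest)
    g, hits = list(genomic), 0
    for _ in range(n_perm):
        rng.shuffle(g)
        if promise_stat([pearson(g, e) for e in endpoints], interest) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# A genomic variable perfectly positively associated with endpoint 1 and
# negatively with endpoint 2, matching the interest pattern (+1, -1):
genomic = list(range(20))
endpoints = [list(range(20)), [-x for x in range(20)]]
p = promise_pvalue(genomic, endpoints, interest=[1.0, -1.0])
```

A gene whose associations match the prespecified pattern gets a large projection and a small permutation p-value; a gene strongly associated in the "wrong" pattern does not, which is the power advantage the abstract claims over direction-agnostic multivariate tests.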
Association between personality traits and Escitalopram treatment efficacy in panic disorder.
Võhma, Ülle; Raag, Mait; Tõru, Innar; Aluoja, Anu; Maron, Eduard
2017-08-01
There is strong evidence to suggest that personality factors may interact with the development and clinical expression of panic disorder (PD). A greater understanding of these relationships may have important implications for clinical practice and for the search for reliable predictors of treatment outcome. The study aimed to examine the effect of escitalopram treatment on personality traits in PD patients, and to identify whether treatment outcome could be predicted by any personality trait. The study sample consisted of 110 outpatients with PD treated with 10-20 mg/day of escitalopram for 12 weeks. Personality traits were evaluated before and after the 12 weeks of medication using the Swedish universities Scales of Personality (SSP). Although almost all personality traits on the SSP improved after 12 weeks of medication in comparison with baseline scores, none of these changes reached statistical significance. Only higher impulsivity at baseline predicted non-remission after 12 weeks of treatment with escitalopram; however, this association did not withstand Bonferroni correction for multiple comparisons. All patients were treated in a naturalistic way using an open-label drug, so placebo responses cannot be excluded, and the sample size may not have been large enough to reveal statistically significant findings. Maladaptive personality disposition in patients with PD seems to have a trait character and shows little trend toward normalization after 12 weeks of treatment with the antidepressant, while the association between impulsivity and treatment response needs further investigation.
Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew
2017-09-01
Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
Pincus, Steven M; Schmidt, Peter J; Palladino-Negro, Paula; Rubinow, David R
2008-04-01
Enhanced statistical characterization of mood-rating data holds the potential to more precisely classify and sub-classify recurrent mood disorders like premenstrual dysphoric disorder (PMDD) and recurrent brief depressive disorder (RBD). We applied several complementary statistical methods to differentiate mood-rating dynamics among women with PMDD, women with RBD, and normal controls (NC). We compared three subgroups of women, NC (n=8), PMDD (n=15), and RBD (n=9), on the basis of daily self-ratings of sadness, with study lengths between 50 and 120 days. We analyzed mean levels; overall variability (SD); sequential irregularity, via approximate entropy (ApEn); and a quantification of the extent of brief and staccato dynamics, denoted 'Spikiness'. For each of SD, irregularity (ApEn), and Spikiness, we showed highly significant subgroup differences (ANOVA p < 0.001 for each statistic); additionally, many paired subgroup comparisons showed highly significant differences. In contrast, mean levels were indistinct among the subgroups. For SD, normal controls had much smaller levels than the other subgroups, with RBD intermediate. ApEn showed PMDD to be significantly more regular than the other subgroups. Spikiness showed NC and RBD data sets to be much more staccato than their PMDD counterparts, and appears to suitably characterize the defining feature of RBD dynamics. Compound criteria based on these statistical measures discriminated diagnostic subgroups with high sensitivity and specificity. Taken together, the statistical suite provides well-defined specifications of each subgroup. This can facilitate accurate diagnosis, and augment the prediction and evaluation of response to treatment. The statistical methodologies have broad and direct applicability to behavioral studies for many psychiatric disorders, and indeed to similar analyses of associated biological signals across multiple axes.
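ApEn, as defined by Pincus, compares the log-frequency of matching templates of length m and m+1. A compact sketch follows; the parameter choices m=2 and tolerance r=0.2·SD are common conventions, not values taken from this abstract.

```python
import math
import random

def approx_entropy(series, m=2, r=None):
    """Approximate entropy ApEn(m, r): near 0 for regular series,
    larger for irregular ones (self-matches included, as in Pincus)."""
    if r is None:
        # Conventional tolerance: 20% of the series' standard deviation.
        n = len(series)
        mean = sum(series) / n
        r = 0.2 * (sum((x - mean) ** 2 for x in series) / n) ** 0.5

    def phi(m):
        windows = [series[i:i + m] for i in range(len(series) - m + 1)]
        total = 0.0
        for w1 in windows:
            # Count windows within tolerance r in the Chebyshev (max) distance.
            matches = sum(
                1 for w2 in windows
                if max(abs(a - b) for a, b in zip(w1, w2)) <= r
            )
            total += math.log(matches / len(windows))
        return total / len(windows)

    return phi(m) - phi(m + 1)

regular = [0, 1] * 50                 # strictly alternating mood ratings
rng = random.Random(7)
irregular = [rng.random() for _ in range(100)]
```

A perfectly alternating series scores near zero (knowing two points nearly determines the third), while a noisy series scores higher, which is how ApEn separates the more regular PMDD ratings from the other subgroups.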
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
RAId_aPS: MS/MS Analysis with Multiple Scoring Functions and Spectrum-Specific Statistics
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2010-01-01
Statistically meaningful comparison/combination of peptide identification results from various search methods is impeded by the lack of a universal statistical standard. Providing an E-value calibration protocol, we demonstrated earlier the feasibility of translating either the score or heuristic E-value reported by any method into the textbook-defined E-value, which may serve as the universal statistical standard. This protocol, although robust, may lose spectrum-specific statistics and might require a new calibration when changes in experimental setup occur. To mitigate these issues, we developed a new MS/MS search tool, RAId_aPS, that is able to provide spectrum-specific E-values for additive scoring functions. Given a selection of scoring functions out of RAId score, K-score, Hyperscore and XCorr, RAId_aPS generates the corresponding score histograms of all possible peptides using dynamic programming. Using these score histograms to assign E-values enables a calibration-free protocol for accurate significance assignment for each scoring function. RAId_aPS features four different modes: (i) compute the total number of possible peptides for a given molecular mass range, (ii) generate the score histogram given an MS/MS spectrum and a scoring function, (iii) reassign E-values for a list of candidate peptides given an MS/MS spectrum and the scoring functions chosen, and (iv) perform database searches using selected scoring functions. In modes (iii) and (iv), RAId_aPS is also capable of combining results from different scoring functions using spectrum-specific statistics. The web link is http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/raid_aps/index.html. Relevant binaries for Linux, Windows, and Mac OS X are available from the same page. PMID:21103371
Dada, Ayokunle Christopher; Ahmad, Asmat; Usup, Gires; Heng, Lee Yook
2013-02-01
We report the first study on the occurrence of antibiotic-resistant enterococci in coastal bathing waters in Malaysia. One hundred and sixty-five enterococci isolates recovered from two popular recreational beaches in Malaysia were speciated and screened for resistance to a total of eight antibiotics. Prevalence of Enterococcus faecalis and Enterococcus faecium was highest at both beaches. The E. faecalis/E. faecium ratio was 0.384:1 and 0.375:1, respectively, for isolates from Port Dickson (PD) and Bagan Lalang (BL). Fisher's exact test showed that the association of the prevalence of E. faecalis and E. faecium with location was not statistically significant at the p < 0.05 level. A chi-square test revealed significant differences (χ² = 82.630, df = 20, p < 0.001) in the frequency of occurrence of enterococci isolates from the considered sites. Resistance was highest to nalidixic acid (94.84 %) and lowest to chloramphenicol (8.38 %). One-way ANOVA with the Tukey-Kramer multiple comparison test showed that resistance to ampicillin was higher in PD beach isolates than in BL isolates, and the difference was extremely statistically significant (p < 0.0001). The frequency of occurrence of multiple antibiotic resistance (MAR) isolates was higher for PD beach water (64.29 %) than for BL beach water (13.51 %), while MAR indices ranged between 0.198 and 0.48. The results suggest that samples from Port Dickson may contain MAR bacteria and that this could be due to high-risk faecal contamination from sewage discharge pipes that drain into the sea water.
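The MAR index reported here is conventionally the number of antibiotics an isolate resists divided by the number tested. A one-line sketch, with hypothetical example counts; the 0.2 threshold is a common rule of thumb for flagging high-risk contamination sources, not a value stated in the abstract.

```python
def mar_index(n_resistant, n_tested):
    """Multiple antibiotic resistance (MAR) index of a single isolate:
    antibiotics the isolate resists / antibiotics tested."""
    return n_resistant / n_tested

# Eight antibiotics were screened in this study; an isolate resistant to
# four of them has a MAR index of 0.5, above the conventional 0.2 flag.
idx = mar_index(4, 8)
high_risk = idx > 0.2
```

The study's observed range of 0.198-0.48 corresponds to isolates resisting roughly two to four of the eight antibiotics.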
Assessing Effects of Prenatal Alcohol Exposure Using Group-wise Sparse Representation of FMRI Data
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Zhao, Shijie; Zhang, Tuo; Hu, Xintao; Han, Junwei; Guo, Lei; Li, Zhihao; Coles, Claire; Hu, Xiaoping; Liu, Tianming
2015-01-01
Task-based fMRI activation mapping has been widely used in clinical neuroscience in order to assess different functional activity patterns in conditions such as prenatal alcohol exposure (PAE) affected brains and healthy controls. In this paper, we propose a novel, alternative approach of group-wise sparse representation of the fMRI data of multiple groups of subjects (healthy control, exposed non-dysmorphic PAE and exposed dysmorphic PAE) and assess the systematic functional activity differences among these three populations. Specifically, a common time series signal dictionary is learned from the aggregated fMRI signals of all three groups of subjects, and then the weight coefficient matrices (named statistical coefficient map (SCM)) associated with each common dictionary were statistically assessed for each group separately. Through inter-group comparisons based on the correspondence established by the common dictionary, our experimental results have demonstrated that the group-wise sparse coding strategy and the SCM can effectively reveal a collection of brain networks/regions that were affected by different levels of severity of PAE. PMID:26195294
Bartlett, Jonathan W; Keogh, Ruth H
2018-06-01
Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
Impact of ecological factors on concern and awareness about disability: a statistical analysis.
Walker, Gabriela
2014-11-01
The barriers that people with disabilities face around the world are not only inherent in the limitations resulting from the disability itself, but, more importantly, rest with societal technologies of exclusion. A multiple regression analysis was conducted to examine the statistical relationship between a country's level of development, level of democratization, and level of education of its population on the one hand, and expressed concern for people with disabilities on the other. The results reveal that greater worry for the well-being of people with disabilities is correlated with a high level of country development, a decreased value of political stability and absence of violence, a decreased level of government effectiveness, and a greater level of law enforcement. There is a direct correlation between concern for people with disabilities and people's awareness of disabilities. Surprisingly, the level of education has no impact on compassion toward people with disabilities. A comparison case is discussed for in-depth illustration. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Salmahaminati; Husnaqilati, Atina; Yahya, Amri
2017-01-01
Waste management is one form of community participation in maintaining good hygiene, locally and nationally. Trash is the remainder of everyday consumption; rather than simply being discarded, it can undergo waste processing, which is beneficial and improves hygiene. One approach is sorting plastics, which are then processed into goods appropriate to the type of waste. In this study, we identify the factors that affect residents' willingness to process waste. These factors include the identity and circumstances of each resident; once they are known, education about waste management can follow, and the results of this outreach can be compared using preliminary data collected before the intervention and final data collected afterwards. The analysis uses multiple logistic regression to identify the factors that influence residents' willingness to process waste, while the before-after comparison uses a t-test. The data are derived from a statistical instrument in the form of a questionnaire.
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the pharmaceutical formulation, and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Robustness of the sequential lineup advantage.
Gronlund, Scott D; Carlson, Curt A; Dailey, Sarah B; Goodsell, Charles A
2009-06-01
A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup advantages and 3 significant simultaneous advantages. Both sequential advantages occurred when the good photograph of the guilty suspect or either innocent suspect was in the fifth position in the sequential lineup; all 3 simultaneous advantages occurred when the poorer quality photograph of the guilty suspect or either innocent suspect was in the second position. Adjusting the statistical criterion to control for the multiple tests (.05/24) revealed no significant sequential advantages. Moreover, despite finding more conservative overall choosing for the sequential lineup, no support was found for the proposal that a sequential advantage was due to that conservative criterion shift. Unless lineups with particular characteristics predominate in the real world, there appears to be no strong preference for conducting lineups in either a sequential or a simultaneous manner. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
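The criterion adjustment described (.05/24) is a Bonferroni correction: each of the 24 comparisons is tested against the familywise alpha divided by the number of tests. A sketch with hypothetical per-comparison p-values, none of which survive the adjusted threshold, mirroring the study's finding:

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-comparison significance criterion under Bonferroni adjustment."""
    return alpha / n_tests

threshold = bonferroni_alpha(0.05, 24)   # = .05/24, about 0.00208

# Hypothetical p-values for the apparent 'advantages': each is significant
# at the liberal .05 criterion alone, but none survives the adjustment.
pvals = [0.004, 0.012, 0.030, 0.046]
surviving = [p for p in pvals if p < threshold]
```

The correction guarantees that the probability of any false positive across all 24 tests stays at or below .05, at the cost of power for each individual comparison.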
A comparison of two surveillance systems for deaths related to violent injury
Comstock, R; Mallonee, S; Jordan, F
2005-01-01
Objective: To compare violent injury death reporting by the statewide Medical Examiner and Vital Statistics Office surveillance systems in Oklahoma. Methods: Using a standard study definition for violent injury death, the sensitivity and predictive value positive (PVP) of the Medical Examiner and Vital Statistics violent injury death reporting systems in Oklahoma in 2001 were evaluated. Results: Altogether 776 violent injury deaths were identified (violent injury death rate: 22.4 per 100 000 population) including 519 (66.9%) suicides, 248 (32.0%) homicides, and nine (1.2%) unintentional firearm deaths. The Medical Examiner system over-reported homicides and the Vital Statistics system under-reported homicides and suicides and over-reported unintentional firearm injury deaths. When compared with the standard, the Medical Examiner and Vital Statistics systems had sensitivities of 99.2% and 90.7% (respectively) and PVPs of 95.0% and 99.1% for homicide, sensitivities of 99.2% and 93.1% and PVPs of 100% and 99.0% for suicide, and sensitivities of 100% and 100% and PVPs of 100% and 31.0% for unintentional firearm deaths. Conclusions: Both the Vital Statistics and Medical Examiner systems contain valuable data and when combined can work synergistically to provide violent injury death information while also serving as quality control checks for each other. Preventable errors within both systems can be reduced by increasing training, addressing sources of human error, and expanding computer quality assurance programming. A standardized nationwide Medical Examiners' coding system and a national violent death reporting system that merges multiple public health and criminal justice datasets would enhance violent injury surveillance and prevention efforts. PMID:15691992
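Sensitivity and predictive value positive follow directly from counts against the study-definition standard. The counts below are inferred for illustration only, chosen to be consistent with the nine unintentional firearm deaths in the standard and the reported 100% sensitivity and 31.0% PVP of the Vital Statistics system for that category.

```python
def sensitivity(tp, fn):
    """Fraction of true cases the surveillance system captured: TP/(TP+FN)."""
    return tp / (tp + fn)

def predictive_value_positive(tp, fp):
    """Fraction of the system's reported cases that were true: TP/(TP+FP)."""
    return tp / (tp + fp)

# A system reporting all 9 true unintentional firearm deaths plus 20
# misclassified deaths achieves 100% sensitivity but a PVP of 9/29 (31.0%).
sens = sensitivity(tp=9, fn=0)
pvp = predictive_value_positive(tp=9, fp=20)
```

High sensitivity with low PVP is exactly the over-reporting pattern the abstract describes for unintentional firearm deaths in the Vital Statistics system.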
Multibaseline gravitational wave radiometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit
2011-03-15
We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require the deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for an SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied on real data.
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889
Effects of a school-based sexuality education program on peer educators: the Teen PEP model.
Jennings, J M; Howard, S; Perotte, C L
2014-04-01
This study evaluated the impact of the Teen Prevention Education Program (Teen PEP), a peer-led sexuality education program designed to prevent unintended pregnancy and sexually transmitted infections (STIs) including HIV among high school students. The study design was a quasi-experimental, nonrandomized design conducted from May 2007 to May 2008. The sample consisted of 96 intervention (i.e. Teen PEP peer educators) and 61 comparison students from five high schools in New Jersey. Baseline and 12-month follow-up surveys were conducted. Summary statistics were generated and multiple regression analyses were conducted. In the primary intent-to-treat analyses, and secondary non-intent-to-treat analyses, Teen PEP peer educators (versus comparison students) reported significantly greater opportunities to practice sexual risk reduction skills and higher intentions to talk with friends, parents, and sex partners about sex and birth control, set boundaries with sex partners, and ask a partner to be tested for STIs including HIV. In addition, in the secondary analysis, Teen PEP peer educators (as compared with the comparison students) had significantly higher scores on knowledge of sexual health issues and ability to refuse risky sexual situations. School-based sexuality education programs offering comprehensive training to peer educators may improve sexual risk behavior knowledge, attitudes and behaviors among high school students. PMID:24488649
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889
Effects of a school-based sexuality education program on peer educators: the Teen PEP model
Jennings, J. M.; Howard, S.; Perotte, C. L.
2014-01-01
This study evaluated the impact of the Teen Prevention Education Program (Teen PEP), a peer-led sexuality education program designed to prevent unintended pregnancy and sexually transmitted infections (STIs) including HIV among high school students. The study design was a quasi-experimental, nonrandomized design conducted from May 2007 to May 2008. The sample consisted of 96 intervention (i.e. Teen PEP peer educators) and 61 comparison students from five high schools in New Jersey. Baseline and 12-month follow-up surveys were conducted. Summary statistics were generated and multiple regression analyses were conducted. In the primary intent-to-treat analyses, and secondary non-intent-to-treat analyses, Teen PEP peer educators (versus comparison students) reported significantly greater opportunities to practice sexual risk reduction skills and higher intentions to talk with friends, parents, and sex partners about sex and birth control, set boundaries with sex partners, and ask a partner to be tested for STIs including HIV. In addition, in the secondary analysis, Teen PEP peer educators (as compared with the comparison students) had significantly higher scores on knowledge of sexual health issues and ability to refuse risky sexual situations. School-based sexuality education programs offering comprehensive training to peer educators may improve sexual risk behavior knowledge, attitudes, and behaviors among high school students. PMID:24488649
Kanaya, Shoko; Kariya, Kenji; Fujisaki, Waka
2016-10-01
Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved. © The Author(s) 2016.
Cartier, Vanessa; Inan, Cigdem; Zingg, Walter; Delhumeau, Cecile; Walder, Bernard; Savoldelli, Georges L
2016-08-01
Multimodal educational interventions have been shown to improve short-term competency in, and knowledge of, central venous catheter (CVC) insertion. To evaluate the effectiveness of simulation-based medical education training in improving short- and long-term competency in, and knowledge of, CVC insertion. Before and after intervention study. University Geneva Hospital, Geneva, Switzerland, between May 2008 and January 2012. Residents in anaesthesiology aware of the Seldinger technique for vascular puncture. Participants attended a half-day course on CVC insertion. Learning objectives included work organization, aseptic technique and prevention of CVC complications. CVC insertion competency was tested pretraining, posttraining and then more than 2 years after training (sustainability phase). The primary study outcome was competency as measured by a global rating scale of technical skills, a hand hygiene compliance score and a checklist compliance score. Secondary outcome was knowledge as measured by a standardised pretraining and posttraining multiple-choice questionnaire. Statistical analyses were performed using paired Student's t test or Wilcoxon signed-rank test. Thirty-seven residents were included; 18 were tested in the sustainability phase (on average 34 months after training). The average global rating of skills was 23.4 points (±SD 4.08) before training, 32.2 (±4.51) after training (P < 0.001 for comparison with pretraining scores) and 26.5 (±5.34) in the sustainability phase (P = 0.040 for comparison with pretraining scores). The average hand hygiene compliance score was 2.8 (±1.0) points before training, 5.0 (±1.04) after training (P < 0.001 for comparison with pretraining scores) and 3.7 (±1.75) in the sustainability phase (P = 0.038 for comparison with pretraining scores).
The average checklist compliance score was 14.9 points (±2.3) before training, 19.9 (±1.06) after training (P < 0.001 for comparison with pretraining scores) and 17.4 (±1.41) in the sustainability phase (P = 0.002 for comparison with pretraining scores). The percentage of correct answers in the multiple-choice questionnaire increased from 76.0% (±7.9) before training to 87.7% (±4.4) after training (P < 0.001). Simulation-based medical education training was effective in improving short- and long-term competency in, and knowledge of, CVC insertion.
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness-of-fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets.
These include the Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, the Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, the Kruskal-Wallis test as the nonparametric equivalent of ANOVA, and Friedman's test as the counterpart of repeated measures ANOVA.
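The decision flow described in the module above can be sketched in a few lines. The SciPy calls and sample data below are illustrative assumptions (the module itself names no software; `scipy.stats.tukey_hsd` requires SciPy 1.8 or later):

```python
# Sketch of the group-comparison workflow described above, using SciPy
# (the library choice is illustrative; the module itself is tool-agnostic).
from scipy import stats

group_a = [4.1, 4.8, 5.2, 5.9, 6.3, 5.5]
group_b = [5.0, 5.7, 6.1, 6.8, 7.2, 6.4]
group_c = [8.9, 9.5, 10.1, 10.8, 11.2, 10.4]

# Two independent groups: unpaired (independent samples) t-test
t, p_pair = stats.ttest_ind(group_a, group_b)

# Three or more groups: one-way ANOVA instead of repeated t-tests,
# which would inflate the Type I error rate
f, p_anova = stats.f_oneway(group_a, group_b, group_c)

# ANOVA only says *some* means differ; a post hoc test such as
# Tukey's HSD identifies which pairs (requires SciPy >= 1.8)
tukey = stats.tukey_hsd(group_a, group_b, group_c)

# Nonparametric alternatives when normality is doubtful
u, p_mw = stats.mannwhitneyu(group_a, group_b)
h, p_kw = stats.kruskal(group_a, group_b, group_c)
```

For paired designs, `stats.ttest_rel` and `stats.wilcoxon` play the roles of the dependent samples t-test and the Wilcoxon signed-rank test.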
Pounds, Stan; Cao, Xueyuan; Cheng, Cheng; Yang, Jun; Campana, Dario; Evans, William E.; Pui, Ching-Hon; Relling, Mary V.
2010-01-01
Powerful methods for integrated analysis of multiple biological data sets are needed to maximize interpretation capacity and acquire meaningful knowledge. We recently developed Projection Onto the Most Interesting Statistical Evidence (PROMISE). PROMISE is a statistical procedure that incorporates prior knowledge about the biological relationships among endpoint variables into an integrated analysis of microarray gene expression data with multiple biological and clinical endpoints. Here, PROMISE is adapted to the integrated analysis of pharmacologic, clinical, and genome-wide genotype data, incorporating knowledge about the biological relationships among pharmacologic and clinical response data. An efficient permutation-testing algorithm is introduced so that statistical calculations are computationally feasible in this higher-dimension setting. The new method is applied to a pediatric leukemia data set. The results clearly indicate that PROMISE is a powerful statistical tool for identifying genomic features that exhibit a biologically meaningful pattern of association with multiple endpoint variables. PMID:21516175
Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; de Brevern, Alexandre G
2012-09-01
Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used Structural Alphabet named Protein Blocks (PBs) was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs underline that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD, in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Vazquez, Bruna Perez; Vazquez, Thaís Perez; Miguel, Camila Botelho; Rodrigues, Wellington Francisco; Mendes, Maria Tays; de Oliveira, Carlo José Freire; Chica, Javier Emílio Lazo
2015-04-03
Chagas disease is caused by the protozoan Trypanosoma cruzi and is characterized by cardiac, gastrointestinal, and nervous system disorders. Although much about the pathophysiological process of Chagas disease is already known, the influence of the parasite burden on the inflammatory process and disease progression remains uncertain. We used an acute experimental disease model to evaluate the effect of T. cruzi on intestinal lesions and assessed correlations between parasite load and inflammation and intestinal injury at 7 and 14 days post-infection. Low (3 × 10(2)), medium (3 × 10(3)), and high (3 × 10(4)) parasite loads were generated by infecting C57BL/6 mice with "Y"-strain trypomastigotes. Statistical analysis was performed using analysis of variance with Tukey's multiple comparison post-test, Kruskal-Wallis test with Dunn's multiple comparison, χ2 test and Spearman correlation. High parasite load-bearing mice more rapidly and strongly developed parasitemia. Increased colon width, inflammatory infiltration, myositis, periganglionitis, ganglionitis, pro-inflammatory cytokines (e.g., TNF-α, INF-γ, IL-2, IL-17, IL-6), and intestinal amastigote nests were more pronounced in high parasite load-bearing animals. These results were remarkable because a positive correlation was observed between parasite load, inflammatory infiltrate, amastigote nests, and investigated cytokines. These experimental data support the idea that the parasite load considerably influences the T. cruzi-induced intestinal inflammatory response and contributes to the development of the digestive form of the disease.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R
R code that performs the analysis of a data set presented in the paper 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' by Lewis, J., Zhang, A., and Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is publicly available and comes from a historical plutonium production experiment.
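The record's actual R code is not reproduced here, but the core idea of inverse prediction can be sketched under simple assumptions: fit a calibration line on training data, then invert it to estimate the predictor behind a new observation. All data below are made up; the paper compares several richer statistical methods:

```python
# Minimal sketch of classical inverse prediction: fit y = a + b*x on
# calibration data, then invert the line to predict x for an observed y.
# The numbers are illustrative, not from the historical experiment.
import numpy as np

x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # known predictor values
y_train = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # measured responses

b, a = np.polyfit(x_train, y_train, 1)          # slope, intercept

def inverse_predict(y_obs: float) -> float:
    """Invert the calibration line to recover the x behind y_obs."""
    return (y_obs - a) / b

x_hat = inverse_predict(6.0)                    # a response of 6.0 -> x near 3
```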
NASA Astrophysics Data System (ADS)
Grossman, S.
2015-05-01
Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene manually intensive target detection searches. It uses calibrated WorldView-2 multispectral images with NEF generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
Morgan-Followell, Bethanie; Aylward, Shawn C
2017-03-01
The authors aimed to compare the opening pressures of children with demyelinating disease to children with primary intracranial hypertension. Medical records were reviewed for a primary diagnosis of demyelinating disease or primary intracranial hypertension. Diagnosis of demyelinating disease was made according to either the 2007 or 2012 International Pediatric Multiple Sclerosis Study Group criteria. Primary intracranial hypertension diagnosis was confirmed by presence of elevated opening pressure, normal cerebrospinal fluid composition and neuroimaging. The authors compared 14 children with demyelinating disease to children with primary intracranial hypertension in 1:1 and 1:2 fashions. There was a statistically significant higher BMI in the primary intracranial hypertension group compared to the demyelinating group (P = .0203). The mean cerebrospinal fluid white blood cell count was higher in the demyelinating disease group compared to primary intracranial hypertension (P = .0002). Among both comparisons, the cerebrospinal fluid opening pressure, glucose, protein and red blood cell counts in children with demyelinating disease were comparable to age- and sex-matched controls with primary intracranial hypertension.
Continuously updated network meta-analysis and statistical monitoring for timely decision-making
Nikolakopoulou, Adriani; Mavridis, Dimitris; Egger, Matthias; Salanti, Georgia
2016-01-01
Pairwise and network meta-analysis (NMA) are traditionally used retrospectively to assess existing evidence. However, the current evidence often undergoes several updates as new studies become available. In each update, recommendations about the conclusiveness of the evidence and the need for future studies need to be made. In the context of prospective meta-analysis, future studies are planned as part of the accumulation of the evidence. In this setting, multiple testing issues need to be taken into account when the meta-analysis results are interpreted. We extend ideas of sequential monitoring of meta-analysis to provide a methodological framework for updating NMAs. Based on the z-score for each network estimate (the ratio of effect size to its standard error) and the respective information gained after each study enters the NMA, we construct efficacy and futility stopping boundaries. A NMA treatment effect is considered conclusive when it crosses a stopping boundary. The methods are illustrated using a recently published NMA where we show that evidence about a particular comparison can become conclusive via indirect evidence even if no further trials address this comparison. PMID:27587588
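A minimal sketch of the kind of sequential monitoring described: the accumulating z-score for an estimate is compared against an efficacy boundary that shrinks as information accrues. The O'Brien-Fleming-type boundary below is a generic group-sequential assumption, not necessarily the paper's exact construction:

```python
# Generic group-sequential monitoring sketch: a (network) meta-analysis
# estimate is "conclusive" once its z-score crosses an O'Brien-Fleming-
# type efficacy boundary at the current information fraction.
import math

Z_ALPHA = 1.96          # two-sided 5% fixed-sample critical value

def obf_boundary(info_fraction: float) -> float:
    """O'Brien-Fleming-type boundary at information fraction t (0 < t <= 1)."""
    return Z_ALPHA / math.sqrt(info_fraction)

def is_conclusive(effect: float, se: float, info_fraction: float) -> bool:
    z = effect / se
    return abs(z) >= obf_boundary(info_fraction)

# Early look (25% of planned information): boundary is strict (3.92),
# so z = 2.5 is not yet conclusive; at full information it is.
early = is_conclusive(effect=0.5, se=0.2, info_fraction=0.25)
late = is_conclusive(effect=0.5, se=0.2, info_fraction=1.0)
```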
Cross-comparison of the IRS-P6 AWiFS sensor with the L5 TM, L7 ETM+, & Terra MODIS sensors
Chander, G.; Xiong, X.; Angal, A.; Choi, T.; Malla, R.
2009-01-01
As scientists and decision makers increasingly rely on multiple Earth-observing satellites to address urgent global issues, it is imperative that they can rely on the accuracy of Earth-observing data products. This paper focuses on the cross-comparison of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) with the Landsat 5 (L5) Thematic Mapper (TM), Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+), and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The cross-comparison was performed using image statistics based on large common areas observed by the sensors within 30 minutes. Because of the limited availability of simultaneous observations between the AWiFS and the Landsat and MODIS sensors, only a few images were analyzed. These initial results are presented. Regression curves and coefficients of determination for the top-of-atmosphere (TOA) trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between these sensors. © 2009 SPIE.
Killgrove, Kristina; Montgomery, Janet
2016-01-01
Migration within the Roman Empire occurred at multiple scales and was engaged in both voluntarily and involuntarily. Because of the lengthy tradition of classical studies, bioarchaeological analyses must be fully contextualized within the bounds of history, material culture, and epigraphy. In order to assess migration to Rome within an updated contextual framework, strontium isotope analysis was performed on 105 individuals from two cemeteries associated with Imperial Rome—Casal Bertone and Castellaccio Europarco—and oxygen and carbon isotope analyses were performed on a subset of 55 individuals. Statistical analysis and comparisons with expected local ranges found several outliers who likely immigrated to Rome from elsewhere. Demographics of the immigrants show men and children migrated, and a comparison of carbon isotopes from teeth and bone samples suggests the immigrants may have significantly changed their diet. These data represent the first physical evidence of individual migrants to Imperial Rome. This case study demonstrates the importance of employing bioarchaeology to generate a deeper understanding of a complex ancient urban center. PMID:26863610
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Multiple statistical tests: Lessons from a d20.
Madan, Christopher R
2016-01-01
Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type I error) at least once across repeated, independent statistical tests.
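The arithmetic underlying the d20 analogy can be reproduced directly. The Šidák-style adjustment at the end is a standard correction shown for context, not a result stated in this abstract:

```python
# Probability of seeing a specific d20 outcome (p = 1/20 per roll) at
# least once across n independent rolls -- the same arithmetic that
# governs familywise Type I error across n independent tests.

def p_at_least_once(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# Two rolls of a d20, taking the greater: chance of a natural 20
p_two_rolls = p_at_least_once(1 / 20, 2)        # 0.0975

# The same logic with alpha = .05 over 10 independent tests
inflated_alpha = p_at_least_once(0.05, 10)      # ~0.40, not .05

# Sidak-adjusted per-test alpha that restores a familywise .05
sidak = 1.0 - (1.0 - 0.05) ** (1 / 10)
```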
Gastroduodenitis associated with ulcerative colitis.
Hori, Kazutoshi; Ikeuchi, Hiroki; Nakano, Hiroki; Uchino, Motoi; Tomita, Toshihiko; Ohda, Yoshio; Hida, Nobuyuki; Matsumoto, Takayuki; Fukuda, Yoshihiro; Miwa, Hiroto
2008-01-01
Ulcerative colitis (UC) is regarded as confined to the colorectum; however, there are several case reports showing upper gastrointestinal involvement. The aim of this study was to examine the prevalence and characteristics of gastroduodenitis associated with UC (GDUC). Esophagogastroduodenoscopy with biopsies was prospectively performed on 250 UC patients (134 men, 116 women; mean age, 42 years; 162 with colectomy, 163 with pancolitis). Criteria for GDUC were created on the basis of endoscopic and histological comparisons with non-UC controls, and the prevalence and characteristics were statistically analyzed. GDUC was defined endoscopically as friable mucosa (erosive or ulcerative mucosa with contact or spontaneous bleeding), granular mucosa (multiple white spots almost without a red halo), or, conditionally, multiple aphthae (multiple white spots surrounded by a red halo, clinically excluding other disorders such as Crohn's disease). The prevalence of GDUC was 19/250 (7.6%). The clinical characteristics included more extensive colitis, lower dose of prednisolone, higher prevalence of pouchitis, and longer postoperative period. In our population, the presence of pancolitis and a lower dose of prednisolone were significant risk factors for developing GDUC in multivariate analysis. The high prevalence of GDUC suggests that the gut inflammatory reaction in UC may not be restricted to the large intestine. Administered steroids might conceal GDUC, and more aggressive UC such as active pancolitis may be related to the development of GDUC.
A comparison of single-cycle versus multiple-cycle proof testing strategies
NASA Technical Reports Server (NTRS)
Hudak, S. J., Jr.; Mcclung, R. C.; Bartlett, M. L.; Fitzgerald, J. H.; Russell, D. A.
1990-01-01
An evaluation of single-cycle and multiple-cycle proof testing (MCPT) strategies for SSME components is described. Data for initial sizes and shapes of actual SSME hardware defects are analyzed statistically. Closed-form estimates of the J-integral for surface flaws are derived with a modified reference stress method. The results of load- and displacement-controlled stable crack growth tests on thin IN-718 plates with deep surface flaws are summarized. A J-resistance curve for the surface-cracked configuration is developed and compared with data from thick compact tension specimens. The potential for further crack growth during large unload/reload cycles is discussed, highlighting conflicting data in the literature. A simple model for ductile crack growth during MCPT based on the J-resistance curve is used to study the potential effects of key variables. The projected changes in the crack size distribution during MCPT depend on the interactions between several key parameters, including the number of proof cycles, the nature of the resistance curve, the initial crack size distribution, the component boundary conditions (load vs. displacement control), and the magnitude of the applied load or displacement. The relative advantages of single-cycle and multiple-cycle proof testing appear to be specific, therefore, to individual component geometry, material, and loading.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
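A deliberately simplified sketch of the learn-then-detect idea, using a plain empirical quantile threshold rather than the Bayesian, multi-channel machinery the abstract describes; all counts below are simulated:

```python
# Simplified sketch of "learning mode" / "detection mode": accumulate an
# empirical (nonparametric) distribution of background counts, then flag
# new measurements exceeding a high quantile of that distribution.
# The proposed system is Bayesian and far richer than this.
import random

random.seed(7)

# Learning mode: background counts per fixed counting interval
background = [random.randint(40, 60) for _ in range(10_000)]

def detection_threshold(samples, false_alarm_rate=0.001):
    """Count above which only ~false_alarm_rate of background falls."""
    ordered = sorted(samples)
    idx = int((1.0 - false_alarm_rate) * (len(ordered) - 1))
    return ordered[idx]

threshold = detection_threshold(background)

# Detection mode: compare a new measurement against learned background
def is_source_present(count: int) -> bool:
    return count > threshold
```

As the system keeps measuring background, `background` grows and the threshold adapts to local conditions, which is the advantage the abstract emphasizes.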
Study of the Photon Strength Functions for Gadolinium Isotopes with the DANCE Array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dashdorj, D.; Mitchell, G. E.; Baramsai, B.
2009-03-10
The gadolinium isotopes are interesting for reactor applications as well as for medicine and astrophysics. The gadolinium isotopes have some of the largest neutron capture cross sections. As a consequence they are used in control rods in reactor fuel assemblies. From the basic science point of view, there are seven stable isotopes of gadolinium with varying degrees of deformation. Therefore they provide a good testing ground for the study of deformation-dependent structure such as the scissors mode. Decay gamma rays following neutron capture on Gd isotopes are detected by the DANCE array, which is located at flight path 14 at the Lujan Neutron Scattering Center at Los Alamos National Laboratory. The high segmentation and close packing of the detector array enable gamma-ray multiplicity measurements. The calorimetric properties of the DANCE array coupled with the neutron time-of-flight technique enable one to gate on a specific resonance of a specific isotope in the time-of-flight spectrum and obtain the summed energy spectrum for that isotope. The singles gamma-ray spectrum for each multiplicity can be separated by their DANCE cluster multiplicity. Various photon strength function models are used for comparison with experimentally measured DANCE data and provide insight for understanding the statistical decay properties of deformed nuclei.
French, David; Nadji, Nabil; Liu, Shawn X; Larjava, Hannu
2015-06-01
A novel osteotome trifactorial classification system is proposed for transcrestal osteotome-mediated sinus floor elevation (OSFE) sites that includes residual bone height (RBH), sinus floor anatomy (contour), and multiple versus single OSFE sites (tenting). An analysis of RBH, contour, and tenting was retrospectively applied to a cohort of 926 implants placed using OSFE without added bone graft and followed up to 10 years. RBH was divided into three groups: high (RBH > 6 mm), mid (RBH = 4.1 to 6 mm), and low (RBH = 2 to 4 mm). The sinus "contour" was divided into four groups: flat, concave, angle, and septa. For "tenting", single versus multiple adjacent OSFE sites were compared. The prevalence of flat sinus floors increased as RBH decreased. RBH was a significant predictor of failure with rates as follows: low-RBH = 5.1%, mid-RBH = 1.5%, and high-RBH = 0.4%. Flat sinus floors and single sites as compared to multiple sites had higher observed failure rates, but neither achieved statistical significance; however, the power of the study was limited by the low number of failures. The osteotome trifactorial classification system as proposed can assist in planning OSFE cases and may allow better comparison of future OSFE studies.
Statistical comparisons of AGDISP prediction with Mission III data
Baozhong Duan; Karl Mierzejewski; William G. Yendol
1991-01-01
Statistical comparisons of AGDISP predictions were made against data obtained during aerial spray field trials ("Mission III") conducted in March 1987 at the APHIS Facility, Moore Air Base, Edinburg, Texas, by the NEFAAT group (Northeast Forest Aerial Application Technology). For seven out of twenty-one runs, observed and predicted means (O and P), mean bias...
Rodrigues, Paulo Rogério Melo; de Souza, Rita Adriana Gomes; De Cnop, Mara Lima; Monteiro, Luana Silva; Coura, Camila Pinheiro; Brito, Alessandra Page; Pereira, Rosangela Alves
2016-02-01
The objective of this study was to assess the agreement between the Brazilian Healthy Eating Index - Revised (BHEI-R) as estimated by a food frequency questionnaire (FFQ) and by multiple 24-hour recalls (24h-R). The Wilcoxon paired test, partial correlations (PC), the intraclass correlation coefficient (ICC), and the Bland-Altman method were used. The total BHEI-R scores and its components ("total fruits", "whole fruits", "total vegetables", "integral cereals", "saturated fat", "sodium", and "energy intake derived from solid fat, added sugar, and alcoholic beverages") were statistically different, with the ICC and PC indicating poor concordance and correlation. The mean concordance estimated for the total BHEI-R and its components varied from 68% for "integral cereals" to 147% for "whole fruits". The acceptable concordance limits were violated for most of the components of the BHEI-R. Poor concordance was observed between the BHEI-R estimated by the FFQ and by multiple 24h-R, which indicates a strong dependence of the BHEI-R on the instrument used to collect information on food consumption.
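As an illustration of the Bland-Altman method used in this comparison, here is a minimal numpy sketch. The paired scores below are hypothetical stand-ins for an index estimated by two instruments, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurements.

    Returns the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired index scores from two dietary instruments
ffq   = np.array([62.0, 70.5, 55.3, 80.1, 66.7, 73.2])
rec24 = np.array([58.4, 75.0, 50.1, 77.9, 70.2, 69.8])
bias, (lo, hi) = bland_altman(ffq, rec24)
```

Agreement is judged by whether the differences stay within the limits, not by correlation alone, which is why the abstract reports both.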
A comparison of vowel normalization procedures for language variation research
NASA Astrophysics Data System (ADS)
Adank, Patti; Smits, Roel; van Hout, Roeland
2004-11-01
An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
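Lobanov's z-score procedure is a classic example of the vowel-extrinsic, formant-intrinsic family favored by these results: each formant is standardized across all of one talker's vowel tokens. This is only an illustration of that family (the study evaluates several procedures), with hypothetical formant values.

```python
import numpy as np

def lobanov(formants):
    """Vowel-extrinsic normalization: z-score each formant (column)
    across all of one talker's vowel tokens (rows)."""
    f = np.asarray(formants, float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Hypothetical F1/F2 values (Hz) for five vowel tokens of one talker
tokens = np.array([[300, 2300], [650, 1100], [500, 1500],
                   [400, 2000], [700, 1200]])
z = lobanov(tokens)
```

After normalization, each talker's formant distribution has zero mean and unit variance, which removes anatomical scale differences while preserving the relative positions of the vowels.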
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
Nuchal translucency in dichorionic twins conceived after assisted reproduction.
Hui, P W; Tang, M H Y; Ng, E H Y; Yeung, W S B; Ho, P C
2006-06-01
As opposed to biochemical markers of Down syndrome, nuchal translucency (NT) was once thought to be a more reliable screening marker for higher-order multiple pregnancies and pregnancies conceived after assisted conception. Recent data suggested that NT in singleton fetuses from assisted reproduction technology (ART) was thicker than in singletons from spontaneous pregnancies. The present study compared the thickness of NT in dichorionic twins from natural conception and assisted reproduction. A retrospective analysis comparing NT thickness in 3319 spontaneous singletons, 19 pairs of spontaneous twins, and 27 pairs of assisted reproduction twins was performed. The median NT multiple of median (MoM) of spontaneous singletons was 1.00. For twins, the median NT MoM for pregnancies after assisted reproduction and natural conception were 1.02 and 1.07, respectively. There was no statistical difference in NT thickness among the three pregnancy groups. Contrary to the observed increase in NT in singleton pregnancies from assisted reproduction, the NT in dichorionic twins was comparable to that of the spontaneous ones. The mode of conception appears to impose differential influence on singletons and twins. Copyright (c) 2006 John Wiley & Sons, Ltd.
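A multiple of the median (MoM), as used above, is simply the measured NT divided by the median NT of unaffected pregnancies at the same gestational stage. A minimal sketch, with an entirely hypothetical crown-rump-length-indexed median table:

```python
# Hypothetical medians of NT (mm) in unaffected pregnancies,
# indexed by crown-rump length (mm); real tables are far denser.
MEDIAN_NT = {45: 1.2, 55: 1.4, 65: 1.6, 75: 1.8}

def nt_mom(nt_mm, crl_mm):
    """Express an NT measurement as a multiple of the median (MoM)
    for the nearest tabulated crown-rump length."""
    crl = min(MEDIAN_NT, key=lambda c: abs(c - crl_mm))
    return nt_mm / MEDIAN_NT[crl]
```

Expressing NT as a MoM lets measurements taken at different gestational ages (and in different groups, such as twins versus singletons) be compared on a common scale.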
NASA Astrophysics Data System (ADS)
Bonelli, Maria Grazia; Ferrini, Mauro; Manni, Andrea
2016-12-01
The assessment of metals and organic micropollutant contamination in agricultural soils is a difficult challenge due to the extensive areas involved and the very large number of samples to be collected and analyzed. For the measurement of dioxins and dioxin-like PCBs and the subsequent treatment of the data, the European Community advises developing low-cost and fast methods that allow routine analysis of a great number of samples, providing rapid measurement of these compounds in the environment, feeds, and food. The aim of the present work has been to find a method suitable to describe the relations occurring between organic and inorganic contaminants and to use the values of the latter in order to forecast the former. In practice, the use of a portable soil metal analyzer coupled with an efficient statistical procedure enables the required objective to be achieved. Compared to Multiple Linear Regression, the Artificial Neural Networks technique has shown itself to be an excellent forecasting method, even though there is no linear correlation between the variables analyzed.
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
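The first of the two approximation models, a quadratic polynomial fitted by least squares, is easy to sketch in one variable with numpy (the study uses up to ten variables, and kriging for the second model; both the design points and the response below are hypothetical).

```python
import numpy as np

# Sample a hypothetical response at a few design points
x = np.linspace(-2.0, 2.0, 9)
y = 3.0 - 1.5 * x + 0.75 * x**2          # a truly quadratic response

# Quadratic polynomial surrogate fitted by least squares
coeffs = np.polyfit(x, y, deg=2)          # [a2, a1, a0]
surrogate = np.poly1d(coeffs)
```

Because the surrogate is a single global quadratic, it can have at most one interior extremum, which is exactly the limitation the abstract notes for responses with multiple local extrema; an interpolating model such as kriging does not share that restriction.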
NASA Astrophysics Data System (ADS)
Zelený, J.; Pérez-Fontán, F.; Pechac, P.; Mariño-Espiñeira, P.
2017-05-01
In civil surveillance applications, unmanned aerial vehicles (UAV) are being increasingly used in floods, fires, and law enforcement scenarios. In order to transfer large amounts of information from UAV-mounted cameras, relays, or sensors, large bandwidths are needed in comparison to those required for remotely commanding the UAV. This demands the use of higher-frequency bands, in all probability in the vicinity of 2 or 5 GHz. Novel hardware developments need propagation channel models for the ample range of operational scenarios envisaged, including multiple-input, multiple-output (MIMO) deployments. These configurations may enable a more robust transmission by increasing either the carrier-to-noise ratio statistics or the achievable capacity. In this paper, a 2 × 2 MIMO propagation channel model for an open-field environment capable of synthesizing a narrowband time series at 2 GHz is described. Maximal ratio combining diversity and capacity improvements are also evaluated through synthetic series and compared with measurement results. A simple flat, open scenario was evaluated based on which other, more complex environments can be modeled.
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center, over 10 years. The main outcomes of our comparison are the cervical cancer risk assessments produced by these approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
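Of the classical tools named above, the Kaplan-Meier estimator is the simplest to sketch: survival is multiplied down by (1 - deaths/at-risk) at each distinct event time, with censored subjects leaving the risk set without triggering a step. The follow-up data below are hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : observed follow-up times
    events : 1 if the event occurred at that time, 0 if censored
    Returns (distinct event times, S(t) just after each of them)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    t_event = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_event:
        n_at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return t_event, np.array(surv)

# Hypothetical follow-up data: times in months, 0 = censored
t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 0, 0])
```

A dynamic Bayesian network, by contrast, models the joint evolution of several variables over time, which is what enables the individualized risk assessments discussed in the abstract.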
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
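The multinomial-weighting idea translates directly to numpy (the paper's implementation is in R; this is a compact sketch of the same formulation for Pearson's r, with simulated data): draw a B x n matrix of multinomial counts, then obtain every bootstrap replication at once from weighted sample moments.

```python
import numpy as np

def bootstrap_pearson(x, y, B=2000, seed=0):
    """Vectorized non-parametric bootstrap of Pearson's r.

    Instead of resampling the data B times, draw B multinomial
    weight vectors and compute weighted moments in bulk."""
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weights
    mx, my = W @ x, W @ y                                     # weighted means
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)     # raw moments
    cov = mxy - mx * my
    sd = np.sqrt((mxx - mx**2) * (myy - my**2))
    return cov / sd                                           # B replications

x = np.arange(20, dtype=float)
y = x + np.random.default_rng(1).normal(0.0, 2.0, 20)
r_boot = bootstrap_pearson(x, y)
```

Every replication is produced by a handful of matrix products, which is exactly why the approach pays off in matrix-oriented languages.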
NASA Astrophysics Data System (ADS)
Scudder, Rachel P.; Murray, Richard W.; Schindlbeck, Julie C.; Kutterolf, Steffen; Hauff, Folkmar; McKinley, Claire C.
2014-11-01
We have geochemically and statistically characterized bulk marine sediment and ash layers at Ocean Drilling Program Site 1149 (Izu-Bonin Arc) and Deep Sea Drilling Project Site 52 (Mariana Arc), and have quantified that multiple dispersed ash sources collectively comprise ~30-35% of the hemipelagic sediment mass entering the Izu-Bonin-Mariana subduction system. Multivariate statistical analyses indicate that the bulk sediment at Site 1149 is a mixture of Chinese Loess, a second compositionally distinct eolian source, a dispersed mafic ash, and a dispersed felsic ash. We interpret the sources of these ashes as, respectively, basalt from the Izu-Bonin Front Arc (IBFA) and rhyolite from the Honshu Arc. Sr, Nd, and Pb isotopic analyses of the bulk sediment are consistent with the chemical/statistical interpretations. Comparison of the mass accumulation rate of the dispersed ash component to discrete ash layer parameters (thickness, sedimentation rate, and number of layers) suggests that eruption frequency, rather than eruption size, drives the dispersed ash record. At Site 52, the geochemistry and statistical modeling indicate that Chinese Loess, IBFA, dispersed BNN (boninite from Izu-Bonin), and a dispersed felsic ash of unknown origin are the sources. At Site 1149, the ash layers and the dispersed ash are compositionally coupled, whereas at Site 52 they are decoupled in that there are no boninite layers, yet boninite is dispersed within the sediment. Changes in the volcanic and eolian inputs through time indicate strong arc-related and climate-related controls.
NASA Astrophysics Data System (ADS)
Byrd, Gene G.; Byrd, Dana
2017-06-01
This paper on improving Ay101 courses has two main purposes: to present (1) some very effective single changes and (2) a method for improving teaching by making one change at a time and evaluating it statistically against a control-group class. We show how a simple statistical comparison can be done even with Excel in Windows; other, more sophisticated and powerful methods can of course be used if available. One of several examples to be discussed on our poster is our modification of an online introductory astronomy lab course, evaluated by the multiple-choice final exam. We composed questions related to the learning objectives of the course modules (LOQs). Students could “talk to themselves” by discursively answering these for extra credit prior to the final. Results were compared to an otherwise identical previous unmodified class. Modified classes showed statistically significantly better final exam average scores (78% vs. 66%). This modification helped those students who most need help: students in the lower third of the class preferentially answered the LOQs, improving their scores and the class average on the exam. These results also show the effectiveness of relevant extra credit work. Other examples will be discussed as specific cases of evaluating improvement by making one change and then testing it against a control. Essentially, this is an evolutionary approach in which single favorable “mutations” are retained and the unfavorable ones removed. The temptation to make more than one change each time must be resisted!
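The class-versus-control comparison described here can be done with a two-sample t-test; Welch's variant (Excel's T.TEST with type=3) does not assume equal variances. A minimal scipy sketch with hypothetical exam scores, not the poster's data:

```python
import numpy as np
from scipy import stats

# Hypothetical final-exam scores: modified class vs. unmodified control
modified = np.array([76, 78, 80, 74, 82, 77, 79, 75, 81, 78], float)
control  = np.array([64, 66, 68, 62, 70, 65, 67, 63, 69, 66], float)

# Welch's two-sample t-test (unequal variances, two-tailed)
t_stat, p_value = stats.ttest_ind(modified, control, equal_var=False)
```

A small p-value indicates the difference in class means is unlikely under chance alone, which is the evidence needed to keep a favorable "mutation".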
A global approach to estimate irrigated areas - a comparison between different data and statistics
NASA Astrophysics Data System (ADS)
Meier, Jonas; Zabel, Florian; Mauser, Wolfram
2018-02-01
Agriculture is the largest global consumer of water. Irrigated areas constitute 40 % of the total area used for agricultural production (FAO, 2014a). Information on their spatial distribution is highly relevant for regional water management and food security. Spatial information on irrigation is highly important for policy and decision makers, who are facing the transition towards more efficient sustainable agriculture. However, the mapping of irrigated areas still represents a challenge for land use classifications, and existing global data sets differ strongly in their results. The following study tests an existing irrigation map based on statistics and extends the irrigated area using ancillary data. The approach processes and analyzes multi-temporal normalized difference vegetation index (NDVI) SPOT-VGT data and agricultural suitability data (both at a spatial resolution of 30 arcsec) incrementally in a multiple decision tree. It covers the period from 1999 to 2012. The results globally show an 18 % larger irrigated area than existing approaches based on statistical data. The largest differences compared to the official national statistics are found in Asia, particularly in China and India. The additional areas are mainly identified within already known irrigated regions where irrigation is denser than previously estimated. The validation with global and regional products shows the large divergence of existing data sets with respect to the size and distribution of irrigated areas, caused by spatial resolution, the considered time period, and the input data and assumptions made.
Wilderness adventure therapy effects on the mental health of youth participants.
Bowen, Daniel J; Neill, James T; Crisp, Simon J R
2016-10-01
Adventure therapy offers a prevention, early intervention, and treatment modality for people with behavioural, psychological, and psychosocial issues. It can appeal to youth-at-risk who are often less responsive to traditional psychotherapeutic interventions. This study evaluated Wilderness Adventure Therapy (WAT) outcomes based on participants' pre-program, post-program, and follow-up responses to self-report questionnaires. The sample consisted of 36 adolescent out-patients with mixed mental health issues who completed a 10-week, manualised WAT intervention. The overall short-term standardised mean effect size was small, positive, and statistically significant (0.26), with moderate, statistically significant improvements in psychological resilience and social self-esteem. Total short-term effects were within age-based adventure therapy meta-analytic benchmark 90% confidence intervals, except for the change in suicidality, which was lower than the comparable benchmark. The short-term changes were retained at the three-month follow-up, except for family functioning (significant reduction) and suicidality (significant improvement). For participants in clinical ranges pre-program, there was a large, statistically significant reduction in depressive symptomology, and large to very large, statistically significant improvements in behavioural and emotional functioning. These changes were retained at the three-month follow-up. These findings indicate that WAT is as effective as traditional psychotherapy techniques for clinically symptomatic people. Future research utilising a comparison or wait-list control group, multiple sources of data, and a larger sample could help to qualify and extend these findings. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas
2015-12-01
The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected over up to 2364 days (mean: 793 days). A simple linear regression and a random coefficient model were selected for the CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the renal function decline rate with age, because its estimate depends strongly on the statistical analysis used. Based on our analyses, a population longitudinal analysis (e.g., a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
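The CS/LT divergence arises because a cross-sectional slope mixes within-subject decline with between-cohort differences. A minimal simulation sketch (synthetic data, with per-subject OLS slopes standing in for the study's random coefficient model): the cross-sectional fit recovers the cohort gradient, the longitudinal average recovers the true within-subject decline.

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope = -1.0                  # within-subject decline (units/year)
subjects = []
for _ in range(200):
    baseline_age = rng.uniform(30, 80)
    # Cohort effect: older cohorts start lower (gradient -0.5/year of age)
    intercept = 130 - 0.5 * baseline_age + rng.normal(0, 5)
    ages = baseline_age + np.array([0.0, 1.0, 2.0, 3.0])
    crcl = intercept + true_slope * (ages - baseline_age) + rng.normal(0, 1, 4)
    subjects.append((ages, crcl))

# Cross-sectional: one observation per subject, slope of CrCl on age
age1 = np.array([a[0] for a, _ in subjects])
crcl1 = np.array([c[0] for _, c in subjects])
cs_slope = np.polyfit(age1, crcl1, 1)[0]

# Longitudinal: average of within-subject slopes
lt_slope = np.mean([np.polyfit(a, c, 1)[0] for a, c in subjects])
```

Here the two estimates differ by construction; which one is steeper in real data depends on the sign and size of the cohort effect.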
Microarray Data Analysis Using Multiple Statistical Models
Wenjun Bao1, Judith E. Schmid1, Amber K. Goetz1, Ming Ouyang2, William J. Welsh2,Andrew I. Brooks3,4, ChiYi Chu3,Mitsunori Ogihara3,4, Yinhe Cheng5, David J. Dix1. 1National Health and Environmental Effects Researc...
An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.
Kim, Junghi; Bai, Yun; Pan, Wei
2015-12-01
We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics, without access to individual-level genotype and phenotype data. Because, for most published GWASs, obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, and because multiple correlated traits are often collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods. © 2015 WILEY PERIODICALS, INC.
Estimating Statistical Power When Making Adjustments for Multiple Tests
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
Sarmast, Nima D; Angelov, Nikola; Ghinea, Razvan; Powers, John M; Paravina, Rade D
The CIELab and CIEDE2000 coverage errors (ΔE*COV and ΔE'COV, respectively) of basic shades of different gingival shade guides and gingiva-colored restorative dental materials (n = 5) were calculated against a previously compiled database on healthy human gingiva. Data were analyzed using analysis of variance with the Tukey-Kramer multiple-comparison test (P < .05). A 50:50% acceptability threshold of 4.6 for ΔE* and 4.1 for ΔE' was used to interpret the results. ΔE*COV/ΔE'COV ranged from 4.4/3.5 to 8.6/6.9. The majority of gingival shade guides and gingiva-colored restorative materials exhibited statistically significant coverage errors above the 50:50% acceptability threshold, along with uneven shade distribution.
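A coverage error of this kind is the average, over a database of target colors, of the color difference to the closest shade tab. The sketch below uses the simple CIE76 ΔE*ab (Euclidean distance in CIELab); CIEDE2000, also used in the study, is a more elaborate formula. All CIELab coordinates here are hypothetical.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELab space."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

def coverage_error(targets, shade_tabs):
    """Mean, over target colors, of the distance to the closest tab."""
    t = np.asarray(targets, float)[:, None, :]
    s = np.asarray(shade_tabs, float)[None, :, :]
    return delta_e_ab(t, s).min(axis=1).mean()

# Hypothetical CIELab coordinates: gingiva samples and two shade tabs
gingiva = [[45, 30, 15], [50, 28, 14], [42, 35, 18]]
tabs = [[44, 31, 15], [52, 26, 13]]
err = coverage_error(gingiva, tabs)
```

Comparing `err` to an acceptability threshold (4.6 for ΔE*, per the abstract) is what determines whether a guide covers the natural gingival color range well enough.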
Joshi, Nabin R; Ly, Emma; Viswanathan, Suresh
2017-08-01
To assess the effect of age on, and the test-retest reliability of, the intensity response function of the full-field photopic negative response (PhNR) in normal healthy human subjects. Full-field electroretinograms (ERGs) were recorded from one eye of 45 subjects, and 39 of these subjects were tested on two separate days with a Diagnosys Espion System (Lowell, MA, USA). The visual stimuli consisted of brief (<5 ms) red flashes ranging from 0.00625 to 6.4 phot cd.s/m², delivered on a constant 7 cd/m² blue background. PhNR amplitudes were measured at the trough from baseline (BT) and from the preceding b-wave peak (PT), and b-wave amplitude was measured at its peak from the preceding a-wave trough, or from baseline if the a-wave was not present. The intensity response data of all three ERG measures were fitted with a generalized Naka-Rushton function to derive the saturated amplitude (Vmax), semisaturation constant (K), and slope (n) parameters. The effect of age on the fit parameters was assessed with linear regression, and test-retest reliability was assessed with the Wilcoxon signed-rank test and Bland-Altman analysis. Holm's correction was applied to account for multiple comparisons. Vmax of BT was significantly smaller than that of PT and the b-wave, while the Vmax of PT and the b-wave were not significantly different from each other. The slope parameter n was smallest for BT and largest for the b-wave, and the differences between the slopes of all three measures were statistically significant. Small differences observed in the mean values of K for the different measures did not reach statistical significance.
The Wilcoxon signed-rank test indicated no significant differences between the two test visits for any of the Naka-Rushton parameters for the three ERG measures, and the Bland-Altman plots indicated that the mean difference between test and retest measurements of the different fit parameters was close to zero and within 6% of the average of the test and retest values of the respective parameters for all three ERG measurements, indicating minimal bias. While the coefficient of reliability (COR, defined as 1.96 times the standard deviation of the test-retest difference) of each fit parameter was more or less comparable across the three ERG measurements, the %COR (COR normalized to the mean of the test and retest measures) was generally larger for BT than for PT and the b-wave for each fit parameter. The Naka-Rushton fit parameters did not show statistically significant changes with age for any of the ERG measures when corrections were applied for multiple comparisons. However, the Vmax of BT demonstrated a weak correlation with age prior to correction for multiple comparisons, and the effect of age on this parameter showed greater significance when the measure was expressed as a ratio of the Vmax of the b-wave from the same subject. The Vmax of the BT amplitude measure of the PhNR was at best weakly correlated with age. None of the other parameters of the Naka-Rushton fit to the intensity response data of either the PhNR or the b-wave showed any systematic changes with age. The test-retest reliability of the fit parameters for PhNR BT amplitude measurements appears to be lower than that of the PhNR PT and b-wave amplitude measurements.
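The generalized Naka-Rushton function fitted here has the form V(I) = Vmax · Iⁿ / (Iⁿ + Kⁿ). A minimal scipy sketch of such a fit, using the abstract's flash strengths but hypothetical, noise-free amplitudes:

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(i, v_max, k, n):
    """Generalized Naka-Rushton intensity-response function."""
    return v_max * i**n / (i**n + k**n)

# Flash strengths from the abstract (phot cd.s/m^2); amplitudes (uV)
# are hypothetical and noise-free for this sketch.
intensity = np.array([0.00625, 0.025, 0.1, 0.4, 1.6, 6.4])
amplitude = naka_rushton(intensity, 25.0, 0.2, 1.0)

popt, _ = curve_fit(naka_rushton, intensity, amplitude, p0=[20.0, 0.1, 1.0])
```

The recovered parameters correspond directly to the quantities compared in the study: the saturated amplitude Vmax, the semisaturation constant K, and the slope n.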
Zhu, Xiaofeng; Feng, Tao; Tayo, Bamidele O; Liang, Jingjing; Young, J Hunter; Franceschini, Nora; Smith, Jennifer A; Yanek, Lisa R; Sun, Yan V; Edwards, Todd L; Chen, Wei; Nalls, Mike; Fox, Ervin; Sale, Michele; Bottinger, Erwin; Rotimi, Charles; Liu, Yongmei; McKnight, Barbara; Liu, Kiang; Arnett, Donna K; Chakravati, Aravinda; Cooper, Richard S; Redline, Susan
2015-01-08
Genome-wide association studies (GWASs) have identified many genetic variants underlying complex traits. Many detected genetic loci harbor variants that associate with multiple, even distinct, traits. Most current analysis approaches focus on single traits, even though the final results from multiple traits are evaluated together. Such approaches miss the opportunity to systematically integrate the phenome-wide data available for genetic association analysis. In this study, we propose a general approach that can integrate association evidence from summary statistics of multiple traits, either correlated, independent, continuous, or binary, which might come from the same or different studies. We allow for trait heterogeneity effects. Population structure and cryptic relatedness can also be controlled. Our simulations suggest that the proposed method has improved statistical power over single-trait analysis in most of the cases we studied. We applied our method to the Continental Origins and Genetic Epidemiology Network (COGENT) African ancestry samples for three blood pressure traits and identified four loci (CHIC2, HOXA-EVX1, IGFBP1/IGFBP3, and CDH17; p < 5.0 × 10(-8)) associated with hypertension-related traits that were missed by a single-trait analysis in the original report. Six additional loci with suggestive association evidence (p < 5.0 × 10(-7)) were also observed, including CACNA1D and WNT3. Our study strongly suggests that analyzing multiple phenotypes can improve statistical power and that such analysis can be executed with the summary statistics from GWASs. Our method also provides a way to study a cross phenotype (CP) association by using summary statistics from GWASs of multiple phenotypes. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
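The basic idea behind combining correlated per-trait summary statistics can be sketched as a quadratic-form test: if the vector of z-scores for one SNP is N(0, Σ) under the null (Σ being the between-trait correlation, estimable from genome-wide null z-scores), then z'Σ⁻¹z follows a chi-square with k degrees of freedom. This is only the simplest such test, not the paper's full method (which additionally handles trait heterogeneity and sample structure); all numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def cross_phenotype_stat(z, sigma):
    """Combine correlated per-trait z-scores for one SNP.

    Under the null z ~ N(0, sigma), so z' sigma^{-1} z ~ chi2_k."""
    z = np.asarray(z, float)
    stat = float(z @ np.linalg.solve(np.asarray(sigma, float), z))
    return stat, chi2.sf(stat, df=len(z))

# Hypothetical z-scores for three correlated traits at one SNP,
# with a hypothetical trait-correlation matrix
sigma = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])
stat, p = cross_phenotype_stat([3.1, 2.8, 1.9], sigma)
```

Moderate evidence spread across several traits, none of it genome-wide significant on its own, can combine into a much smaller joint p-value, which is the power gain the abstract reports.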
Oginni, Ao; Udoye, C I
2004-12-01
The present study was performed to compare the incidence of endodontic flare-ups in single-visit versus multiple-visit treatment procedures, and to establish the relationship between pre-operative and post-obturation pain in patients attending for endodontic therapy at a Nigerian teaching hospital. Patients were randomly assigned to either a single-visit or a multiple-visit group. Data collected at the root canal treatment appointment and at recall visits (1st, 7th and 30th day post-obturation) included pulp vitality status, the presence or absence of pre-operative pain, and the presence and degree of post-obturation pain. Endodontic flare-ups were defined as either the patient's report of pain not controlled with over-the-counter medication and/or increasing swelling. The compiled data were analyzed using chi-square tests where applicable, with P < 0.05 taken as significant. Ten endodontic flare-ups (8.1%) were recorded in the multiple-visit group compared with 19 (18.3%) in the single-visit group, P = 0.02. For both single- and multiple-visit procedures, there were statistically significant correlations between pre-operative and post-obturation pain (P = 0.002 and P = 0.0004, respectively). Teeth with vital pulps had the lowest frequency of post-obturation pain (48.8%), while those with non-vital pulps had the highest frequency of post-obturation pain (50.3%), P = 0.9. Although the present study found higher incidences of post-obturation pain and flare-ups following single-visit procedures, single-visit endodontic therapy has been shown to be a safe and effective alternative to multiple-visit treatment.
NASA Astrophysics Data System (ADS)
Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said
2014-09-01
In the presence of multicollinearity and multiple outliers, statistical inference for the linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which are reported to be less sensitive to the presence of outliers, and ridge regression, which is employed to tackle the multicollinearity problem. In order to mitigate both problems, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach was examined under the simultaneous presence of multicollinearity and multiple outliers in multiple linear regression. This study compared the performance of several well-known robust estimators, M, MM, and RIDGE, and of the robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM), and Ridge MM (RMM), in such a situation. Results of the study showed that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), the RMM and RIDGE estimators perform similarly and outperform the other estimators, regardless of the number of observations, level of collinearity, and percentage of outliers used. However, when outliers occurred in only a single direction (the y-direction), the WRMM estimator was the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
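The flavor of a robust ridge estimator can be sketched generically as iteratively reweighted ridge regression with Huber weights. This is a hedged illustration, not the study's exact WRM, WRMM, or RMM estimators; the penalty, tuning constant, and simulated data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def huber_ridge(X, y, lam=1.0, c=1.345, n_iter=50):
    """Generic robust ridge sketch: IRLS with Huber weights plus an L2 penalty."""
    n, p = X.shape
    # Plain ridge solution as the starting point.
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale (MAD)
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))      # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
    return beta

# Near-collinear predictors plus gross y-direction outliers (all simulated).
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)                    # collinear with x1
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 1.0 * x2 + 0.5 * rng.standard_normal(n)     # true betas: 2 and 1
y[:10] += 25.0                                             # 5% y-outliers

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rr = huber_ridge(X, y, lam=1.0)
print("OLS:", np.round(beta_ols, 2), " robust ridge:", np.round(beta_rr, 2))
```

Because x1 and x2 are nearly collinear, the individual coefficients are weakly identified, but the robust ridge fit should recover their sum (≈3) while downweighting the outliers.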
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…
Reporting of analyses from randomized controlled trials with multiple arms: a systematic review.
Baron, Gabriel; Perrodeau, Elodie; Boutron, Isabelle; Ravaud, Philippe
2013-03-27
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms. The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form. In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned. Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
de Lusignan, Simon; Kumarapeli, Pushpa; Chan, Tom; Pflug, Bernhard; van Vlymen, Jeremy; Jones, Beryl; Freeman, George K
2008-09-08
There is a lack of tools to evaluate and compare electronic patient record (EPR) systems to inform a rational choice or development agenda. To develop a tool kit to measure the impact of different EPR system features on the consultation, we first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use these data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2).
Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS PCS (Egton Medical Information Systems Limited, Leeds, UK) (P = .007), iSoft Synergy (iSOFT, Banbury, UK) (P = .014), and INPS Vision (INPS, London, UK) (P = .006) facilitating faster coding. In contrast, prescribing was fastest with EMIS LV (mean 23.7 s, 95% CI 20.5-26.8), but nonparametric comparison showed no statistically significant difference. UML sequence diagrams showed that the simplest BP recording interface was not the easiest to use, as users spent longer navigating or looking up previous blood pressures separately. Complex interfaces with free-text boxes left clinicians unsure of what to add. The ALFA method allows the precise observation of the clinical consultation. It enables rigorous comparison of core elements of EPR systems. Pilot data suggests its capacity to demonstrate differences between systems. Its outputs could provide the evidence base for making more objective choices between systems.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
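The voxel-by-voxel likelihood ratio idea can be sketched for Gaussian data with a known noise level. The effect size `delta`, noise `sigma`, evidence threshold `k`, and sample sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "voxel" data: n scans per voxel; inactive voxels have mean 0, active
# voxels have mean delta; noise SD sigma is taken as known (all assumed).
n, delta, sigma = 20, 0.5, 1.0
null_voxels = rng.normal(0.0, sigma, size=(1000, n))      # truly inactive
active_voxels = rng.normal(delta, sigma, size=(1000, n))  # truly active

def log_lr(x):
    """Log likelihood ratio of H1: mean=delta vs H0: mean=0 for each voxel.
    For Gaussian data with known sigma this reduces to a function of the mean."""
    xbar = x.mean(axis=-1)
    return n * (delta * xbar - delta**2 / 2) / sigma**2

k = 8.0  # evidence threshold on the LR scale (an assumed cutoff)
fp = np.mean(np.exp(log_lr(null_voxels)) >= k)    # rate of misleading evidence
tp = np.mean(np.exp(log_lr(active_voxels)) >= k)
print(f"P(LR >= {k} | inactive) = {fp:.3f},  P(LR >= {k} | active) = {tp:.3f}")
```

The point of the sketch: with a fixed evidence threshold, the probability of misleading evidence at inactive voxels stays small per comparison, so it need not be traded against power via a family-wise correction.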
Apparently abnormal Wechsler Memory Scale index score patterns in the normal population.
Carrasco, Roman Marcus; Grups, Josefine; Evans, Brittney; Simco, Edward; Mittenberg, Wiley
2015-01-01
Interpretation of the Wechsler Memory Scale-Fourth Edition may involve examination of multiple memory index score contrasts and similar comparisons with Wechsler Adult Intelligence Scale-Fourth Edition ability indexes. Standardization sample data suggest that 15-point differences between any specific pair of index scores are relatively uncommon in normal individuals, but these base rates refer to a comparison between a single pair of indexes rather than multiple simultaneous comparisons among indexes. This study provides normative data for the occurrence of multiple index score differences calculated by using Monte Carlo simulations and validated against standardization data. Differences of 15 points between any two memory indexes or between memory and ability indexes occurred in 60% and 48% of the normative sample, respectively. Wechsler index score discrepancies are normally common and therefore not clinically meaningful when numerous such comparisons are made. Explicit prior interpretive hypotheses are necessary to reduce the number of index comparisons and associated false-positive conclusions. Monte Carlo simulation accurately predicts these false-positive rates.
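The Monte Carlo logic described above can be sketched as follows. The number of indexes, the uniform inter-index correlation, and the sample size are illustrative assumptions, not the Wechsler standardization values, so the simulated rates will differ from the study's 60% and 48% figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: five index scores, mean 100, SD 15, uniform correlation 0.6.
n_indexes, mean, sd, rho = 5, 100.0, 15.0, 0.6
cov = sd**2 * (rho * np.ones((n_indexes, n_indexes)) + (1 - rho) * np.eye(n_indexes))

scores = rng.multivariate_normal(np.full(n_indexes, mean), cov, size=100_000)

# Base rate for ONE pre-specified pair differing by >= 15 points, versus
# ANY of the 10 possible pairs differing by >= 15 points.
single_pair = np.mean(np.abs(scores[:, 0] - scores[:, 1]) >= 15)
any_pair = np.mean(scores.max(axis=1) - scores.min(axis=1) >= 15)

print(f"P(|diff| >= 15) for one fixed pair: {single_pair:.2f}")
print(f"P(any pairwise diff >= 15):        {any_pair:.2f}")
```

The "any pair" rate is always much larger than the single-pair base rate, which is exactly why a 15-point discrepancy found by scanning many contrasts is not clinically meaningful on its own.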
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
ERIC Educational Resources Information Center
Kosko Karl W.; Singh, Rashmi
2018-01-01
Multiplicative reasoning is a key concept in elementary school mathematics. Item statistics reported by the National Assessment of Educational Progress (NAEP) assessment provide the best current indicator for how well elementary students across the U.S. understand this, and other concepts. However, beyond expert reviews and statistical analysis,…
The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles
ERIC Educational Resources Information Center
Schimmack, Ulrich
2012-01-01
Cohen (1962) pointed out the importance of statistical power for psychology as a science, but statistical power of studies has not increased, while the number of studies in a single article has increased. It has been overlooked that multiple studies with modest power have a high probability of producing nonsignificant results because power…
Analysis of Multiple Contingency Tables by Exact Conditional Tests for Zero Partial Association.
ERIC Educational Resources Information Center
Kreiner, Svend
The tests for zero partial association in a multiple contingency table have gained new importance with the introduction of graphical models. It is shown how these may be performed as exact conditional tests, using as test criteria either the ordinary likelihood ratio, the standard chi-squared statistic, or any other appropriate statistics. A…
ERIC Educational Resources Information Center
White, Desley
2015-01-01
Two practical activities are described, which aim to support critical thinking about statistics as they concern multiple outcomes testing. Formulae are presented in Microsoft Excel spreadsheets, which are used to calculate the inflation of error associated with the quantity of tests performed. This is followed by a decision-making exercise, where…
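The error inflation such spreadsheets compute follows directly from the family-wise error rate formula for independent tests, FWER = 1 − (1 − α)^m. A minimal sketch (in Python rather than Excel) also shows the Bonferroni and Šidák per-test levels that restore control:

```python
# FWER for m independent tests each run at level alpha, and two standard
# per-test corrections that bring the family-wise rate back to ~alpha.
alpha = 0.05

for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m                 # inflated family-wise rate
    bonf = alpha / m                            # Bonferroni per-test level
    sidak = 1 - (1 - alpha) ** (1 / m)          # Sidak per-test level
    fwer_bonf = 1 - (1 - bonf) ** m             # family-wise rate after Bonferroni
    print(f"m={m:2d}  inflated FWER={fwer:.3f}  Bonferroni alpha={bonf:.4f}  "
          f"Sidak alpha={sidak:.4f}  corrected FWER={fwer_bonf:.3f}")
```

For example, ten tests at α = 0.05 already give roughly a 40% chance of at least one false positive.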
NASA Astrophysics Data System (ADS)
Wang, Ji; Fischer, Debra A.; Horch, Elliott P.; Xie, Ji-Wei
2015-06-01
As hundreds of gas giant planets have been discovered, we study how these planets form and evolve in different stellar environments, specifically in multiple stellar systems. In such systems, stellar companions may have a profound influence on gas giant planet formation and evolution via several dynamical effects such as truncation and perturbation. We select 84 Kepler Objects of Interest (KOIs) with gas giant planet candidates. We obtain high-angular resolution images using telescopes with adaptive optics (AO) systems. Together with the AO data, we use archival radial velocity data and dynamical analysis to constrain the presence of stellar companions. We detect 59 stellar companions around 40 KOIs, for which we develop methods of testing their physical association. These methods are based on color information and galactic stellar population statistics. We find evidence of suppressive planet formation within 20 AU by comparing stellar multiplicity. The stellar multiplicity rate (MR) for planet host stars is 0(+5/−0)% within 20 AU. In comparison, the stellar MR is 18% ± 2% for the control sample, i.e., field stars in the solar neighborhood. The stellar MR for planet host stars is 34% ± 8% for separations between 20 and 200 AU, which is higher than the control sample at 12% ± 2%. Beyond 200 AU, stellar MRs are comparable between planet host stars and the control sample. We discuss the implications of the results on gas giant planet formation and evolution.
Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Newman, J. C., Jr.
1992-01-01
A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
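The crack-interaction calculation itself is beyond a short sketch, but the underlying fatigue-life integration can be illustrated for a single center crack with the Paris law, da/dN = C·(ΔK)^m and ΔK = Δσ·√(πa). The constants, stress range, and crack sizes below are illustrative assumptions, not the paper's values, and interaction effects between multiple cracks are omitted.

```python
import math

# Assumed Paris-law constants and loading (units: m, MPa, MPa*sqrt(m)).
C, m = 1e-11, 3.0          # material constants
dS = 100.0                 # applied stress range
a, a_final = 0.001, 0.01   # grow crack half-length from 1 mm to 10 mm
da = 1e-5                  # integration step in crack length

# Integrate cycles: dN = da / (C * dK**m), with dK updated as the crack grows.
cycles = 0.0
while a < a_final:
    dK = dS * math.sqrt(math.pi * a)
    cycles += da / (C * dK ** m)
    a += da

print(f"predicted life: {cycles:,.0f} cycles")
```

In an MSD model the same integration is run per crack, but ΔK is amplified by interaction factors as crack tips approach each other, which shortens the predicted life.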
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
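As a hedged illustration of why plug-in estimation fails in high dimensions and how covariance regularization helps, the sketch below uses a generic linear shrinkage toward a scaled identity with a fixed weight. This is not one of the eight methods compared in the paper; the weight and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# High-dimensional setting: p close to n makes the sample covariance nearly
# singular, so log|S| collapses toward -infinity even when the true covariance
# is the identity (true log-determinant 0).
n, p = 60, 50
X = rng.standard_normal((n, p))          # true covariance = I_p
S = np.cov(X, rowvar=False)

# Generic linear shrinkage toward a scaled identity (weight lam is assumed
# fixed here; data-driven choices of lam exist in the literature).
lam = 0.3
target = (np.trace(S) / p) * np.eye(p)
S_shrunk = (1 - lam) * S + lam * target

sign, logdet_S = np.linalg.slogdet(S)
_, logdet_shrunk = np.linalg.slogdet(S_shrunk)
print(f"true log|Sigma| = 0.0, sample estimate: {logdet_S:.1f}, "
      f"shrinkage estimate: {logdet_shrunk:.1f}")
```

The sample estimate is wildly biased downward because the smallest eigenvalues of S are near zero; shrinkage lifts those eigenvalues and moves the log-determinant much closer to the truth.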
Multiple Contact Dates and SARS Incubation Periods
2004-01-01
Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
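The spreadsheet logic the abstract describes reduces to simple date arithmetic: with several possible contact dates, the incubation period is only known to lie in an interval. A minimal sketch with hypothetical contact dates (not real case data) is:

```python
from datetime import date

# Hypothetical line list: each patient has a symptom-onset date and one or
# more possible exposure (contact) dates.
cases = {
    "A": {"onset": date(2003, 3, 20),
          "contacts": [date(2003, 3, 10), date(2003, 3, 14)]},
    "B": {"onset": date(2003, 3, 25),
          "contacts": [date(2003, 3, 18)]},
}

# Shortest possible incubation = onset minus the LATEST contact date;
# longest possible incubation = onset minus the EARLIEST contact date.
for pid, c in cases.items():
    shortest = (c["onset"] - max(c["contacts"])).days
    longest = (c["onset"] - min(c["contacts"])).days
    print(f"patient {pid}: incubation between {shortest} and {longest} days")
```

A single contact date collapses the interval to a point, which is why patients with multiple contacts cannot be fed directly into standard point-estimate techniques.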
Bull, Marta; Learn, Gerald; Genowati, Indira; McKernan, Jennifer; Hitti, Jane; Lockhart, David; Tapia, Kenneth; Holte, Sarah; Dragavon, Joan; Coombs, Robert; Mullins, James; Frenkel, Lisa
2009-09-22
Compartmentalization of HIV-1 between the genital tract and blood was noted in half of 57 women included in 12 studies primarily using cell-free virus. To further understand differences between genital tract and blood viruses of women with chronic HIV-1 infection, cell-free and cell-associated virus populations were sequenced from these tissues, reasoning that integrated viral DNA includes variants archived from earlier in infection and provides a greater array of genotypes for comparison. Multiple sequences from single-genome amplification of HIV-1 RNA and DNA from the genital tract and blood of each woman were compared in a cross-sectional study. Maximum likelihood phylogenies were evaluated for evidence of compartmentalization using four statistical tests. Genital tract and blood HIV-1 appeared compartmentalized in 7/13 women by ≥2 statistical analyses. These subjects' phylograms were characterized by low-diversity genital-specific viral clades interspersed between clades containing both genital and blood sequences. Many of the genital-specific clades contained monotypic HIV-1 sequences. In 2/7 women, HIV-1 populations were significantly compartmentalized across all four statistical tests; both had low-diversity genital tract-only clades. Collapsing monotypic variants into a single sequence diminished the prevalence and extent of compartmentalization. Viral sequences did not demonstrate tissue-specific signature amino acid residues, differential immune selection, or co-receptor usage. In women with chronic HIV-1 infection, multiple identical sequences suggest proliferation of HIV-1-infected cells, and low-diversity tissue-specific phylogenetic clades are consistent with bursts of viral replication. These monotypic and tissue-specific viruses provide statistical support for compartmentalization of HIV-1 between the female genital tract and blood.
However, the intermingling of these clades with clades comprised of both genital and blood sequences and the absence of tissue-specific genetic features suggests compartmentalization between blood and genital tract may be due to viral replication and proliferation of infected cells, and questions whether HIV-1 in the female genital tract is distinct from blood.
Galfalvy, Hanga C; Erraji-Benchekroun, Loubna; Smyrniotopoulos, Peggy; Pavlidis, Paul; Ellis, Steven P; Mann, J John; Sibille, Etienne; Arango, Victoria
2003-01-01
Background Genomic studies of complex tissues pose unique analytical challenges for assessment of data quality, performance of statistical methods used for data extraction, and detection of differentially expressed genes. Ideally, to assess the accuracy of gene expression analysis methods, one needs a set of genes which are known to be differentially expressed in the samples and which can be used as a "gold standard". We introduce the idea of using sex-chromosome genes as an alternative to spiked-in control genes or simulations for assessment of microarray data and analysis methods. Results Expression of sex-chromosome genes was used as a set of true internal biological controls to compare alternate probe-level data extraction algorithms (Microarray Suite 5.0 [MAS5.0], Model Based Expression Index [MBEI] and Robust Multi-array Average [RMA]), to assess microarray data quality and to establish some statistical guidelines for analyzing large-scale gene expression. These approaches were implemented on a large new dataset of human brain samples. RMA-generated gene expression values were markedly less variable and more reliable than MAS5.0 and MBEI-derived values. A statistical technique controlling the false discovery rate was applied to adjust for multiple testing, as an alternative to the Bonferroni method, and showed no evidence of false negative results. Fourteen probesets, representing nine Y- and two X-chromosome linked genes, displayed significant sex differences in brain prefrontal cortex gene expression. Conclusion In this study, we have demonstrated the use of sex genes as true biological internal controls for genomic analysis of complex tissues, and suggested analytical guidelines for testing alternate oligonucleotide microarray data extraction protocols and for adjusting multiple statistical analyses of differentially expressed genes. 
Our results also provided evidence for sex differences in gene expression in the brain prefrontal cortex, supporting the notion of a putative direct role of sex-chromosome genes in differentiation and maintenance of sexual dimorphism of the central nervous system. Importantly, these analytical approaches are applicable to all microarray studies that include male and female human or animal subjects. PMID:12962547
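The abstract does not name its FDR-controlling technique; one standard choice for adjusting multiple testing at a target FDR level, in contrast to Bonferroni, is the Benjamini-Hochberg step-up procedure. A minimal sketch with made-up p-values (not the study's data) is:

```python
# Benjamini-Hochberg step-up procedure: sort p-values, find the largest rank k
# with p_(k) <= k*q/m, and reject the k smallest hypotheses.
def benjamini_hochberg(pvals, q=0.05):
    """Return the (original) indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print("BH rejections:", benjamini_hochberg(pvals, q=0.05))
print("Bonferroni rejections:", [i for i, p in enumerate(pvals) if p < 0.05 / len(pvals)])
```

With these illustrative p-values BH rejects more hypotheses than Bonferroni, which is the usual trade-off: control of the expected proportion of false discoveries rather than of any false positive at all.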
ERIC Educational Resources Information Center
Stipancic, Kaila L.; Tjaden, Kris; Wilding, Gregory
2016-01-01
Purpose: This study obtained judgments of sentence intelligibility using orthographic transcription for comparison with previously reported intelligibility judgments obtained using a visual analog scale (VAS) for individuals with Parkinson's disease and multiple sclerosis and healthy controls (K. Tjaden, J. E. Sussman, & G. E. Wilding, 2014).…
Mesterton, Johan; Lindgren, Peter; Ekenberg Abreu, Anna; Ladfors, Lars; Lilja, Monica; Saltvedt, Sissel; Amer-Wåhlin, Isis
2016-05-31
Unwarranted variation in care practice and outcomes has gained attention and inter-hospital comparisons are increasingly being used to highlight and understand differences between hospitals. Adjustment for case mix is a prerequisite for meaningful comparisons between hospitals with different patient populations. The objective of this study was to identify and quantify maternal characteristics that impact a set of important indicators of health outcomes, resource use and care process and which could be used for case mix adjustment of comparisons between hospitals. In this register-based study, 139 756 deliveries in 2011 and 2012 were identified in regional administrative systems from seven Swedish regions, which together cover 67% of all deliveries in Sweden. Data were linked to the Medical birth register and Statistics Sweden's population data. A number of important indicators in childbirth care were studied: Caesarean section (CS), induction of labour, length of stay, perineal tears, haemorrhage > 1000 ml and post-partum infections. Sociodemographic and clinical characteristics deemed relevant for case mix adjustment of outcomes and resource use were identified based on previous literature and based on clinical expertise. Adjustment using logistic and ordinary least squares regression analysis was performed to quantify the impact of these characteristics on the studied indicators. Almost all case mix factors analysed had an impact on CS rate, induction rate and length of stay and the effect was highly statistically significant for most factors. Maternal age, parity, fetal presentation and multiple birth were strong predictors of all these indicators but a number of additional factors such as born outside the EU, body mass index (BMI) and several complications during pregnancy were also important risk factors. 
A number of maternal characteristics had a noticeable impact on risk of perineal tears, while the impact of case mix factors was less pronounced for risk of haemorrhage > 1000 ml and post-partum infections. Maternal characteristics have a large impact on care process, resource use and outcomes in childbirth care. For meaningful comparisons between hospitals and benchmarking, a broad spectrum of sociodemographic and clinical maternal characteristics should be accounted for.
Local and systemic effect of transfection-reagent formulated DNA vectors on equine melanoma.
Mählmann, Kathrin; Feige, Karsten; Juhls, Christiane; Endmann, Anne; Schuberth, Hans-Joachim; Oswald, Detlef; Hellige, Maren; Doherr, Marcus; Cavalleri, Jessika-M V
2015-05-14
Equine melanoma has a high incidence in grey horses. Xenogenic DNA vaccination may represent a promising therapeutic approach against equine melanoma as it successfully induced an immunological response in other species suffering from melanoma and in healthy horses. In a clinical study, twenty-seven grey, melanoma-bearing horses were assigned to three groups (n = 9) and vaccinated on days 1, 22, and 78 with DNA vectors encoding for equine (eq) IL-12 and IL-18 alone or in combination with either human glycoprotein (hgp) 100 or human tyrosinase (htyr). Horses were vaccinated intramuscularly, and one selected melanoma was locally treated by intradermal peritumoral injection. Prior to each injection and on day 120, the sizes of up to nine melanoma lesions per horse were measured by caliper and ultrasound. Specific serum antibodies against hgp100 and htyr were measured using cell-based flow-cytometric assays. An Analysis of Variance (ANOVA) for repeated measurements was performed to identify statistically significant influences on the relative tumor volume. For post-hoc testing a Tukey-Kramer Multiple-Comparison Test was performed to compare the relative volumes on the different examination days. An ANOVA for repeated measurements was performed to analyse changes in body temperature over time. A one-way ANOVA was used to evaluate differences in body temperature between the groups. A p-value < 0.05 was considered significant for all statistical tests applied. In all groups, the relative tumor volume decreased significantly to 79.1 ± 26.91% by day 120 (p < 0.0001, Tukey-Kramer Multiple-Comparison Test). Affiliation to treatment group, local treatment and examination modality had no significant influence on the results (ANOVA for repeated measurements). Neither a cellular nor a humoral immune response directed against htyr or hgp100 was detected. Horses had an increased body temperature on the day after vaccination. 
This is the first clinical report on a systemic effect against equine melanoma following treatment with DNA vectors encoding eqIL12 and eqIL18 and formulated with a transfection reagent. Addition of DNA vectors encoding hgp100 respectively htyr did not potentiate this effect.
Local and systemic effect of transfection-reagent formulated DNA vectors on equine melanoma.
Mählmann, Kathrin; Feige, Karsten; Juhls, Christiane; Endmann, Anne; Schuberth, Hans-Joachim; Oswald, Detlef; Hellige, Maren; Doherr, Marcus; Cavalleri, Jessika-M V
2015-06-11
Equine melanoma has a high incidence in grey horses. Xenogenic DNA vaccination may represent a promising therapeutic approach against equine melanoma as it successfully induced an immunological response in other species suffering from melanoma and in healthy horses. In a clinical study, twenty-seven, grey, melanoma-bearing, horses were assigned to three groups (n = 9) and vaccinated on days 1, 22, and 78 with DNA vectors encoding for equine (eq) IL-12 and IL-18 alone or in combination with either human glycoprotein (hgp) 100 or human tyrosinase (htyr). Horses were vaccinated intramuscularly, and one selected melanoma was locally treated by intradermal peritumoral injection. Prior to each injection and on day 120, the sizes of up to nine melanoma lesions per horse were measured by caliper and ultrasound. Specific serum antibodies against hgp100 and htyr were measured using cell based flow-cytometric assays. An Analysis of Variance (ANOVA) for repeated measurements was performed to identify statistically significant influences on the relative tumor volume. For post-hoc testing a Tukey-Kramer Multiple-Comparison Test was performed to compare the relative volumes on the different examination days. An ANOVA for repeated measurements was performed to analyse changes in body temperature over time. A one-way ANOVA was used to evaluate differences in body temperature between the groups. A p-value < 0.05 was considered significant for all statistical tests applied. In all groups, the relative tumor volume decreased significantly to 79.1 ± 26.91% by day 120 (p < 0.0001, Tukey-Kramer Multiple-Comparison Test). Affiliation to treatment group, local treatment and examination modality had no significant influence on the results (ANOVA for repeated measurements). Neither a cellular nor a humoral immune response directed against htyr or hgp100 was detected. Horses had an increased body temperature on the day after vaccination. 
This is the first clinical report of a systemic effect against equine melanoma following treatment with DNA vectors encoding eqIL12 and eqIL18 and formulated with a transfection reagent. Addition of DNA vectors encoding hgp100 or htyr did not potentiate this effect.
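The Tukey-Kramer procedure used above exists because repeated pairwise testing inflates the family-wise error rate. A minimal sketch of why some correction is needed, using hypothetical numbers rather than the study's data (three groups give three pairwise comparisons):

```python
# Family-wise error rate (FWER) when running m independent tests at level alpha,
# and the Bonferroni-adjusted per-test level that restores FWER <= alpha.
def fwer(alpha, m):
    """Probability of at least one false positive among m independent tests."""
    return 1 - (1 - alpha) ** m

def bonferroni(alpha, m):
    """Per-comparison significance level after Bonferroni correction."""
    return alpha / m

m = 3  # three groups -> three pairwise comparisons
print(round(fwer(0.05, m), 4))        # 0.1426 -- uncorrected FWER, well above 0.05
print(round(bonferroni(0.05, m), 4))  # 0.0167 -- corrected per-test alpha
```

Tukey-Kramer controls the same family-wise error rate but less conservatively than Bonferroni, by using the studentized range distribution.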
Meredith, Dennis S; Losina, Elena; Neumann, Gesa; Yoshioka, Hiroshi; Lang, Philipp K; Katz, Jeffrey N
2009-10-29
In this cross-sectional study, we conducted a comprehensive assessment of all articular elements that could be measured using knee MRI. We assessed the association of pathological change in multiple articular structures involved in the pathoanatomy of osteoarthritis. Knee MRI scans from patients over 45 years old were assessed using a semi-quantitative knee MRI assessment form. The form included six distinct elements: cartilage, bone marrow lesions, osteophytes, subchondral sclerosis, joint effusion and synovitis. Each type of pathology was graded using an ordinal scale with a value of zero indicating no pathology and higher values indicating increasingly severe levels of pathology. The principal dependent variable for comparison was the mean cartilage disease score (CDS), which captured the aggregate extent of involvement of articular cartilage. The distribution of CDS was compared to the individual and cumulative distributions of each articular element using the Chi-squared test. The correlations between pathological change in the various articular structures were assessed in a Spearman correlation table. Data from 140 patients were available for review. The cohort had a median age of 61 years (range 45-89) and was 61% female. The cohort included a wide spectrum of OA severity. Our analysis showed a statistically significant trend towards pathological change involving more articular elements as CDS worsened (p-value for trend < 0.0001). Comparison of CDS to change in the severity of pathology of individual articular elements showed statistically significant trends towards more severe pathology as CDS worsened for osteophytes (p-value for trend < 0.0001), bone marrow lesions (p = 0.0003), and subchondral sclerosis (p = 0.009), but not joint effusion or synovitis. There was a moderate correlation between cartilage damage, osteophytes and BMLs as well as a moderate correlation between joint effusion and synovitis. 
However, cartilage damage and osteophytes were only weakly associated with synovitis or joint effusion. Our results support an inter-relationship of multiple articular elements in the pathoanatomy of knee OA. Prospective studies of OA pathogenesis in humans are needed to correlate these findings to clinically relevant outcomes such as pain and function.
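The Spearman correlations reported above rank each variable before correlating, which suits ordinal severity grades. A stdlib-only sketch with hypothetical grades (not the study's data; ties receive average ranks):

```python
def ranks(xs):
    """Ranks (1-based), averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical ordinal severity grades: cartilage disease score vs. osteophytes
cds = [0, 1, 1, 2, 3, 3, 4]
osteo = [0, 0, 1, 2, 2, 3, 4]
print(round(spearman(cds, osteo), 3))  # 0.944
```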
A Comparison of Academic Status Statistics, Fall 1981 to Fall 1983. Report 83-3.
ERIC Educational Resources Information Center
Parrott, Marietta
A comparison of the number and percent of students subject to academic dismissal, academic probation, progress probation, the dean's list (GPA 2.00), and the president's list (GPA 3.00) at College of the Sequoias was drawn for the years 1981, 1982, and 1983. Statistics showed the following changes: (1) the number of students dismissed due to poor…
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2010 CFR
2010-07-01
... introductory statistics texts. ... Student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
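The comparison the regulation describes reduces to a one-sample t-statistic against the background value; a sketch with hypothetical monitoring concentrations (the CFR appendix itself specifies the exact procedure and significance levels):

```python
import statistics

def t_statistic(sample, background_mean):
    """One-sample t: compare a monitoring parameter's mean with its background value."""
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (mean - background_mean) / (sd / n ** 0.5)

# hypothetical downgradient concentrations vs. an initial background of 10.0
print(round(t_statistic([12.1, 11.4, 13.0, 12.5], 10.0), 2))  # 6.66
```

The calculated value would then be compared against the tabulated critical t for the chosen significance level and n - 1 degrees of freedom.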
Role of diversity in ICA and IVA: theory and applications
NASA Astrophysics Data System (ADS)
Adalı, Tülay
2016-05-01
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity (statistical properties) and can be posed to simultaneously account for multiple types of diversity such as higher-order statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask the question whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
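One diversity type mentioned, higher-order statistics, is commonly summarized by excess kurtosis, which vanishes for Gaussians and is what the simplest ICA contrasts exploit. A hedged stdlib sketch (illustrative of the statistical property only, not of the ICA algorithms themselves):

```python
import random
import statistics

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; zero for Gaussian data."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(100_000)]
# symmetric exponential = Laplace distribution (excess kurtosis 3, super-Gaussian)
laplace = [random.expovariate(1) * random.choice([-1, 1]) for _ in range(100_000)]

print(round(excess_kurtosis(gauss), 1))    # close to 0 for Gaussian data
print(round(excess_kurtosis(laplace), 1))  # close to 3 for Laplacian data
```

A nonzero excess kurtosis is one signature ICA can use to separate sources; i.i.d. Gaussian sources offer none, which is why IVA's extra diversity type (dependence across data sets) widens the identifiable class.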
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process should be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, interval number judgements are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives. A new algorithm for solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
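The notions of reciprocity and consistency used above can be illustrated on a crisp (non-interval) multiplicative reciprocal matrix; the interval-valued definitions in the paper are more involved, so this is only a toy sketch:

```python
from itertools import permutations

def is_reciprocal(A, tol=1e-9):
    """Multiplicative reciprocity: a_ij * a_ji = 1 for all entries."""
    n = len(A)
    return all(abs(A[i][j] * A[j][i] - 1) < tol for i in range(n) for j in range(n))

def consistency_error(A):
    """Max violation of the crisp consistency condition a_ik = a_ij * a_jk."""
    n = len(A)
    return max(abs(A[i][k] - A[i][j] * A[j][k])
               for i in range(n) for j in range(n) for k in range(n))

def permuted(A, p):
    """Reorder the alternatives of a comparison matrix by permutation p."""
    n = len(A)
    return [[A[p[i]][p[j]] for j in range(n)] for i in range(n)]

# pairwise comparisons derived from weights (4, 2, 1): perfectly consistent
A = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
print(is_reciprocal(A))                # True
print(consistency_error(A))            # 0.0
# crisp consistency is invariant under any permutation of the alternatives
errs = {consistency_error(permuted(A, p)) for p in permutations(range(3))}
print(errs)                            # {0.0}
```

For interval matrices this invariance fails, which is why the paper checks approximation-consistency across permutations generated by the exchange method.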
Meyer, Hans Jonas; Leifels, Leonard; Schob, Stefan; Garnov, Nikita; Surov, Alexey
2018-01-01
Nowadays, multiparametric investigations of head and neck squamous cell carcinoma (HNSCC) are established. These approaches can better characterize tumor biology and behavior. Diffusion weighted imaging (DWI) can, by means of the apparent diffusion coefficient (ADC), quantitatively characterize different tissue compartments. Dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) reflects perfusion and vascularization of tissues. Recently, histogram analysis of different images has emerged as a diagnostic approach that can provide more information about tissue heterogeneity. The purpose of this study was to analyze possible associations between DWI and DCE parameters derived from histogram analysis in patients with HNSCC. Overall, 34 patients, 9 women and 25 men, mean age 56.7±10.2 years, with different HNSCC were involved in the study. DWI was obtained using an axial echo planar imaging sequence with b-values of 0 and 800 s/mm². Dynamic T1w DCE sequence after intravenous application of contrast medium was performed for estimation of the following perfusion parameters: volume transfer constant (Ktrans), volume of the extravascular extracellular leakage space (Ve), and diffusion of contrast medium from the extravascular extracellular leakage space back to the plasma (Kep). Both ADC and perfusion parameter maps were processed offline in DICOM format with a custom-made Matlab-based application. Thereafter, polygonal ROIs were manually drawn on the transferred maps on each slice. For every parameter, mean, maximal, minimal, and median values, as well as the 10th, 25th, 75th, and 90th percentiles, kurtosis, skewness, and entropy were estimated. Correlation analysis identified multiple statistically significant correlations between the investigated parameters. Ve-related parameters correlated well with different ADC values. 
In particular, the 10th and 75th percentiles, mode, and median values showed stronger correlations in comparison to the other parameters. The calculated correlation coefficients ranged from 0.62 to 0.69. Furthermore, Ktrans-related parameters showed multiple slight to moderate significant correlations with different ADC values. The strongest correlations were identified between ADC P75 and Ktrans min (ρ=0.58, P=0.0007), and ADC P75 and Ktrans P10 (ρ=0.56, P=0.001). Only four Kep-related parameters correlated statistically significantly with ADC fractions. The strongest correlation was found between Kep max and ADC mode (ρ=-0.47, P=0.008). Multiple statistically significant correlations between DWI and DCE MRI parameters derived from histogram analysis were identified in HNSCC. Copyright © 2017 Elsevier Inc. All rights reserved.
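Histogram features like those reported (median, percentiles) are simple summaries of an ROI's voxel values. A stdlib sketch with hypothetical ADC values; note that percentile interpolation conventions differ between analysis tools, so the linear-interpolation rule below is an assumption:

```python
import statistics

def percentile(xs, q):
    """Percentile by linear interpolation between closest ranks (one common convention)."""
    s = sorted(xs)
    pos = (len(s) - 1) * q / 100
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def histogram_features(xs):
    """Per-ROI summary statistics of a parameter map, as used in histogram analysis."""
    return {
        "mean": statistics.fmean(xs),
        "median": statistics.median(xs),
        "p10": percentile(xs, 10),
        "p75": percentile(xs, 75),
        "min": min(xs),
        "max": max(xs),
    }

# hypothetical ADC values (x10^-3 mm^2/s) from one ROI
adc = [0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.6, 1.9, 2.1]
f = histogram_features(adc)
print(f["median"], f["p75"])  # 1.2 1.6
```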
Statistical Analysis of Time-Series from Monitoring of Active Volcanic Vents
NASA Astrophysics Data System (ADS)
Lachowycz, S.; Cosma, I.; Pyle, D. M.; Mather, T. A.; Rodgers, M.; Varley, N. R.
2016-12-01
Despite recent advances in the collection and analysis of time-series from volcano monitoring, and the resulting insights into volcanic processes, challenges remain in forecasting and interpreting activity from near real-time analysis of monitoring data. Statistical methods have potential to characterise the underlying structure and facilitate intercomparison of these time-series, and so inform interpretation of volcanic activity. We explore the utility of multiple statistical techniques that could be widely applicable to monitoring data, including Shannon entropy and detrended fluctuation analysis, by their application to various data streams from volcanic vents during periods of temporally variable activity. Each technique reveals changes through time in the structure of some of the data that were not apparent from conventional analysis. For example, we calculate the Shannon entropy (a measure of the randomness of a signal) of time-series from the recent dome-forming eruptions of Volcán de Colima (Mexico) and Soufrière Hills (Montserrat). The entropy of real-time seismic measurements and the count rate of certain volcano-seismic event types from both volcanoes is found to be temporally variable, with these data generally having higher entropy during periods of lava effusion and/or larger explosions. In some instances, the entropy shifts prior to or coincident with changes in seismic or eruptive activity, some of which were not clearly recognised by real-time monitoring. Comparison with other statistics demonstrates the sensitivity of the entropy to the data distribution, but that it is distinct from conventional statistical measures such as coefficient of variation. We conclude that each analysis technique examined could provide valuable insights for interpretation of diverse monitoring time-series.
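Shannon entropy, as used above, quantifies how spread out a signal's amplitude distribution is. A minimal binned-entropy sketch; the bin count and any preprocessing are analysis choices, not something the abstract specifies:

```python
import math
from collections import Counter

def shannon_entropy(series, bins=8):
    """Entropy (bits) of a signal's binned amplitude distribution."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0  # constant signal -> single bin
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in series)
    n = len(series)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(shannon_entropy([1.0] * 64) == 0.0)        # True: a constant signal has zero entropy
print(shannon_entropy(list(range(64)), bins=8))  # 3.0 bits: uniform spread over 8 bins
```

Higher entropy during effusive or explosive periods, as reported above, corresponds to the monitored signal visiting a wider, more even range of amplitudes.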
NASA Astrophysics Data System (ADS)
Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.
2014-12-01
An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in the literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.
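The distinction drawn above between uncertainty (imperfect knowledge of a quantity) and variability (real differences across plants or regions) can be caricatured with nested sampling. Purely hypothetical numbers; the paper's LCA models are far richer:

```python
import random
import statistics

# Variability: true emission factors genuinely differ across plants.
plants = [0.8, 0.9, 1.0, 1.1, 1.3]  # hypothetical kg CO2-eq/kWh

# Uncertainty: each plant's factor is only known through noisy measurement.
rng = random.Random(1)
samples = [[rng.gauss(p, 0.05) for _ in range(1000)] for p in plants]

within = statistics.fmean(statistics.stdev(s) for s in samples)    # ~ measurement uncertainty
between = statistics.stdev(statistics.fmean(s) for s in samples)   # ~ plant-to-plant variability
print(within < between)  # True: variability dominates in this toy example
```

Reporting the two spreads separately, rather than one pooled distribution, is what enables rigorous comparisons between processes.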
NASA Astrophysics Data System (ADS)
Bergant, Klemen; Kajfež-Bogataj, Lučka; Črepinšek, Zalika
2002-02-01
Phenological observations are a valuable source of information for investigating the relationship between climate variation and plant development. Potential climate change in the future will shift the occurrence of phenological phases. Information about future climate conditions is needed in order to estimate this shift. General circulation models (GCM) provide the best information about future climate change. They are able to simulate reliably the most important mean features on a large scale, but they fail on a regional scale because of their low spatial resolution. A common approach to bridging the scale gap is statistical downscaling, which was used to relate the beginning of flowering of Taraxacum officinale in Slovenia with the monthly mean near-surface air temperature for January, February and March in Central Europe. Statistical models were developed and tested with NCAR/NCEP Reanalysis predictor data and EARS predictand data for the period 1960-1999. Prior to developing statistical models, empirical orthogonal function (EOF) analysis was employed on the predictor data. Multiple linear regression was used to relate the beginning of flowering with expansion coefficients of the first three EOF for the January, February and March air temperatures, and a strong correlation was found between them. Developed statistical models were employed on the results of two GCM (HadCM3 and ECHAM4/OPYC3) to estimate the potential shifts in the beginning of flowering for the periods 1990-2019 and 2020-2049 in comparison with the period 1960-1989. The HadCM3 model predicts, on average, 4 days earlier occurrence and ECHAM4/OPYC3 5 days earlier occurrence of flowering in the period 1990-2019. The analogous results for the period 2020-2049 are a 10- and 11-day earlier occurrence.
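The multiple linear regression step can be sketched as ordinary least squares on the EOF expansion coefficients. The predictor and response numbers below are hypothetical stand-ins (two predictors instead of three, invented values), not the study's data:

```python
def fit_linear(X, y):
    """OLS coefficients [b0, b1, ...] via the normal equations (Gauss-Jordan)."""
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    k = len(rows[0])
    # augmented normal-equation system [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] +
         [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(k)]
    for c in range(k):  # Gauss-Jordan elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

# hypothetical: flowering day-of-year vs. two EOF expansion coefficients
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (5.0, 6.0)]
y = [108.0, 107.0, 102.0, 101.0, 96.0]  # exactly 112 - 2*x1 - 1*x2
b0, b1, b2 = fit_linear(X, y)
print(round(b0), round(b1, 2), round(b2, 2))  # 112 -2.0 -1.0
```

Feeding GCM-projected temperatures through such a fitted model is what yields the projected shifts in flowering dates.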
Prado, Jérôme; Mutreja, Rachna; Zhang, Hongchuan; Mehta, Rucha; Desroches, Amy S.; Minas, Jennifer E.; Booth, James R.
2010-01-01
It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionary older neural mechanisms. A central assumption of this hypothesis is that the degree to which a pre-existing mechanism is recycled depends upon the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing including the middle temporal gyrus and inferior frontal gyrus that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts inter-individual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the co-option of evolutionary older neural circuits. PMID:21246667
Friction between various self-ligating brackets and archwire couples during sliding mechanics.
Stefanos, Sennay; Secchi, Antonino G; Coby, Guy; Tanna, Nipul; Mante, Francis K
2010-10-01
The aim of this study was to evaluate the frictional resistance between active and passive self-ligating brackets and 0.019 × 0.025-in stainless steel archwire during sliding mechanics by using an orthodontic sliding simulation device. Maxillary right first premolar active self-ligating brackets In-Ovation R, In-Ovation C (both, GAC International, Bohemia, NY), and SPEED (Strite Industries, Cambridge, Ontario, Canada), and passive self-ligating brackets SmartClip (3M Unitek, Monrovia, Calif), Synergy R (Rocky Mountain Orthodontics, Denver, Colo), and Damon 3mx (Ormco, Orange, Calif) with 0.022-in slots were used. Frictional force was measured by using an orthodontic sliding simulation device attached to a universal testing machine. Each bracket-archwire combination was tested 30 times at 0° angulation relative to the sliding direction. Statistical comparisons were performed with 1-way analysis of variance (ANOVA) followed by Dunn multiple comparisons. The level of statistical significance was set at P < 0.05. The Damon 3mx brackets had the lowest mean static frictional force (8.6 g). The highest mean static frictional force was shown by the SPEED brackets (83.1 g). The other brackets were ranked as follows, from highest to lowest: In-Ovation R, In-Ovation C, SmartClip, and Synergy R. The mean static frictional forces were all statistically different. The ranking of the kinetic frictional forces of bracket-archwire combinations was the same as that for static frictional forces. All bracket-archwire combinations showed significantly different kinetic frictional forces except SmartClip and In-Ovation C, which were not significantly different from each other. Passive self-ligating brackets have lower static and kinetic frictional resistance than do active self-ligating brackets with 0.019 × 0.025-in stainless steel wire. Copyright © 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Zhao, Binbin; Chen, Wei; Jiang, Rui; Zhang, Rui; Wang, Yan; Wang, Ling; Gordon, Lynn; Chen, Ling
2015-09-01
The purpose of this study was to evaluate the cytokine expression profile of specific IL-1 family members in the aqueous humor and sera of patients with HLA-B27 associated acute anterior uveitis (AAU) and idiopathic AAU. Following informed consent, a total of 13 patients with HLA-B27 associated AAU, 12 patients with idiopathic AAU and 9 controls were recruited to this study from May 2013 to July 2014. Each individual received a complete ophthalmologic examination. Aqueous humor and sera samples were collected and 11 inflammation-related cytokines of the IL-1 family (IL-1α, IL-1β, IL-1 receptor antagonist [IL-1Ra], IL-18, IL-36 receptor antagonist [IL-36Ra], IL-33, IL-36α, IL-36β, IL-36γ, IL-37, IL-38) were quantitatively measured and analyzed for statistical significance between groups. The degree of inflammation, anterior chamber cell or flare, correlated with expression of IL-1β, IL-1Ra, and IL-18. The highest levels of IL-1β, IL-1Ra, IL-18, and IL-36Ra were seen in the aqueous of patients with HLA-B27 associated AAU and this was statistically significant when compared to the controls, but not to idiopathic AAU. Expression of IL-18 was statistically higher in the aqueous of patients with HLA-B27 associated AAU in comparison to either idiopathic AAU or controls, but this may reflect greater inflammation in this patient group. In the sera, only IL-1α was statistically higher in the HLA-B27 associated AAU in comparison to the control. Cytokine analysis reveals elevation of multiple IL-1 family members in the aqueous humor of patients with AAU as compared to controls. The specific signature of inflammation may potentially be useful in developing new future therapies for AAU. Copyright © 2015 Elsevier Ltd. All rights reserved.
Imaging predictors of poststroke depression: methodological factors in voxel-based analysis
Gozzi, Sophia A; Wood, Amanda G; Chen, Jian; Vaddadi, Krishnarao; Phan, Thanh G
2014-01-01
Objective The purpose of this study was to explore the relationship between lesion location and poststroke depression using statistical parametric mapping. Methods First episode patients with stroke were assessed within 12 days and at 1-month poststroke. Patients with an a priori defined cut-off score of 11 on the Hospital Anxiety and Depression Scale (HADS) at follow-up were further assessed using the Mini-International Neuropsychiatric Interview (MINI) to confirm a clinical diagnosis of major or minor depression in accordance with Diagnostic and Statistical Manual-IV (DSM-IV) inclusion criteria. Participants were included if they were aged 18–85 years, proficient in English and eligible for MRI. Patients were excluded if they had a confounding diagnosis such as major depressive disorder at the time of admission, a neurodegenerative disease, epilepsy or an imminently life-threatening comorbid illness, subarachnoid or subdural stroke, a second episode of stroke before follow-up and/or a serious impairment of consciousness or language. Infarcts observed on MRI scans were manually segmented into binary images, linearly registered into a common stereotaxic coordinate space. Using statistical parametric mapping, we compared infarct patterns in patients with stroke with and without depression. Results 27% (15/55 patients) met criteria for depression at follow-up. Mean infarct volume was 19±53 mL and National Institute of Health Stroke Scale (NIHSS) at Time 1 (within 12 days of stroke) was 4±4, indicating a sample of mild strokes. No voxels or clusters were significant after a multiple comparison correction was applied (p>0.05). Examination of infarct maps showed that there was minimal overlap of infarct location between patients, thus invalidating the voxel comparison analysis. Conclusions This study provided inconclusive evidence for the association between infarcts in a specific region and poststroke depression. PMID:25001395
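Voxelwise analyses like the one above face exactly the multiplicity problem a correction addresses: thousands of simultaneous tests. As a sketch, here is one standard correction, the Benjamini-Hochberg step-up procedure for false discovery rate, on hypothetical p-values (the study's specific voxelwise correction is not detailed in the abstract):

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up FDR procedure: return indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:  # compare each p to its rank-scaled threshold
            cutoff = rank
    return set(order[:cutoff])  # reject everything up to the largest passing rank

# hypothetical voxel/cluster p-values
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(sorted(benjamini_hochberg(pvals)))  # [0, 1]
```

Note that several p-values below the uncorrected 0.05 survive no correction here, mirroring the study's finding of no significant voxels after correction.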
Grover, Harpreet Singh; Choudhary, Pankaj
2016-01-01
Introduction Dentinal hypersensitivity is one of the most common problems encountered in dental practice but has the least predictable treatment outcome. The advent of lasers in dentistry has provided an additional therapeutic option for treating dentinal hypersensitivity. Although various lasers have been tried over a period of time to treat dentinal hypersensitivity, doubt persists as to which laser leads to maximum dentinal tubular occlusion and is most suitable with minimal hazardous effects. Aim To compare the effects of Nd: YAG, CO2 and 810-nm diode lasers on width of exposed dentinal tubule orifices and to evaluate the morphologic changes on dentinal surface of human tooth after laser irradiation by scanning electron microscope (SEM). Materials and Methods Forty root specimens were obtained from ten freshly extracted human premolars, which were randomly divided into four groups of ten each. Group I: control group treated with only saline, Group II: Nd:YAG laser, Group III: CO2 laser and Group IV: 810-nm diode laser. The specimens were examined using SEM. After calculating mean tubular diameter for each group, the values were compared statistically using the parametric one-way ANOVA test and Tukey's post hoc multiple comparison test. Results All three lased groups showed a highly statistically significant result with a p-value of <0.001 as compared to the non-lased group. On intergroup comparison within the lased groups, all three groups showed a statistically significant difference in the reduction of dentinal tubular diameter (p-value < 0.001). Conclusion Nd: YAG laser was found to be most effective, followed by the CO2 laser; the 810-nm diode laser was found to be least effective. The morphologic changes, like craters, cracks and a charring effect of the dentine, were seen most with the use of the CO2 laser. PMID:27630957
Saluja, Mini; Grover, Harpreet Singh; Choudhary, Pankaj
2016-07-01
Dentinal hypersensitivity is one of the most common problems encountered in dental practice but has the least predictable treatment outcome. The advent of lasers in dentistry has provided an additional therapeutic option for treating dentinal hypersensitivity. Although various lasers have been tried over a period of time to treat dentinal hypersensitivity, doubt persists as to which laser leads to maximum dentinal tubular occlusion and is most suitable with minimal hazardous effects. To compare the effects of Nd: YAG, CO2 and 810-nm diode lasers on width of exposed dentinal tubule orifices and to evaluate the morphologic changes on dentinal surface of human tooth after laser irradiation by scanning electron microscope (SEM). Forty root specimens were obtained from ten freshly extracted human premolars, which were randomly divided into four groups of ten each. Group I: control group treated with only saline, Group II: Nd:YAG laser, Group III: CO2 laser and Group IV: 810-nm diode laser. The specimens were examined using SEM. After calculating mean tubular diameter for each group, the values were compared statistically using the parametric one-way ANOVA test and Tukey's post hoc multiple comparison test. All three lased groups showed a highly statistically significant result with a p-value of <0.001 as compared to the non-lased group. On intergroup comparison within the lased groups, all three groups showed a statistically significant difference in the reduction of dentinal tubular diameter (p-value < 0.001). Nd: YAG laser was found to be most effective, followed by the CO2 laser; the 810-nm diode laser was found to be least effective. The morphologic changes, like craters, cracks and a charring effect of the dentine, were seen most with the use of the CO2 laser.
Gültekin, Salih Sinan; Kir, Metin; Tuğ, Tuğbay; Demirer, Seher; Genç, Yasemin
2011-10-01
This study was conducted to evaluate the early and delayed pinhole MIBI single photon emission computed tomography (pSPECT) images in detecting hyperfunctioning parathyroid glands, to make a comparison with peroperative γ probe (GP) findings. Planar, early, and delayed pSPECT scans and skin in-vivo and ex-vivo GP counts were obtained in 22 patients with hyperparathyroidism. All data were analyzed statistically on the basis of localization of the lesions, using the histopathological findings as the gold standard. Histopathological examinations revealed 18 of 44 adenomas, 18 of 44 hyperplasic glands, two of 44 lymph nodules, five of 44 thyroid nodules, and one of 44 normal parathyroid glands. Sensitivity and specificity were found to be 36 and 100% for planar, 69 and 75% for early pSPECT, 86 and 88% for delayed pSPECT scans, and similarly, 78 and 75% on skin, 92 and 75% in-vivo and 83 and 100% ex-vivo GP counts, respectively. For distinction ability of GP counts between three groups of lesions, there was a statistically significant difference among the three groups for ex-vivo GP counts but not between groups of adenomas and hyperplasic lesions for in-vivo GP counts. Early and delayed pSPECT scans play a complementary role on the planar scans. Delayed pSPECT scans and in-vivo GP counts are equally valuable to localize both single and multiple hyperfunctioning parathyroid glands. Ex-vivo GP counts seem to be better for making a distinction among types of lesions.
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as Service” model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to a statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued — Order Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to different multiple values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be directly applied on encrypted data. Using calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to a database server as much as possible, thereby making a better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attack and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
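The core MV-OPES idea, many ciphertexts per plaintext inside disjoint order-preserving intervals, can be caricatured in a few lines. This toy is NOT the paper's scheme and offers no real security; it only shows why comparisons still work directly on ciphertexts while frequency analysis of single fixed values is frustrated:

```python
import random

SPREAD = 1000  # width of each plaintext's ciphertext interval

def encrypt(value, rng):
    """Map integer v to a random point in [v*SPREAD, (v+1)*SPREAD).
    Repeated encryptions of v yield different ciphertexts, yet every
    ciphertext of v compares below every ciphertext of v+1."""
    return value * SPREAD + rng.randrange(SPREAD)

def decrypt(cipher):
    return cipher // SPREAD

rng = random.Random(42)
ciphers_of_7 = {encrypt(7, rng) for _ in range(50)}
b = encrypt(9, rng)
print(len(ciphers_of_7) > 1)             # True: one plaintext, many ciphertexts
print(all(c < b for c in ciphers_of_7))  # True: order preserved across values
print(decrypt(b))                        # 9
```

A real scheme must additionally hide the interval structure (e.g. with secret, non-uniform boundaries), which is part of what MV-OPES addresses.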
White, John R; Padowski, Jeannie M; Zhong, Yili; Chen, Gang; Luo, Shaman; Lazarus, Philip; Layton, Matthew E; McPherson, Sterling
2016-01-01
There is a paucity of data describing the impact of type of beverage (coffee versus energy drink), different rates of consumption and different temperatures of beverages on the pharmacokinetic disposition of caffeine. Additionally, there is concern that inordinately high levels of caffeine may result from the rapid consumption of cold energy drinks. The objective of this study was to compare the pharmacokinetics of caffeine under various drink temperature, rate of consumption and vehicle (coffee versus energy drink) conditions. Five caffeine (dose = 160 mg) conditions were evaluated in an open-label, group-randomized, crossover fashion. After the administration of each caffeine dose, 10 serial plasma samples were harvested. Caffeine concentration was measured via liquid chromatography-mass spectrometry (LC-MS), and those concentrations were assessed by non-compartmental pharmacokinetic analysis. The calculated mean pharmacokinetic parameters were analyzed statistically by one-way repeated measures analysis of variance (RM ANOVA). If differences were found, each group was compared to the others by all pair-wise multiple comparisons. Twenty-four healthy subjects ranging in age from 18 to 30 completed the study. The mean caffeine concentration time profiles were similar with overlapping SDs at all measured time points. The ANOVA revealed significant differences in mean Cmax and Vdss/F, but no pair-wise comparisons reached statistical significance. No other differences in pharmacokinetic parameters were found. The results of this study are consistent with previous caffeine pharmacokinetic studies and suggest that while rate of consumption, temperature of beverage and vehicle (coffee versus energy drink) may be associated with slightly different pharmacokinetic parameters, the overall impact of these variables is small. 
This study suggests that caffeine absorption and exposure from coffee and energy drinks are similar irrespective of beverage temperature or rate of consumption. PMID:27100333
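The non-compartmental analysis mentioned above can be sketched with synthetic numbers (the concentration-time values below are illustrative, not the study's data): Cmax and Tmax are read directly off the profile, and AUC comes from the linear trapezoidal rule.

```python
import numpy as np

# Illustrative caffeine concentration-time profile (not study data)
t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])  # hours
c = np.array([0.0, 1.2,  2.8, 3.9, 3.5, 2.9, 2.3, 1.8, 1.1,  0.3])  # mg/L

cmax = c.max()          # peak observed concentration
tmax = t[c.argmax()]    # time of the peak
# linear trapezoidal AUC from time zero to the last sample
auc = float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))
```

With ten serial samples per condition, parameters like these would be computed per subject and then carried into the repeated-measures ANOVA described in the abstract.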
A comparison of the wear resistance and hardness of indirect composite resins.
Mandikos, M N; McGivney, G P; Davis, E; Bush, P J; Carter, J M
2001-04-01
Various new, second-generation indirect composites have been developed with claimed advantages over existing tooth-colored restorative materials. To date, little independent research has been published on these materials, and the properties cited in advertising are largely derived from in-house or contracted testing. Four second-generation indirect composites (Artglass, belleGlass, Sculpture, and Targis) were tested for wear resistance and hardness against 2 control materials with well-documented clinical performance. Human enamel was also tested for comparison. Twelve specimens of each material were fabricated according to the manufacturers' directions and subjected to accelerated wear in a 3-body abrasion (toothbrushing) apparatus. Vickers hardness was measured for each of the tested materials, and energy-dispersive x-ray (EDX) spectroscopy was performed to determine the elemental composition of the composite fillers. The statistical tests used for wear and hardness were the Kruskal-Wallis 1-way ANOVA with Mann-Whitney tests and 1-way ANOVA with multiple comparisons (Tukey HSD). The Pearson correlation coefficient was used to determine whether a relationship existed between the hardness of the materials and the degree to which they had worn. The level of statistical significance was set at alpha=.05. The control material Concept was superior to the other composites in wear resistance and hardness and had the lowest surface roughness. Significant relationships were observed between depth of wear and hardness and between depth of wear and average surface roughness. Enamel specimens were harder and more wear resistant than any of the composites. EDX spectroscopy revealed that the elemental composition of the fillers of the 4 new composites was almost identical, as was that of the 2 control composites.
The differences in wear, hardness, and average surface roughness may have been due to differences in the chemistry or method of polymerization of the composites. Further research in this area should be encouraged. It was also apparent that the filler present in the tested composites did not exactly fit the manufacturers' descriptions.
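The analysis pipeline described above (nonparametric omnibus test, pairwise follow-ups, then a correlation of hardness with wear) can be sketched with SciPy. The wear and hardness numbers here are simulated placeholders, not the study's measurements, and the Bonferroni correction is one common multiple-comparison choice assumed for the example.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
# Simulated wear depths (um) for three materials, 12 specimens each
# (illustrative numbers only)
wear = {
    "A": rng.normal(50, 5, 12),
    "B": rng.normal(55, 5, 12),
    "C": rng.normal(70, 5, 12),
}

# Omnibus Kruskal-Wallis test across all groups
h_stat, p_omnibus = stats.kruskal(*wear.values())

# Pairwise Mann-Whitney follow-ups with a Bonferroni correction
p_pairwise = {}
for g1, g2 in combinations(wear, 2):
    _, p = stats.mannwhitneyu(wear[g1], wear[g2], alternative="two-sided")
    p_pairwise[(g1, g2)] = min(p * 3, 1.0)  # 3 comparisons

# Pearson correlation between (illustrative) hardness and mean wear:
# harder materials wear less in this synthetic setup
hardness = np.array([60.0, 55.0, 40.0])
mean_wear = np.array([w.mean() for w in wear.values()])
r, p_r = stats.pearsonr(hardness, mean_wear)
```

A strongly negative `r` here mirrors the abstract's reported relationship between hardness and depth of wear, though with only three group means the correlation is merely descriptive.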
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to infer their structures, functions and evolutionary relationships. These measures are related in spirit because all of them exploit information on k-word distributions, Markov models or both. Motivated by incorporating k-word distributions into a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis, providing a systematic and quantitative experimental assessment. Moreover, we compared our results with those of existing alignment-based and alignment-free methods. We grouped our experiments into two sets. The first, performed via ROC (receiver operating characteristic) analysis, assesses the intrinsic ability of our statistical measures to search a database for similar sequences and to discriminate functionally related regulatory sequences from unrelated sequences. The second assesses how well our statistical measures serve phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
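The k-word foundation shared by these measures can be illustrated with a bare-bones count-vector comparison. The cosine measure below is a generic stand-in for alignment-free similarity, not the paper's wre.k.r or S2.k.r statistics (whose formulas involve Markov-model corrections not reproduced here), and the sequences are made up.

```python
from collections import Counter
from math import sqrt

def kword_counts(seq, k):
    """Count overlapping k-words (k-mers) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(c1, c2):
    """Cosine similarity between two k-word count vectors."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2)

# Similar sequences share most k-words; unrelated ones share few or none
sim_related = cosine_similarity(
    kword_counts("ACGTACGTACGT", 3), kword_counts("ACGTACGAACGT", 3))
sim_unrelated = cosine_similarity(
    kword_counts("ACGTACGTACGT", 3), kword_counts("TTTTTTTTTTTT", 3))
```

Markov-model-based statistics refine this by measuring word counts against the frequencies an order-r Markov background would predict, which is the direction the paper's measures take.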
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS-selected samples. Spatial statistical results indicate that in terms of statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of the multiple NDVI images more closely than do those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS or VQT samples. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
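The Latin hypercube idea underlying cLHS and scLHS can be shown in its plainest form: place one sample in each equal-probability stratum along every axis. This sketch is basic LHS only; the conditional selection against existing covariates and the variance-quadtree stratification that define cLHS/scLHS are omitted, and the function name is our own.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Basic Latin hypercube sample: n points in [0,1)^d, with exactly one
    point per equal-probability stratum along each of the d axes."""
    rng = np.random.default_rng(rng)
    # jittered stratum positions: point i lands in [i/n, (i+1)/n) ...
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # ... then each axis is shuffled independently to decouple the strata
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

# 10 samples in 2 dimensions: each axis gets one sample per decile
x = latin_hypercube(10, 2, rng=0)
```

cLHS extends this by choosing, via combinatorial optimization, real sample locations whose covariate values jointly honor such marginal strata; scLHS additionally stratifies space by the variance quadtree before applying it.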
2012-01-01
Background The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures, starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional specialization. At the same time, this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related, a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity, so that even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time-consuming aspects of conserved domain database curation.
At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767
Hu, Jinxiang; Ward, Michael M
2017-09-01
To determine whether persons with arthritis differ systematically from persons without arthritis in how they respond to questions on three depression questionnaires that include somatic items such as fatigue and sleep disturbance, we extracted data on the Centers for Epidemiological Studies Depression (CES-D) scale, the Patient Health Questionnaire-9 (PHQ-9), and the Kessler-6 (K-6) scale from three large population-based national surveys. We assessed items on these questionnaires for differential item functioning (DIF) between persons with and without self-reported physician-diagnosed arthritis using multiple indicator multiple cause (MIMIC) models, which controlled for the underlying level of depression and important confounders. We also examined whether DIF by arthritis status was similar between women and men. Although five items of the CES-D, one item of the PHQ-9, and five items of the K-6 scale showed evidence of DIF in statistical comparisons, the magnitude of each difference was less than the threshold of a small effect; the statistical differences were a function of the very large sample sizes in the surveys. Effect sizes for DIF were similar between women and men except for two items on the PHQ-9. For each questionnaire, DIF accounted for 8% or less of the arthritis-depression association, and excluding items with DIF did not reduce the difference in depression scores between those with and without arthritis. Persons with arthritis respond to items on the CES-D, PHQ-9, and K-6 depression scales similarly to persons without arthritis, despite the inclusion of somatic items in these scales.
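The logic of a DIF check can be illustrated on synthetic data. The paper's method is a MIMIC model; the stratified comparison below is a simpler stand-in for the same idea (comparing item endorsement between groups at matched levels of the underlying trait), and every quantity in it is simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
theta = rng.normal(0.0, 1.0, n)   # latent depression level (simulated)
group = rng.integers(0, 2, n)     # 1 = "arthritis" (simulated label)

# One somatic item whose endorsement depends only on theta,
# i.e. no DIF is built into the data
p_item = 1.0 / (1.0 + np.exp(-(theta - 0.5)))
item = rng.random(n) < p_item

# Compare endorsement rates between groups within quartiles of theta
# (in practice one conditions on the observed total score or, as in
# the paper, on a latent factor within a MIMIC model)
strata = np.digitize(theta, np.quantile(theta, [0.25, 0.5, 0.75]))
diffs = np.array([
    item[(strata == s) & (group == 1)].mean()
    - item[(strata == s) & (group == 0)].mean()
    for s in range(4)
])
# with no DIF simulated, all within-stratum differences sit near zero
```

This also illustrates the paper's point about sample size: with very large n, even trivially small `diffs` values can reach statistical significance, which is why the authors judged DIF by effect-size thresholds rather than p-values alone.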