Sample records for statistical analyses show

  1. Metal and physico-chemical variations at a hydroelectric reservoir analyzed by Multivariate Analyses and Artificial Neural Networks: environmental management and policy/decision-making tools.

    PubMed

    Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M

    2013-01-01

    This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of the elements Ca, K, Mg and Na, and the second of Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which suggests they may enter the reservoir system via domestic sewage and agricultural run-off. These statistical applications are thus also relevant to environmental management and policy decision-making, helping to identify which factors should be further studied and/or modified to recover degraded or contaminated water bodies. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, and so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts pose considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data, in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
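The abstract's central warning is that model residuals should not be correlated in time. A quick diagnostic for that assumption is the lag-1 serial correlation of the residual series. The sketch below is a minimal pure-Python illustration (the example residual series are hypothetical, not from the paper):

```python
def lag1_autocorr(residuals):
    """Lag-1 serial correlation of a residual series.

    Values near 0 are consistent with independent residuals; strongly
    positive values suggest time-correlated (repeated-measures) structure
    that simple models do not account for.
    """
    n = len(residuals)
    mean = sum(residuals) / n
    num = sum((residuals[i] - mean) * (residuals[i + 1] - mean) for i in range(n - 1))
    den = sum((r - mean) ** 2 for r in residuals)
    return num / den

# Hypothetical residual series: one drifting smoothly over time (violates
# independence), one rapidly alternating (negatively correlated).
drifting = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
alternating = [1, -1, 1, -1, 1, -1, 1, -1, 1, -1]
```

A positive result from such a check is one signal that a mixed effects model (as the authors advocate) is more appropriate than a simple regression.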

  3. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  4. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), and two-thirds of these showed poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 and 48.2% (761/1,578) in 2008. Defect proportions also decreased in both results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted.
Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
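The year-to-year comparisons in this abstract are Pearson chi-square tests on two proportions. A minimal pure-Python sketch of such a test follows; the counts below are hypothetical, not the paper's data:

```python
import math

def chi2_two_proportions(k1, n1, k2, n2):
    """Pearson chi-square test (df = 1, no continuity correction)
    comparing proportions k1/n1 and k2/n2 via a 2x2 table."""
    table = [[k1, n1 - k1], [k2, n2 - k2]]
    row = [n1, n2]
    col = [k1 + k2, (n1 - k1) + (n2 - k2)]
    total = n1 + n2
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    # For df = 1, the survival function of the chi-square distribution
    # is erfc(sqrt(x / 2)), since chi2(1) is the square of a standard normal.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical example: 80/200 defective articles in one year vs 50/200 in another.
chi2, p = chi2_two_proportions(80, 200, 50, 200)
```

The helper name and the counts are illustrative assumptions; any statistics library's chi-square test would serve equally well.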

  5. Data from the Television Game Show "Friend or Foe?"

    ERIC Educational Resources Information Center

    Kalist, David E.

    2004-01-01

    The data discussed in this paper are from the television game show "Friend or Foe", and can be used to examine whether age, gender, race, and the amount of prize money affect contestants' strategies. The data are suitable for a variety of statistical analyses, such as descriptive statistics, testing for differences in means or proportions, and…

  6. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), and two-thirds of these showed poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 and 48.2% (761/1,578) in 2008. Defect proportions also decreased in both results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted.
Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824

  7. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
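The reference point for the single-case statistic above is "the usual d in between-groups experiments", i.e. the standardised mean difference with a pooled standard deviation. The sketch below shows that usual between-groups d only; it is not the authors' single-case estimator, and the two groups are hypothetical:

```python
def cohens_d(group1, group2):
    """Standardised mean difference (Cohen's d) for two independent groups,
    using the pooled standard deviation with Bessel's correction."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    s1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    s2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical treatment and control outcomes.
d = cohens_d([6, 7, 8, 9], [4, 5, 6, 5])
```

Because d is unit-free, effects measured on different outcome scales can be pooled across studies, which is the property the single-case d in this paper is designed to preserve.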

  8. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  9. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  10. Effects of Interventions on Survival in Acute Respiratory Distress Syndrome: an Umbrella Review of 159 Published Randomized Trials and 29 Meta-analyses

    PubMed Central

    Tonelli, Adriano R.; Zein, Joe; Adams, Jacob; Ioannidis, John P.A.

    2014-01-01

    Purpose Multiple interventions have been tested in acute respiratory distress syndrome (ARDS). We examined the entire agenda of published randomized controlled trials (RCTs) in ARDS that reported on mortality, together with the respective meta-analyses. Methods We searched PubMed, the Cochrane Library and Web of Knowledge until July 2013. We included RCTs in ARDS published in English. We excluded trials of newborns and children, and those on short-term interventions, ARDS prevention or post-traumatic lung injury. We also reviewed all meta-analyses of RCTs in this field that addressed mortality. Treatment modalities were grouped into five categories: mechanical ventilation strategies and respiratory care, enteral or parenteral therapies, inhaled/intratracheal medications, nutritional support and hemodynamic monitoring. Results We identified 159 published RCTs, of which 93 reported overall mortality (n = 20,671 patients); 44 trials (14,426 patients) reported mortality as a primary outcome. A statistically significant survival benefit was observed in 8 trials (7 interventions) and two trials reported an adverse effect on survival. Among RCTs with >50 deaths in at least 1 treatment arm (n = 21), 2 showed a statistically significant mortality benefit of the intervention (lower tidal volumes and prone positioning), 1 showed a statistically significant mortality benefit only in adjusted analyses (cisatracurium) and 1 (high-frequency oscillatory ventilation) showed a significant detrimental effect. Across 29 meta-analyses, the most consistent evidence was seen for low tidal volumes and prone positioning in severe ARDS. Conclusions There is limited supportive evidence that specific interventions can decrease mortality in ARDS. While low tidal volumes and prone positioning in severe ARDS seem effective, most sporadic findings of interventions suggesting reduced mortality are not corroborated consistently in large-scale evidence including meta-analyses. PMID:24667919

  11. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
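The article's teaching target, the probability density function, can be sketched outside Excel just as easily. The following pure-Python example (my own illustration, not the authors' spreadsheet) computes the standard normal density and numerically checks the defining property that it integrates to 1:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Riemann-sum check of the key conceptual point: total area under a pdf is 1.
step = 0.001
area = sum(normal_pdf(-8 + i * step) * step for i in range(int(16 / step)))
```

Varying `mu` and `sigma` while re-running the area check mirrors the interactive exploration the dynamic Excel graphic is meant to support.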

  12. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over recent decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs, studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
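The review's main criticism is repeated testing across questionnaire sub-dimensions and assessment times without type I error adjustment. One standard remedy is Holm's step-down correction, sketched here in pure Python with hypothetical p-values (the review itself does not prescribe a specific procedure):

```python
def holm_adjust(pvalues):
    """Holm step-down multiplicity adjustment.

    Returns adjusted p-values in the original input order; reject H_i at
    level alpha when the adjusted p-value is <= alpha.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvalues[idx])
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[idx] = running_max
    return adjusted

# Hypothetical p-values from four HRQoL sub-dimension comparisons.
adjusted = holm_adjust([0.01, 0.04, 0.03, 0.005])
```

Holm's procedure controls the family-wise error rate and is uniformly more powerful than plain Bonferroni, which is why it is a common default when many sub-scales are tested.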

  13. Spatial analyses for nonoverlapping objects with size variations and their application to coral communities.

    PubMed

    Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko

    2014-07-01

    Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, in which spatial statistics are based on centre-to-centre distances. However, if organisms expand without overlapping and show size variations, such as is the case for encrusting corals, interobject spacing is crucial for spatial associations where interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregations. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances included both intercolony spacing and colony sizes (radius). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but the scale was strongly affected by the colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus, while the grid-based statistics (covariance function) also showed repulsion, although the spatial scale indicated by the statistics was not directly interpretable in terms of ecological meaning.
The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
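Since the study approximates colonies by disks, the minimum (edge-to-edge) distance it builds its statistics on reduces to a one-line formula: the centre-to-centre distance minus both radii, floored at zero for touching or overlapping disks. A hedged sketch:

```python
import math

def min_disk_distance(c1, r1, c2, r2):
    """Edge-to-edge distance between two disks (centre, radius);
    0.0 if the disks touch or overlap."""
    centre_dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return max(0.0, centre_dist - r1 - r2)

# Two well-separated colonies: centres 10 apart, radii 2 and 3,
# so the gap between their edges is 5 even though centres are 10 apart.
gap = min_disk_distance((0, 0), 2, (10, 0), 3)
```

The example makes the paper's point concrete: a centre-to-centre statistic would report 10 for this pair, conflating spacing with colony size, while the edge-to-edge distance isolates the spacing itself.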

  14. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. © 2015 APA, all rights reserved.

  15. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  16. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
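The descriptive statistics named in this title have simple moment-based definitions. The sketch below (my own illustration, using the Fisher g1/g2 conventions as an assumption about the authors' usage) computes sample skewness and excess kurtosis:

```python
def shape_moments(sample):
    """Moment-based sample skewness (g1) and excess kurtosis (g2).

    g1 = 0 for symmetric data; g2 = 0 for a normal distribution,
    negative for flatter-than-normal (platykurtic) data.
    """
    n = len(sample)
    mean = sum(sample) / n
    m2 = sum((x - mean) ** 2 for x in sample) / n
    m3 = sum((x - mean) ** 3 for x in sample) / n
    m4 = sum((x - mean) ** 4 for x in sample) / n
    g1 = m3 / m2 ** 1.5
    g2 = m4 / m2 ** 2 - 3.0
    return g1, g2

# A symmetric sample has zero skewness; a sample with one large score is
# right-skewed, the kind of departure from normality the article documents.
g1_sym, g2_sym = shape_moments([1, 2, 3, 4, 5])
g1_skew, _ = shape_moments([1, 1, 1, 1, 10])
```

Ceiling effects and discreteness, the article's other concerns, show up in exactly these statistics: scores piled at a test's maximum produce negative skew and distorted kurtosis.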

  17. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling

    PubMed Central

    Wood, John

    2017-01-01

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. 
This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
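The reanalysis above rests on Gaussian mixture modeling: fitting several normal subcomponents rather than one summary distribution. A minimal pure-Python EM sketch for a two-component 1-D mixture follows; it fixes equal unit variances for brevity (a simplification of the paper's modeling) and uses hypothetical, well-separated data:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, mu1, mu2, sigma=1.0, iterations=50):
    """EM for a two-component 1-D Gaussian mixture with fixed, equal variances.
    Returns (weight of component 1, mean 1, mean 2)."""
    w = 0.5  # initial mixing weight of component 1
    for _ in range(iterations):
        # E-step: responsibility of component 1 for each observation
        resp = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, sigma)
            p2 = (1 - w) * normal_pdf(x, mu2, sigma)
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight and both means
        total = sum(resp)
        w = total / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / total
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - total)
    return w, mu1, mu2

# Hypothetical "power" data forming two clear subcomponents near 0 and 5.
w, m1, m2 = em_two_gaussians([-0.2, 0.0, 0.2, 4.8, 5.0, 5.2, 5.1, 4.9],
                             mu1=1.0, mu2=4.0)
```

Recovering distinct subcomponent means, rather than one pooled mean, is exactly the argument the authors make against summarising 730 studies with a single median power figure.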

  18. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  19. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study. Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature.
The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  20. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    PubMed

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
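The standard way to "transform adequately" closed compositional data of this kind is a log-ratio transform; the centred log-ratio (clr) is sketched below. The choice of clr is my assumption for illustration, since the abstract names the compositional-data framework but not a specific transform:

```python
import math

def clr(composition):
    """Centred log-ratio transform of a strictly positive composition.

    Each part is divided by the geometric mean of all parts before taking
    logs, which removes the constant-sum constraint: clr coordinates can be
    analysed with ordinary statistics (means, correlations, etc.).
    """
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

# Hypothetical waste composition (percentages summing to 100).
coords = clr([10.0, 30.0, 60.0])
```

A defining property of clr coordinates is that they sum to zero, replacing the sum-to-100 constraint with a symmetric one that correlation analysis handles correctly.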

  1. Teaching statistics in biology: using inquiry-based learning to strengthen understanding of statistical analysis in biology laboratory courses.

    PubMed

    Metz, Anneke M

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.

  2. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions, which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores, and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 with Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  3. Logistic regression applied to natural hazards: rare event logistic regression with replications

    NASA Astrophysics Data System (ADS)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, termed rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
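
The replication idea, refitting the regression on many resamples and keeping only factors that are stable across them, can be sketched in pure Python. The data, effect sizes, replication count and stability threshold below are invented for illustration; this is a sketch of the general approach, not the authors' exact procedure:

```python
import math
import random

random.seed(7)
n = 200
# hypothetical data: x1 truly drives the rare event, x2 is pure noise
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = []
for a in x1:
    p = 1 / (1 + math.exp(-(-2.0 + 1.5 * a)))   # rare event: ~20% base rate
    y.append(1 if random.random() < p else 0)

def fit(idx, iters=600, lr=1.0):
    """Plain gradient-ascent logistic regression on the rows in idx."""
    b0 = b1 = b2 = 0.0
    m = len(idx)
    for _ in range(iters):
        g0 = g1 = g2 = 0.0
        for i in idx:
            p = 1 / (1 + math.exp(-(b0 + b1 * x1[i] + b2 * x2[i])))
            r = y[i] - p
            g0 += r; g1 += r * x1[i]; g2 += r * x2[i]
        b0 += lr * g0 / m; b1 += lr * g1 / m; b2 += lr * g2 / m
    return b1, b2

# replications: refit on bootstrap resamples and count how often each
# coefficient clears a crude stability threshold
reps, thresh = 20, 0.5
hits1 = hits2 = 0
for _ in range(reps):
    idx = [random.randrange(n) for _ in range(n)]
    c1, c2 = fit(idx)
    hits1 += abs(c1) > thresh
    hits2 += abs(c2) > thresh
```

The genuine predictor clears the threshold in essentially every replication, while the noise predictor does so only sporadically; a sample-dependent "significant" factor would be filtered out in the same way.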

  4. Adopting a Patient-Centered Approach to Primary Outcome Analysis of Acute Stroke Trials by Use of a Utility-Weighted Modified Rankin Scale

    PubMed Central

    Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J.; Grotta, James C.; Broderick, Joseph; Jovin, Tudor G.; Nogueira, Raul G.; Elm, Jordan; Graves, Todd; Berry, Scott; Lees, Kennedy R.; Barreto, Andrew D.; Saver, Jeffrey L.

    2015-01-01

    Background and Purpose Although the modified Rankin Scale (mRS) is the most commonly employed primary endpoint in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size is challenging to interpret when analyzed ordinally. Weighting the seven Rankin levels by utilities may improve scale interpretability while preserving statistical power. Methods A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient-centered) and person-tradeoff (clinician-centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Results Utility values were: mRS 0, 1.0; mRS 1, 0.91; mRS 2, 0.76; mRS 3, 0.65; mRS 4, 0.33; mRS 5 and 6, 0. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in six of eight unidirectional effect trials, while dichotomous analyses were statistically significant in two to four of the eight. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results, whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. Conclusion A utility-weighted mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. PMID:26138130
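
The utility weights reported in the abstract make the primary outcome a simple weighted mean over each arm's mRS scores. A minimal sketch (the utilities are taken from the abstract; the two score distributions are hypothetical, not from any of the 11 trials):

```python
# utility weights for mRS levels 0-6, as reported in the abstract
UTILITY = [1.0, 0.91, 0.76, 0.65, 0.33, 0.0, 0.0]

def mean_utility(mrs_scores):
    """Mean utility-weighted outcome for a list of mRS scores (0-6)."""
    return sum(UTILITY[s] for s in mrs_scores) / len(mrs_scores)

# hypothetical arms: treatment shifts patients toward lower (better) mRS
treated = [0, 1, 1, 2, 2, 3, 4, 5]
control = [1, 2, 2, 3, 4, 4, 5, 6]
diff = mean_utility(treated) - mean_utility(control)
```

For these invented arms the mean utility difference is about 0.20, which happens to sit inside the 0.026 to 0.249 range the abstract reports for significantly positive trials.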

  5. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) are conducted in small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined whether rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than true differential effects among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated at the design stage to minimize erroneous conclusions.

  6. Statistics for the Relative Detectability of Chemicals in Weak Gaseous Plumes in LWIR Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.

    2008-10-30

    The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.

  7. Impact of ontology evolution on functional analyses.

    PubMed

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
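
The functional enrichment analyses whose stability is studied here typically rest on a hypergeometric (one-sided Fisher) tail probability: how surprising is it that k genes in a hit list of size n carry a GO term held by K of the N genes overall? A self-contained sketch of that standard test (the gene counts below are invented):

```python
from math import comb

def hypergeom_enrich_p(k, n, K, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the standard
    term-enrichment tail probability for k annotated genes in a hit list
    of size n, drawn from N genes of which K carry the term."""
    return (sum(comb(K, i) * comb(N - K, n - i)
                for i in range(k, min(n, K) + 1))
            / comb(N, n))

# illustrative numbers: 4 of 10 hits carry a term annotated to 20 of 100 genes
p = hypergeom_enrich_p(k=4, n=10, K=20, N=100)
```

Because the test depends on the annotation counts K and on which genes carry each term, any change to the ontology or its annotations shifts these probabilities, which is precisely the sensitivity the paper measures.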

  8. Statistical Selection of Biological Models for Genome-Wide Association Analyses.

    PubMed

    Bi, Wenjian; Kang, Guolian; Pounds, Stanley B

    2018-05-24

    Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS study. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
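
The candidate biological models named in the abstract differ only in how the genotype (0, 1 or 2 minor alleles) is coded as a predictor. The toy sketch below recovers a dominant generating model by comparing how well each coding correlates with a simulated phenotype; the MAF, effect size and correlation-ranking rule are illustrative assumptions, and ranking by correlation is a simplification, not the authors' FDR-controlled procedure:

```python
import random
import statistics

random.seed(5)
MAF, n = 0.3, 4000

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# genotypes under Hardy-Weinberg equilibrium: 0, 1 or 2 minor alleles
g = [sum(random.random() < MAF for _ in range(2)) for _ in range(n)]
# phenotype simulated under a *dominant* model: one copy is enough
y = [2.0 * (gi >= 1) + random.gauss(0, 1) for gi in g]

# each candidate model is just a different coding of the genotype
CODINGS = {
    "additive":  {0: 0.0, 1: 0.5, 2: 1.0},
    "dominant":  {0: 0.0, 1: 1.0, 2: 1.0},
    "recessive": {0: 0.0, 1: 0.0, 2: 1.0},
}
fit = {name: pearson([c[gi] for gi in g], y) for name, c in CODINGS.items()}
best = max(fit, key=fit.get)   # recovers the generating model
```

Note how close the additive and dominant fits are: the two codings are highly correlated, which is why distinguishing biological models needs the kind of stringent error control the paper proposes rather than a simple ranking.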

  9. The extent and consequences of p-hacking in science.

    PubMed

    Head, Megan L; Holman, Luke; Lanfear, Rob; Kahn, Andrew T; Jennions, Michael D

    2015-03-01

    A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as "p-hacking," occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
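
One simple way to test for the pile-up of p-values just below 0.05 that p-hacking produces is a binomial sign test on the two bins nearest the threshold: under a genuine effect the p-curve decreases, so an excess in the upper bin is the warning sign. This is a simplified sketch of the p-curve idea with made-up p-values, not the authors' exact procedure:

```python
from math import comb

def pcurve_binomial_test(pvals):
    """One-sided sign test: are significant p-values piled up just under .05?

    Compares counts in (0.040, 0.045] vs (0.045, 0.050); under a genuine
    effect the p-curve decreases, so an excess in the upper bin is the
    signature of p-hacking this test flags.
    """
    lower = sum(1 for p in pvals if 0.040 < p <= 0.045)
    upper = sum(1 for p in pvals if 0.045 < p < 0.050)
    n = lower + upper
    # P(X >= upper) for X ~ Binomial(n, 0.5)
    return sum(comb(n, k) for k in range(upper, n + 1)) / 2 ** n

# invented p-values from a hypothetical literature sample
pvals = [0.012, 0.031, 0.041, 0.043, 0.046, 0.047, 0.047, 0.048, 0.049, 0.049]
p_val = pcurve_binomial_test(pvals)
```

Here 6 of the 8 binned p-values sit in the upper bin, giving a one-sided binomial probability of 37/256 ≈ 0.14: suggestive of p-hacking but, as in the paper's conclusion, far from decisive on its own.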

  10. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.

    PubMed

    Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P

    2017-08-23

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents, so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered, some very seriously so, but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience.
This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
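
The paper's central point, that a single summary statistic misrepresents a multi-component distribution, is easy to illustrate. The bimodal "power" values below are invented for the sketch, not Button et al.'s data:

```python
import random
import statistics

random.seed(1)
# hypothetical bimodal distribution of study power: a low-powered and a
# well-powered subgroup (illustrative values only)
low  = [random.uniform(0.05, 0.20) for _ in range(500)]
high = [random.uniform(0.80, 0.95) for _ in range(500)]
power = low + high

med = statistics.median(power)   # lands between the two clusters
# how many individual studies the median actually describes
near = sum(1 for p in power if abs(p - med) < 0.1)
```

The median comes out near 0.5 even though no single study has power anywhere near that value (`near` is 0 by construction here), which is exactly why the reanalysis fits a mixture model instead of reporting one summary number.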

  11. Teaching Statistics in Biology: Using Inquiry-based Learning to Strengthen Understanding of Statistical Analysis in Biology Laboratory Courses

    PubMed Central

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study. PMID:18765754

  12. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    PubMed

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from a continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measuring online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings, noting the benefits of online measures in tracking the learning process.

  13. Online incidental statistical learning of audiovisual word sequences in adults: a registered report

    PubMed Central

    Duta, Mihaela; Thompson, Paul

    2018-01-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory–picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from a continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measuring online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test–retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings, noting the benefits of online measures in tracking the learning process. PMID:29515876

  14. Emotional and cognitive effects of peer tutoring among secondary school mathematics students

    NASA Astrophysics Data System (ADS)

    Alegre Ansuategui, Francisco José; Moliner Miravet, Lidón

    2017-11-01

    This paper describes an experience of same-age peer tutoring conducted with 19 eighth-grade mathematics students in a secondary school in Castellon de la Plana (Spain). Three constructs were analysed before and after launching the program: academic performance, mathematics self-concept and attitude of solidarity. Students' perceptions of the method were also analysed. The quantitative data was gathered by means of a mathematics self-concept questionnaire, an attitude of solidarity questionnaire and the students' numerical ratings. A statistical analysis was performed using Student's t-test. The qualitative information was gathered by means of discussion groups and a field diary. This information was analysed using descriptive analysis and by categorizing the information. Results show statistically significant improvements in all the variables and the positive assessment of the experience and the interactions that took place between the students.

  15. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  16. Race and Class in Britain: A Critique of the Statistical Basis for Critical Race Theory in Britain: And Some Political Implications

    ERIC Educational Resources Information Center

    Hill, Dave

    2009-01-01

    In this paper, the author critiques what he analyses as the misuse of statistics in arguments put forward by some Critical Race Theorists in Britain showing that "Race" "trumps" Class in terms of underachievement at 16+ exams in England and Wales. At a theoretical level, using Marxist work the author argues for a notion of…

  17. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  18. Critical analysis of adsorption data statistically

    NASA Astrophysics Data System (ADS)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and Chi-square test, to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ² showed the results to be in favour of the data collected from the experiment, and this has been shown on probability charts. The K value for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725; both are <1, indicating favourable isotherms. Karl Pearson's correlation coefficients for the Langmuir and Freundlich adsorption isotherms were 0.99 and 0.95, respectively, which show a high degree of correlation between the variables. This validates the data obtained for adsorption of zinc ions from the contaminated aqueous solution with the help of mango leaf powder.
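
The paired t test used in studies like this one reduces to a short calculation: the mean within-pair difference divided by its standard error, compared against a tabulated critical value. A sketch with invented removal percentages (the study's own data are not reproduced here):

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic and degrees of freedom for equal-length samples."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    return statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(n)), n - 1

# hypothetical zinc removal (%) at two adsorbent doses, paired by batch
dose1 = [61.0, 64.5, 66.0, 68.0, 70.5]
dose2 = [62.0, 66.5, 69.0, 72.0, 75.5]
t, df = paired_t(dose1, dose2)
# t ≈ 4.24 exceeds the tabulated two-sided 5% critical value
# t(0.025, df=4) ≈ 2.776, so the dose effect would be judged significant
```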

  19. Polygenic scores via penalized regression on summary statistics.

    PubMed

    Mak, Timothy Shin Heng; Porsch, Robert Milan; Choi, Shing Wan; Zhou, Xueya; Sham, Pak Chung

    2017-09-01

    Polygenic scores (PGS) summarize the genetic contribution of a person's genotype to a disease or phenotype. They can be used to group participants into different risk categories for diseases, and are also used as covariates in epidemiological analyses. A number of possible ways of calculating PGS have been proposed, and recently there is much interest in methods that incorporate information available in published summary statistics. As there is no inherent information on linkage disequilibrium (LD) in summary statistics, a pertinent question is how we can use LD information available elsewhere to supplement such analyses. To answer this question, we propose a method for constructing PGS using summary statistics and a reference panel in a penalized regression framework, which we call lassosum. We also propose a general method for choosing the value of the tuning parameter in the absence of validation data. In our simulations, we showed that pseudovalidation often resulted in prediction accuracy that is comparable to using a dataset with validation phenotype and was clearly superior to the conservative option of setting the tuning parameter of lassosum to its lowest value. We also showed that lassosum achieved better prediction accuracy than simple clumping and P-value thresholding in almost all scenarios. It was also substantially faster and more accurate than the recently proposed LDpred. © 2017 WILEY PERIODICALS, INC.
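
The core of penalized regression on summary statistics can be sketched as lasso coordinate descent driven only by SNP-phenotype correlations r and a reference-panel LD matrix R, never individual-level data. This is an illustrative sketch of that idea with invented numbers and unit-diagonal LD, not the lassosum package or its exact algorithm:

```python
def soft(x, lam):
    """Lasso soft-thresholding operator."""
    return (abs(x) - lam) * (1 if x > 0 else -1) if abs(x) > lam else 0.0

def lasso_from_summary(r, R, lam, iters=100):
    """Coordinate descent using only SNP-phenotype correlations r and an
    LD (correlation) matrix R: the quantities available from summary
    statistics plus a reference panel."""
    m = len(r)
    beta = [0.0] * m
    for _ in range(iters):
        for j in range(m):
            # residual correlation after removing the other SNPs' effects
            resid = r[j] - sum(R[j][k] * beta[k] for k in range(m) if k != j)
            beta[j] = soft(resid, lam)
    return beta

# with no LD (R = identity) the solution is just soft-thresholding of r
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
beta = lasso_from_summary([0.30, 0.05, -0.20], I3, lam=0.10)
```

With real LD the neighbouring terms are subtracted before thresholding, which is exactly where the reference panel enters; the tuning parameter lam plays the role the abstract's pseudovalidation procedure is designed to choose.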

  20. The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods

    NASA Astrophysics Data System (ADS)

    Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad

    2018-04-01

    Morphometrics is a quantitative analysis depending on the shape and size of several specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and so on. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven independent variables and two dependent variables. Statistical modelling was used in the analysis, such as the analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA). The results showed that differences exist between arctic wolves and rocky mountain wolves based on the independent factors and gender.
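
The one-way ANOVA used in studies like this compares between-group to within-group variance via the F statistic. A self-contained sketch with invented measurements standing in for the morphometric variables:

```python
import statistics

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of samples."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.fmean(all_vals)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((v - statistics.fmean(g)) ** 2 for v in g)
                    for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# three hypothetical groups of measurements (illustrative numbers only)
F = one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]])
```

Here F = 13.0 with (2, 6) degrees of freedom, well above the 5% critical value of about 5.14, so the group means would be judged to differ; MANOVA extends the same between/within comparison to several dependent variables at once.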

  1. The Extent and Consequences of P-Hacking in Science

    PubMed Central

    Head, Megan L.; Holman, Luke; Lanfear, Rob; Kahn, Andrew T.; Jennions, Michael D.

    2015-01-01

    A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses. PMID:25768323

  2. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. 
Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
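
    The weighting step this abstract describes can be sketched in a few lines: each bin's re-estimated tree is handed to the summary method once per gene in its bin. The Newick strings and bin sizes below are invented for illustration; this is a sketch of the weighting idea, not the authors' pipeline.

```python
from collections import Counter

def weighted_bin_trees(bin_trees, bin_sizes):
    """Weighted statistical binning: replicate each bin's re-estimated
    tree once per gene in the bin, so a downstream summary method sees
    bins in proportion to the number of genes they contain."""
    weighted = []
    for tree, size in zip(bin_trees, bin_sizes):
        weighted.extend([tree] * size)
    return weighted

# Two hypothetical supergene trees (Newick strings), from bins of 3 and 1 genes.
trees = ["((A,B),C);", "(A,(B,C));"]
print(Counter(weighted_bin_trees(trees, [3, 1])))
```

    In the unweighted variant every bin would contribute a single tree regardless of its size, which, per the abstract, is what breaks statistical consistency.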

  3. Low statistical power in biomedical science: a review of three human research domains.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R

    2017-02-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
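
    The power figures discussed in this review can be reproduced with a standard normal-approximation calculation. The sketch below (effect sizes and group sizes are illustrative, not taken from the reviewed studies) shows how a medium effect needs roughly 64 subjects per arm to reach the conventional 80% threshold, while small samples land in the 0-20% power range the authors report as common.

```python
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for standardized
    effect size d with n_per_group subjects per arm (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = abs(d) * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) needs about 64 subjects per group to reach
# the conventional 80% power threshold.
print(round(two_sample_power(0.5, 64), 2))   # ~0.81
# The same effect studied with 10 per group sits in the low-power range.
print(round(two_sample_power(0.5, 10), 2))   # ~0.20
```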

  4. Low statistical power in biomedical science: a review of three human research domains

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409

  5. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    PubMed

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome, as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and its practical utility have not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01 and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9%, 43% and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci.
Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression.
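
    At its core, summary statistics imputation estimates the z-statistic of an untyped variant as its conditional expectation given the z-statistics of typed variants and the LD (correlation) structure from a reference panel. The following is a minimal sketch of that formula; the toy LD matrix, z-scores, and ridge parameter value are invented, and real implementations add windowing, sample-size adjustment, and imputation-quality estimates.

```python
import numpy as np

def impute_z(z_typed, ld_tt, ld_ut, lam=0.1):
    """Impute the z-statistic of an untyped variant as its conditional
    expectation under a multivariate normal model:
        z_u = C_ut (C_tt + lam*I)^-1 z_t
    where C is the LD (correlation) matrix from a reference panel and
    lam is a ridge term guarding against a near-singular C_tt."""
    k = len(z_typed)
    weights = ld_ut @ np.linalg.inv(ld_tt + lam * np.eye(k))
    return weights @ z_typed

# Toy example: two typed SNVs in moderate LD with each other and in
# strong LD with the untyped target (values are illustrative).
z_typed = np.array([3.2, 2.8])
ld_tt = np.array([[1.0, 0.4],
                  [0.4, 1.0]])
ld_ut = np.array([0.9, 0.8])
print(round(float(impute_z(z_typed, ld_tt, ld_ut)), 2))  # → 3.43
```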

  6. Evaluation and application of summary statistic imputation to discover new height-associated loci

    PubMed Central

    2018-01-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome, as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and its practical utility have not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01 and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9%, 43% and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci.
Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression. PMID:29782485

  7. The relationship between the behavior problems and motor skills of students with intellectual disability.

    PubMed

    Lee, Yangchool; Jeoung, Bogja

    2016-12-01

    The purpose of this study was to determine the relationship between the motor skills and the behavior problems of students with intellectual disabilities. The study participants were 117 students with intellectual disabilities who were between 7 and 25 years old (male, n=79; female, n=38) and attending special education schools in South Korea. Motor skill abilities were assessed using the second version of the Bruininks-Oseretsky test of motor proficiency, which includes subtests in fine motor control, manual coordination, body coordination, strength, and agility. Data were analyzed with IBM SPSS 21 using correlation and regression analyses, and the significance level was set at P<0.05. The results showed that fine motor precision and integration had a statistically significant influence on aggressive behavior. Manual dexterity showed a statistically significant influence on somatic complaints and anxiety/depression, and bilateral coordination had a statistically significant influence on social problems, attention problems, and aggressive behavior. Our results also showed that balance, speed, and agility each had a statistically significant influence on social problems and aggressive behavior, and that upper limb coordination and strength had a statistically significant influence on social problems.

  8. Identification of key micro-organisms involved in Douchi fermentation by statistical analysis and their use in an experimental fermentation.

    PubMed

    Chen, C; Xiang, J Y; Hu, W; Xie, Y B; Wang, T J; Cui, J W; Xu, Y; Liu, Z; Xiang, H; Xie, Q

    2015-11-01

    To screen and identify safe micro-organisms used during Douchi fermentation, and to verify the feasibility of producing high-quality Douchi with these identified micro-organisms. PCR-denaturing gradient gel electrophoresis (DGGE) and an automatic amino-acid analyser were used to investigate the microbial diversity and free amino acid (FAA) content of 10 commercial Douchi samples. The correlations between microbial communities and FAAs were analysed statistically, and ten strains with significant positive correlations were identified. An experimental Douchi fermentation with the identified strains was then carried out, and the nutritional composition of the resulting Douchi was analysed. Results showed that FAA content and the relative content of isoflavone aglycones in the verification Douchi samples were generally higher than those in the commercial Douchi samples. Our study indicated that fungi, yeasts, Bacillus and lactic acid bacteria are the key players in Douchi fermentation, and that with identified probiotic micro-organisms participating in fermentation, a higher quality Douchi product was produced. This is the first report to analyse and confirm the key micro-organisms in Douchi fermentation by statistical analysis. This work shows fermentation micro-organisms to be the key factor influencing Douchi quality, and demonstrates the feasibility of fermenting Douchi with identified starter micro-organisms. © 2015 The Society for Applied Microbiology.

  9. Adopting a Patient-Centered Approach to Primary Outcome Analysis of Acute Stroke Trials Using a Utility-Weighted Modified Rankin Scale.

    PubMed

    Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J; Grotta, James C; Broderick, Joseph; Jovin, Tudor G; Nogueira, Raul G; Elm, Jordan J; Graves, Todd; Berry, Scott; Lees, Kennedy R; Barreto, Andrew D; Saver, Jeffrey L

    2015-08-01

    Although the modified Rankin Scale (mRS) is the most commonly used primary end point in acute stroke trials, its power is limited when analyzed in dichotomized fashion, and its indication of effect size is challenging to interpret when analyzed ordinally. Weighting the 7 Rankin levels by utilities may improve scale interpretability while preserving statistical power. A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient-centered) and person-tradeoff (clinician-centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Utility values were 1.0 for mRS level 0; 0.91 for level 1; 0.76 for level 2; 0.65 for level 3; 0.33 for level 4; and 0 for levels 5 and 6. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses: both the UW-mRS and the ordinal mRS were statistically significant in 6 of 8 unidirectional effect trials, whereas dichotomous analyses were statistically significant in 2 to 4 of 8. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results, whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. A UW-mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures that the quantitative outcome is a valid reflection of patient-centered benefits. © 2015 American Heart Association, Inc.
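
    Scoring a trial with the UW-mRS reduces to mapping each arm's mRS distribution through the utility weights quoted in the abstract and comparing mean utilities. A minimal sketch follows; the two arm distributions are hypothetical, not taken from any of the 11 trials.

```python
# Utility weights for mRS levels 0-6, as reported in the abstract.
UTILITIES = [1.0, 0.91, 0.76, 0.65, 0.33, 0.0, 0.0]

def mean_utility(mrs_counts):
    """Mean utility-weighted mRS for one trial arm, given the number of
    patients at each mRS level 0-6."""
    total = sum(mrs_counts)
    return sum(u * n for u, n in zip(UTILITIES, mrs_counts)) / total

# Hypothetical treatment and control arm distributions (levels 0..6).
treated = [30, 40, 35, 30, 25, 20, 20]
control = [20, 30, 35, 35, 35, 25, 20]
diff = mean_utility(treated) - mean_utility(control)
print(diff)  # mean utility difference between arms
```

    The resulting difference of about 0.063 would fall inside the 0.026-0.249 range the abstract reports for trials with significantly positive results.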

  10. Statistical analysis of lightning electric field measured under Malaysian condition

    NASA Astrophysics Data System (ADS)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

    Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and ground (cloud-to-ground). Lightning characteristics and their statistical information are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, various techniques exist to detect lightning signals and to determine the parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in southern Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each lasting about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train duration, time interval between preliminary breakdown and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize lightning signature patterns. The statistical analyses show that about 79% of the lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown durations of the observed lightning signals are 84 ms and 560 µs, respectively. The statistical results also show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.

  11. Statistical analyses on sandstones: Systematic approach for predicting petrographical and petrophysical properties

    NASA Astrophysics Data System (ADS)

    Stück, H. L.; Siegesmund, S.

    2012-04-01

    Sandstones are a popular natural stone due to their wide occurrence and availability. The different applications for these stones have led to an increase in demand. From the viewpoint of conservation and the natural stone industry, an understanding of the material behaviour of this construction material is very important. Sandstones are a highly heterogeneous material. Based on statistical analyses with a sufficiently large dataset, a systematic approach to predicting the material behaviour should be possible. Since the literature already contains a large volume of data concerning the petrographical and petrophysical properties of sandstones, a large dataset could be compiled for the statistical analyses. The aim of this study is to develop constraints on the material behaviour and especially the weathering behaviour of sandstones. Approximately 300 samples from historical and presently mined natural sandstones in Germany, as well as ones described worldwide, were included in the statistical approach. The mineralogical composition and fabric characteristics were determined from detailed thin section analyses and descriptions in the literature. Particular attention was paid to evaluating the compositional and textural maturity, grain contact and contact thickness, type of cement, degree of alteration and the intergranular volume. Statistical methods were used to test for normal distributions and to calculate linear regressions between the basic petrophysical properties of density, porosity, water uptake and strength. The sandstones were classified into three different pore size distributions and evaluated against the other petrophysical properties. Weathering behaviour tests such as hygric swelling and salt loading were also included. To identify similarities between individual sandstones or to define groups of specific sandstone types, principal component analysis, cluster analysis and factor analysis were applied.
    Our results show that composition and porosity evolution during diagenesis are very important controls on the petrophysical properties of a building stone. The relationship between intergranular volume, cementation and grain contact can also provide valuable information for predicting strength properties. Since the samples investigated mainly originate from the Triassic German epicontinental basin, arkoses and feldspar-arenites are underrepresented. In general, the sandstones can be grouped as follows: i) quartzites, highly mature, with a primary porosity of about 40%; ii) quartzites, highly mature, showing a primary porosity of 40% but with early clay infiltration; iii) sublitharenites-lithic arenites exhibiting a lower primary porosity and stronger cementation by quartz and ferritic Fe-oxides; and iv) sublitharenites-lithic arenites with a higher content of pseudomatrix. However, in the last two groups the feldspar and lithoclasts can also show considerable alteration. All sandstone groups differ with respect to the pore space and strength data, as well as water uptake properties, which were obtained by linear regression analysis. Similar petrophysical properties are discernible for each type when using principal component analysis. Furthermore, both the strength and the porosity of the sandstones show distinct differences with respect to stratigraphic age and composition. The relationships between porosity, strength and salt resistance could also be verified. Hygric swelling shows an interrelation with pore size type, porosity and strength, but also with the degree of alteration (e.g. lithoclasts, pseudomatrix). To summarize, the different regression analyses and the calculated confidence regions provide a significant tool for classifying the petrographical and petrophysical parameters of sandstones. Based on this, the durability and the weathering behaviour of the sandstone groups can be constrained.
Keywords: sandstones, petrographical & petrophysical properties, predictive approach, statistical investigation
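
    The grouping step described above (standardize the petrophysical variables, then apply principal component analysis) can be sketched with a toy table. The five rows and four property columns below are invented for illustration; because density, porosity, water uptake and strength are strongly inter-correlated in sandstones, most of the variance loads on the first component.

```python
import numpy as np

# Synthetic petrophysical table; columns: density (g/cm3), porosity (%),
# water uptake (wt%), strength (MPa). Values are invented.
data = np.array([
    [2.65,  5.0, 1.2, 120.0],
    [2.40, 18.0, 4.5,  60.0],
    [2.10, 28.0, 7.8,  30.0],
    [2.55,  9.0, 2.1,  95.0],
    [2.25, 23.0, 6.0,  45.0],
])
# Standardize each property, then take eigenvectors of the correlation
# matrix: these are the principal components used for grouping.
z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
explained = eigvals[::-1] / eigvals.sum()
print(round(float(explained[0]), 2))  # share of variance on the first component
```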

  12. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    PubMed

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  13. Valid randomization-based p-values for partially post hoc subgroup analyses.

    PubMed

    Lee, Joseph J; Rubin, Donald B

    2015-10-30

    By 'partially post hoc' subgroup analyses, we mean analyses that compare existing data from a randomized experiment (from which a subgroup specification is derived) to new, subgroup-only experimental data. We describe a motivating example in which partially post hoc subgroup analyses instigated statistical debate about a medical device's efficacy. We clarify the source of such analyses' invalidity and then propose a randomization-based approach for generating valid posterior predictive p-values for such partially post hoc subgroups. Lastly, we investigate the approach's operating characteristics in a simple illustrative setting through a series of simulations, showing that it can have desirable properties under both null and alternative hypotheses. Copyright © 2015 John Wiley & Sons, Ltd.
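
    The randomization-based p-values discussed here build on the same machinery as an ordinary randomization (permutation) test, sketched below for a simple difference in means. This is a generic illustration with invented data, not the authors' posterior predictive procedure for partially post hoc subgroups.

```python
import random

def permutation_p_value(treated, control, n_perm=10000, seed=0):
    """Two-sided randomization-based p-value for a difference in means,
    obtained by repeatedly re-randomizing the group labels."""
    rng = random.Random(seed)
    pooled = treated + control
    n_t = len(treated)
    observed = abs(sum(treated) / n_t - sum(control) / len(control))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        t, c = pooled[:n_t], pooled[n_t:]
        if abs(sum(t) / n_t - sum(c) / len(c)) >= observed:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return (hits + 1) / (n_perm + 1)

# Clearly separated toy groups yield a small p-value.
p = permutation_p_value([5.1, 4.9, 5.3, 5.2, 5.0], [3.9, 4.1, 4.0, 3.8, 4.2])
print(p < 0.05)  # True
```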

  14. Multispectral determination of soil moisture-2. [Guymon, Oklahoma and Dalhart, Texas

    NASA Technical Reports Server (NTRS)

    Estes, J. E.; Simonett, D. S. (Principal Investigator); Hajic, E. J.; Hilton, B. M.; Lees, R. D.

    1982-01-01

    Soil moisture data obtained using scatterometers, modular multispectral scanners and passive microwave radiometers were revised and grouped into four field cover types for statistical analysis. Guymon data are grouped as alfalfa, bare, milo with rows perpendicular to the field of view, and milo viewed parallel to the field of view. Dalhart data are grouped as bare combo, stubble, disked stubble, and corn field. Summary graphs combine selected analyses to compare the effects of field cover. The analysis for each of the cover types is presented in tables and graphs. Other tables show elementary statistics, correlation matrices, and single variable regressions. Selected eigenvectors and factor analyses are included, and the highest-correlating sensor types for each location are summarized.

  15. Incorporating oximeter analyses to investigate synchronies in heart rate while teaching and learning about race

    NASA Astrophysics Data System (ADS)

    Amat, Arnau; Zapata, Corinna; Alexakos, Konstantinos; Pride, Leah D.; Paylor-Smith, Christian; Hernandez, Matthew

    2016-09-01

    In this paper, we look closely at two events selected through event-oriented inquiry that were part of a classroom presentation on race. The first event was a provocative discussion of Mark Twain's Pudd'nhead Wilson (Harper, New York, 1899) and passing for White. The other was a discussion on the use of the N-word. Grounded in authentic inquiry, we use ethnographic narrative, cogenerative dialogues, and video and oximeter data analyses as part of a multi-ontological approach for studying emotions. Statistical analysis of the oximeter data shows statistically significant heart rate synchrony between two of the coteachers during their presentations, providing evidence of emotional synchrony, resonance, and social and emotional contagion.

  16. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
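
    The online stopping rule described in this abstract (stop once the performance indicator's variance over recent generations falls below a threshold) can be sketched as follows. The window length and threshold are illustrative, and the published method additionally checks for stagnation of the overall trend, which is omitted here.

```python
def converged(indicator_history, window=10, var_threshold=1e-4):
    """Online convergence flag: True once the variance of the performance
    indicator over the last `window` generations drops below a threshold."""
    if len(indicator_history) < window:
        return False
    recent = indicator_history[-window:]
    mean = sum(recent) / window
    var = sum((x - mean) ** 2 for x in recent) / window
    return var < var_threshold

# Indicator still improving steadily: not converged.
print(converged([0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4]))  # False
# Indicator stagnating around 1.4: converged.
print(converged([1.0, 1.2, 1.3] + [1.4] * 10))  # True
```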

  17. Cross-population validation of statistical distance as a measure of physiological dysregulation during aging.

    PubMed

    Cohen, Alan A; Milot, Emmanuel; Li, Qing; Legault, Véronique; Fried, Linda P; Ferrucci, Luigi

    2014-09-01

    Measuring physiological dysregulation during aging could be a key tool both to understand underlying aging mechanisms and to predict clinical outcomes in patients. However, most existing indices are either circular or hard to interpret biologically. Recently, we showed that statistical distance of 14 common blood biomarkers (a measure of how strange an individual's biomarker profile is) was associated with age and mortality in the WHAS II data set, validating its use as a measure of physiological dysregulation. Here, we extend the analyses to other data sets (WHAS I and InCHIANTI) to assess the stability of the measure across populations. We found that the statistical criteria used to determine the original 14 biomarkers produced diverging results across populations; in other words, had we started with a different data set, we would have chosen a different set of markers. Nonetheless, the same 14 markers (or the subset of 12 available for InCHIANTI) produced highly similar predictions of age and mortality. We include analyses of all combinatorial subsets of the markers and show that results do not depend much on biomarker choice or data set, but that more markers produce a stronger signal. We conclude that statistical distance as a measure of physiological dysregulation is stable across populations in Europe and North America. Copyright © 2014 Elsevier Inc. All rights reserved.
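
    The statistical distance used in these analyses is, in essence, the Mahalanobis distance of an individual's biomarker profile from the population centroid. Below is a minimal sketch with a synthetic three-marker reference population; the marker means and covariances are invented, and the real analyses use 12-14 biomarkers.

```python
import numpy as np

def mahalanobis_distance(profile, reference):
    """Statistical (Mahalanobis) distance of one individual's biomarker
    profile from a reference population: how 'strange' the profile is
    relative to the population mean and covariance."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    d = profile - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(42)
# Synthetic reference population: 500 individuals, 3 correlated markers.
ref = rng.multivariate_normal([5.0, 120.0, 1.0],
                              [[1.0, 0.5, 0.10],
                               [0.5, 4.0, 0.20],
                               [0.1, 0.2, 0.05]],
                              size=500)
# The population mean itself has distance 0; an outlying profile scores high.
print(mahalanobis_distance(ref.mean(axis=0), ref))                # → 0.0
print(mahalanobis_distance(np.array([9.0, 130.0, 2.0]), ref) > 3)  # → True
```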

  18. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement.

    PubMed

    Angstman, Nicholas B; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans offers the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits has been demonstrated, analysis is often narrowed down to measurements based on a single point, and thus cannot pick up on subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced tracking software (WormLab), the full skeleton and outline of each worm were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found between worms moving on NGM-agar plates with an E. coli lawn and those on plates with no lawn. Furthermore, multiple test groups showed differences in the interactions between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model for analyzing C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design when performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as, but not limited to, toxicology, drug discovery, and RNAi screening.

  19. Histometric analyses of cancellous and cortical interface in autogenous bone grafting

    PubMed Central

    Netto, Henrique Duque; Olate, Sergio; Klüppel, Leandro; do Carmo, Antonio Marcio Resende; Vásquez, Bélgica; Albergaria-Barbosa, Jose

    2013-01-01

    Surgical procedures involving the rehabilitation of the maxillofacial region frequently require bone grafts; the aim of this research was to evaluate the interface between recipient and graft with cortical or cancellous contact. Six adult beagle dogs weighing 15 kg were included in the study. Under general anesthesia, an 8 mm diameter block was obtained from the parietal bone of each animal and fixed to the frontal bone with a 12 mm 1.5 screw, using the lag screw technique for better contact between recipient and graft. Euthanasia periods of 3 and 6 weeks were chosen for histometric evaluation. Hematoxylin-eosin staining was used in a routine histologic technique, and histomorphometry was performed with ImageJ software. The t test was used for data analyses, with p<0.05 for statistical significance. The results show some differences in descriptive histology but no statistically significant differences at the interface between cortical and cancellous bone at 3 or 6 weeks; as expected, bone integration after 6 weeks was better and statistically superior to the 3-week analyses. We conclude that integration of either cortical or cancellous bone can be achieved successfully, without differences between them. PMID:23923071

  20. Gait patterns for crime fighting: statistical evaluation

    NASA Astrophysics Data System (ADS)

    Sulovská, Kateřina; Bělašková, Silvie; Adámek, Milan

    2013-10-01

    Criminality has been omnipresent throughout human history. Modern technology brings novel opportunities for the identification of a perpetrator. One of these opportunities is the analysis of video recordings, which may be taken during the crime itself or before/after it. Such video analysis can be classed as an identification analysis, i.e., identification of a person via external characteristics. Bipedal locomotion analysis focuses on human movement on the basis of anatomical-physiological features. Nowadays, human gait is tested by many laboratories to learn whether identification via bipedal locomotion is possible or not. The aim of our study is to use 2D components of 3D data from the VICON Mocap system for deep statistical analyses. This paper introduces recent results of a fundamental study focused on various gait patterns under different conditions. The study contains data from 12 participants. Curves obtained from these measurements were sorted, averaged and statistically tested to estimate the stability and distinctiveness of this biometric. Results show satisfactory distinctness of some chosen points, while others do not embody significant differences. However, the results presented in this paper are from the initial phase of further, deeper and more exacting analyses of gait patterns under different conditions.

  1. Statistical analysis of iron geochemical data suggests limited late Proterozoic oxygenation

    NASA Astrophysics Data System (ADS)

    Sperling, Erik A.; Wolock, Charles J.; Morgan, Alex S.; Gill, Benjamin C.; Kunzmann, Marcus; Halverson, Galen P.; MacDonald, Francis A.; Knoll, Andrew H.; Johnston, David T.

    2015-07-01

    Sedimentary rocks deposited across the Proterozoic-Phanerozoic transition record extreme climate fluctuations, a potential rise in atmospheric oxygen or re-organization of the seafloor redox landscape, and the initial diversification of animals. It is widely assumed that the inferred redox change facilitated the observed trends in biodiversity. Establishing this palaeoenvironmental context, however, requires that changes in marine redox structure be tracked by means of geochemical proxies and translated into estimates of atmospheric oxygen. Iron-based proxies are among the most effective tools for tracking the redox chemistry of ancient oceans. These proxies are inherently local, but have global implications when analysed collectively and statistically. Here we analyse about 4,700 iron-speciation measurements from shales 2,300 to 360 million years old. Our statistical analyses suggest that subsurface water masses in mid-Proterozoic oceans were predominantly anoxic and ferruginous (depleted in dissolved oxygen and iron-bearing), but with a tendency towards euxinia (sulfide-bearing) that is not observed in the Neoproterozoic era. Analyses further indicate that early animals did not experience appreciable benthic sulfide stress. Finally, unlike proxies based on redox-sensitive trace-metal abundances, iron geochemical data do not show a statistically significant change in oxygen content through the Ediacaran and Cambrian periods, sharply constraining the magnitude of the end-Proterozoic oxygen increase. Indeed, this re-analysis of trace-metal data is consistent with oxygenation continuing well into the Palaeozoic era. Therefore, if changing redox conditions facilitated animal diversification, it did so through a limited rise in oxygen past critical functional and ecological thresholds, as is seen in modern oxygen minimum zone benthic animal communities.

  2. Ataxia Telangiectasia–Mutated Gene Polymorphisms and Acute Normal Tissue Injuries in Cancer Patients After Radiation Therapy: A Systematic Review and Meta-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Lihua; Cui, Jingkun; Tang, Fengjiao

Purpose: Studies of the association between ataxia telangiectasia–mutated (ATM) gene polymorphisms and acute radiation injuries are often small in sample size, and the results are inconsistent. We conducted the first meta-analysis to provide a systematic review of published findings. Methods and Materials: Publications were identified by searching PubMed up to April 25, 2014. Primary meta-analysis was performed for all acute radiation injuries, and subgroup meta-analyses were based on clinical endpoint. The influence of sample size and radiation injury incidence on genetic effects was estimated in sensitivity analyses. Power calculations were also conducted. Results: The meta-analysis was conducted on the ATM polymorphism rs1801516, including 5 studies with 1588 participants. For all studies, the cut-off for differentiating cases from controls was grade 2 acute radiation injuries. The primary meta-analysis showed a significant association with overall acute radiation injuries (allelic model: odds ratio = 1.33, 95% confidence interval: 1.04-1.71). Subgroup analyses detected an association between the rs1801516 polymorphism and a significant increase in urinary and lower gastrointestinal injuries and an increase in skin injury that was not statistically significant. There was no between-study heterogeneity in any meta-analyses. In the sensitivity analyses, small studies did not show larger effects than large studies. In addition, studies with high incidence of acute radiation injuries showed larger effects than studies with low incidence. Power calculations revealed that the statistical power of the primary meta-analysis was borderline, whereas there was adequate power for the subgroup analysis of studies with high incidence of acute radiation injuries. Conclusions: Our meta-analysis showed consistent results across the overall and subgroup analyses. We also showed that the genetic effect of the rs1801516 polymorphism on acute radiation injuries was dependent on the incidence of the injury. These findings support an association between the rs1801516 polymorphism and acute radiation injuries, encouraging further research on this topic.
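As a quick numeric check of a reported allelic-model result like the one above, a Wald-type confidence interval for an odds ratio is symmetric on the log scale, so the point estimate should sit near the geometric mean of the interval bounds, and the implied standard error of log(OR) can be recovered from the interval width. This is a generic sanity-check sketch, not the authors' computation:

```python
import math

# Reported allelic-model result from the primary meta-analysis:
# odds ratio 1.33 with 95% CI 1.04-1.71.
or_point, ci_low, ci_high = 1.33, 1.04, 1.71

# On the log scale a Wald CI is symmetric about log(OR), so the point
# estimate should be close to the geometric mean of the CI bounds.
geometric_mid = math.sqrt(ci_low * ci_high)

# Recover the implied standard error of log(OR) from the CI width.
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

print(round(geometric_mid, 3))  # close to the reported 1.33
print(round(se_log_or, 3))
```

Here the geometric mean of the bounds reproduces the reported point estimate, which is what one expects from an inverse-variance pooled log odds ratio.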

  3. Nitrogen Dioxide Exposure and Airway Responsiveness in ...

    EPA Pesticide Factsheets

Controlled human exposure studies evaluating the effect of inhaled NO2 on the inherent responsiveness of the airways to challenge by bronchoconstricting agents have had mixed results. In general, existing meta-analyses show statistically significant effects of NO2 on the airway responsiveness of individuals with asthma. However, no meta-analysis has provided a comprehensive assessment of the clinical relevance of changes in airway responsiveness, the potential for methodological biases in the original papers, and the distribution of responses. This paper provides analyses showing that a statistically significant fraction of individuals with asthma exposed to NO2 at rest (70%) experience increases in airway responsiveness following 30-minute exposures to NO2 in the range of 200 to 300 ppb and following 60-minute exposures to 100 ppb. The changes in airway responsiveness are log-normally distributed with a median change of 0.75 (provocative dose following NO2 divided by provocative dose following filtered air exposure) and a geometric standard deviation of 1.88. About a quarter of the exposed individuals experience a clinically relevant reduction in their provocative dose due to NO2 relative to air exposure. The fraction experiencing an increase in responsiveness was statistically significant and robust to exclusion of individual studies. Results showed minimal change in airway responsiveness for individuals exposed to NO2 during exercise. A variety of fa...
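The quoted log-normal distribution (median 0.75, geometric standard deviation 1.88) lets the ~70% figure be reproduced directly: the fraction of individuals with a provocative-dose ratio below 1 is a normal CDF evaluated on the log scale. A minimal sketch using only those two reported parameters:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Reported distribution of the provocative-dose ratio (NO2 / filtered air):
# log-normal with median 0.75 and geometric standard deviation 1.88.
median, gsd = 0.75, 1.88

# Fraction of individuals whose ratio falls below 1, i.e. whose airway
# responsiveness increased after NO2 exposure at rest.
z = (math.log(1.0) - math.log(median)) / math.log(gsd)
fraction_increased = norm_cdf(z)

print(round(fraction_increased, 2))  # about 0.68, consistent with ~70%
```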

  4. Environmental implications of element emissions from phosphate-processing operations in southeastern Idaho

    USGS Publications Warehouse

    Severson, R.C.; Gough, L.P.

    1979-01-01

    In order to assess the contribution to plants and soils of certain elements emitted by phosphate processing, we sampled sagebrush, grasses, and A- and C-horizon soils along upwind and downwind transects at Pocatello and Soda Springs, Idaho. Analyses for 70 elements in plants showed that, statistically, the concentration of 7 environmentally important elements, cadmium, chromium, fluorine, selenium, uranium, vanadium, and zinc, were related to emissions from phosphate-processing operations. Two additional elements, lithium and nickel, show probable relationships. The literature on the effects of these elements on plant and animal health is briefly surveyed. Relations between element content in plants and distance from the phosphate-processing operations were stronger at Soda Springs than at Pocatello and, in general, stronger in sagebrush than in the grasses. Analyses for 58 elements in soils showed that, statistically, beryllium, fluorine, iron, lead, lithium, potassium, rubidium, thorium, and zinc were related to emissions only at Pocatello and only in the A horizon. Moreover, six additional elements, copper, mercury, nickel, titanium, uranium, and vanadium, probably are similarly related along the same transect. The approximate amounts of elements added to the soils by the emissions are estimated. In C-horizon soils, no statistically significant relations were observed between element concentrations and distance from the processing sites. At Soda Springs, the nonuniformity of soils at the sampling locations may have obscured the relationship between soil-element content and emissions from phosphate processing.

  5. Probabilistic dietary exposure assessment taking into account variability in both amount and frequency of consumption.

    PubMed

    Slob, Wout

    2006-07-01

    Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
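The two-step approach proposed above can be sketched as follows. All distributions and parameter values here are hypothetical placeholders, not the paper's fitted model: step 1 stands in for a fitted parametric consumption model with person-level variation in both daily consumption frequency and amounts; step 2 combines it with concentration variability by Monte Carlo:

```python
import math
import random

random.seed(1)

N_PEOPLE, N_DAYS = 1000, 365
EXPOSURE_LIMIT = 50.0  # hypothetical limit, e.g. ug/day

def days_over_limit_fraction():
    # Step 1: person-level parameters drawn from an assumed fitted model:
    # a daily consumption probability and a personal mean amount (g/day).
    p_consume = random.betavariate(2, 5)
    log_mean_amount = random.gauss(math.log(100.0), 0.4)
    days_over = 0
    for _ in range(N_DAYS):
        if random.random() < p_consume:  # does this person consume today?
            amount = random.lognormvariate(log_mean_amount, 0.5)
            # Step 2: combine with concentration variability (ug/g).
            concentration = random.lognormvariate(math.log(0.3), 0.6)
            if amount * concentration > EXPOSURE_LIMIT:
                days_over += 1
    return days_over / N_DAYS

fractions = [days_over_limit_fraction() for _ in range(N_PEOPLE)]

# The output form the paper proposes: fraction of the population exceeding
# the limit on more than a given fraction of days (here, 1% of days).
pop_fraction = sum(f > 0.01 for f in fractions) / N_PEOPLE
print(round(pop_fraction, 3))
```

Sampling person-level parameters from a model, rather than resampling raw intake days, is what separates within-person day-to-day variation from between-person variation, which is the point of the two-step approach.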

  6. Nitrogen Dioxide Exposure and Airway Responsiveness in Individuals with Asthma

    EPA Science Inventory

    Controlled human exposure studies evaluating the effect of inhaled NO2 on the inherent responsiveness of the airways to challenge by bronchoconstricting agents have had mixed results. In general, existing meta-analyses show statistically significant effects of NO2 on the airway r...

  7. Topographic ERP analyses: a step-by-step tutorial review.

    PubMed

    Murray, Micah M; Brunet, Denis; Michel, Christoph M

    2008-06-01

    In this tutorial review, we detail both the rationale for as well as the implementation of a set of analyses of surface-recorded event-related potentials (ERPs) that uses the reference-free spatial (i.e. topographic) information available from high-density electrode montages to render statistical information concerning modulations in response strength, latency, and topography both between and within experimental conditions. In these and other ways these topographic analysis methods allow the experimenter to glean additional information and neurophysiologic interpretability beyond what is available from canonical waveform analyses. In this tutorial we present the example of somatosensory evoked potentials (SEPs) in response to stimulation of each hand to illustrate these points. For each step of these analyses, we provide the reader with both a conceptual and mathematical description of how the analysis is carried out, what it yields, and how to interpret its statistical outcome. We show that these topographic analysis methods are intuitive and easy-to-use approaches that can remove much of the guesswork often confronting ERP researchers and also assist in identifying the information contained within high-density ERP datasets.

  8. Hydrology and trout populations of cold-water rivers of Michigan and Wisconsin

    USGS Publications Warehouse

    Hendrickson, G.E.; Knutilla, R.L.

    1974-01-01

Statistical multiple-regression analyses showed significant relationships between trout populations and hydrologic parameters. Parameters showing the higher levels of significance were temperature, hardness of water, percentage of gravel bottom, percentage of bottom vegetation, variability of streamflow, and discharge per unit drainage area. Trout populations increase with lower levels of annual maximum water temperatures, with increase in water hardness, and with increase in percentage of gravel and bottom vegetation. Trout populations also increase with decrease in variability of streamflow, and with increase in discharge per unit drainage area. Most hydrologic parameters were significant when evaluated collectively, but no parameter, by itself, showed a high degree of correlation with trout populations in regression analyses that included all the streams sampled. Regression analyses of stream segments that were restricted to certain limits of hardness, temperature, or percentage of gravel bottom showed improvements in correlation. Analyses of trout populations (in pounds per acre and pounds per mile) against hydrologic parameters resulted in regression equations from which trout populations could be estimated with standard errors of 89 and 84 per cent, respectively.

  9. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  10. Lehrer in der Bundesrepublik Deutschland. Eine Kritische Analyse Statistischer Daten uber das Lehrpersonal an Allgemeinbildenden Schulen. (Education in the Federal Republic of Germany. A Statistical Study of Teachers in Schools of General Education.)

    ERIC Educational Resources Information Center

    Kohler, Helmut

    The purpose of this study was to analyze the available statistics concerning teachers in schools of general education in the Federal Republic of Germany. An analysis of the demographic structure of the pool of full-time teachers showed that in 1971 30 percent of the teachers were under age 30, and 50 percent were under age 35. It was expected that…

  11. BRepertoire: a user-friendly web server for analysing antibody repertoire data.

    PubMed

    Margreitter, Christian; Lu, Hui-Chun; Townsend, Catherine; Stewart, Alexander; Dunn-Walters, Deborah K; Fraternali, Franca

    2018-04-14

Antibody repertoire analysis by high throughput sequencing is now widely used, but a persisting challenge is enabling immunologists to explore their data to discover discriminating repertoire features for their own particular investigations. Computational methods are necessary for large-scale evaluation of antibody properties. We have developed BRepertoire, a suite of user-friendly web-based software tools for large-scale statistical analyses of repertoire data. The software is able to use data preprocessed by IMGT, and performs statistical and comparative analyses with versatile plotting options. BRepertoire has been designed to operate in various modes, for example analysing sequence-specific V(D)J gene usage, discerning physico-chemical properties of the CDR regions and clustering of clonotypes. Those analyses are performed on the fly by a number of R packages and are deployed via a Shiny web platform. The user can download the analysed data in different table formats and save the generated plots as image files ready for publication. We believe BRepertoire to be a versatile analytical tool that complements experimental studies of immune repertoires. To illustrate the server's functionality, we show use cases including differential gene usage in a vaccination dataset and analysis of CDR3H properties in old and young individuals. The server is accessible under http://mabra.biomed.kcl.ac.uk/BRepertoire.

  12. Research Design and Statistical Methods in Indian Medical Journals: A Retrospective Survey

    PubMed Central

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S.; Mayya, Shreemathi S.

    2015-01-01

Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. 
Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious errors were still present. Indian medical research seems to have made no major progress in the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are published quite rarely and have a high proportion of methodological problems. PMID:25856194

  13. Research design and statistical methods in Indian medical journals: a retrospective survey.

    PubMed

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. 
Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious errors were still present. Indian medical research seems to have made no major progress in the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are published quite rarely and have a high proportion of methodological problems.

  14. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
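The routine screening recommended above needs only the third and fourth standardized moments of the score distribution. A minimal sketch (the scores below are made up to illustrate a ceiling effect, where mass piles up at the maximum score and produces negative skew):

```python
def moments(xs):
    # Sample skewness and excess kurtosis: the distributional
    # descriptive statistics used to screen test-score data.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    return skewness, excess_kurtosis

# Hypothetical ceiling-affected distribution: most examinees at the top.
scores = [10] * 2 + [20] * 5 + [30] * 10 + [40] * 30 + [50] * 53
skew, kurt = moments(scores)
print(round(skew, 2), round(kurt, 2))  # negative skew, as expected
```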

  15. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of the data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
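A metrological treatment of such a set of measurements typically starts from an inverse-variance weighted mean. A minimal sketch with hypothetical R0 values and formal errors (not the paper's actual data set):

```python
import math

# Hypothetical R0 estimates (kpc) with their formal errors (kpc),
# standing in for a compilation of published measurements.
measurements = [(8.4, 0.4), (8.0, 0.5), (8.3, 0.3), (7.9, 0.6), (8.2, 0.35)]

# Inverse-variance weighting: more precise measurements count more.
weights = [1.0 / sigma ** 2 for _, sigma in measurements]
w_sum = sum(weights)
r0_mean = sum(w * x for (x, _), w in zip(measurements, weights)) / w_sum
r0_err = math.sqrt(1.0 / w_sum)  # formal error of the weighted mean

print(round(r0_mean, 2), round(r0_err, 2))
```

Consistency checks like those in the paper would then compare the scatter of the residuals about this mean against the quoted formal errors (e.g. via a chi-squared statistic).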

  16. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
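Two of the estimator families surveyed above have simple closed forms. The versions below are common textbook approximations, shown for illustration; they are not necessarily the exact formulas the review evaluates:

```python
def sd_from_range(minimum, maximum):
    # Practical approximation for a missing SD: for roughly normal data
    # of moderate sample size, the range spans about 4 standard deviations.
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1, median, q3):
    # Simple estimator of a missing mean from the three quartiles;
    # equal weighting keeps it robust to moderate skew.
    return (q1 + median + q3) / 3.0

print(sd_from_range(2.0, 18.0))             # 4.0
print(mean_from_quartiles(4.0, 7.0, 13.0))  # 8.0
```

In a meta-analysis these imputed values would feed into the usual effect-size and variance calculations in place of the unreported trial statistics.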

  17. Using venlafaxine to treat behavioral disorders in patients with autism spectrum disorder.

    PubMed

    Carminati, Giuliana Galli; Gerber, Fabienne; Darbellay, Barbara; Kosel, Markus Mathaus; Deriaz, Nicolas; Chabert, Jocelyne; Fathi, Marc; Bertschy, Gilles; Ferrero, François; Carminati, Federico

    2016-02-04

To test the efficacy of venlafaxine at a dose of 18.75 mg/day on the reduction of behavioral problems such as irritability and hyperactivity/noncompliance in patients with intellectual disabilities and autism spectrum disorder (ASD). Our secondary hypothesis was that the usual doses of zuclopenthixol and/or clonazepam would decrease in the venlafaxine-treated group. In a randomized double-blind study, we compared six patients who received venlafaxine along with their usual treatment (zuclopenthixol and/or clonazepam) with seven patients who received placebo plus usual care. Irritability, hyperactivity/noncompliance, and overall clinical improvement were measured after 2 and 8 weeks, using validated clinical scales. Univariate analyses showed that the symptom of irritability improved in the entire sample (p = 0.023 after 2 weeks, p = 0.061 at study endpoint), although no difference was observed between the venlafaxine and placebo groups. No significant decrease in hyperactivity/noncompliance was observed during the study. At the end of the study, global improvement was observed in 33% of participants treated with venlafaxine and in 71% of participants in the placebo group (p = 0.29). The study found that decreased cumulative doses of clonazepam and zuclopenthixol were required for the venlafaxine group. Multivariate analyses (principal component analyses) with at least three combinations of variables showed that the two populations could be clearly separated (p < 0.05). Moreover, in all cases, the venlafaxine population had lower values for the Aberrant Behavior Checklist (ABC), Behavior Problems Inventory (BPI), and levels of urea with respect to the placebo group. In one case, a reduction in the dosage of clonazepam was also suggested. 
For an additional set of variables (ABC factor 2, BPI frequency of aggressive behaviors, hematic ammonia at Day 28, and zuclopenthixol and clonazepam intake), the separation between the two samples was statistically significant as was Bartlett's test, but the Kaiser–Meyer–Olkin Measure of Sampling Adequacy was below the accepted threshold. This set of variables showed a reduction in the cumulative intake of both zuclopenthixol and clonazepam. Despite the small sample sizes, this study documented a statistically significant effect of venlafaxine. Moreover, we showed that lower doses of zuclopenthixol and clonazepam were needed in the venlafaxine group, although this difference was not statistically significant. This was confirmed by multivariate analyses, where this difference reached statistical significance when using a combination of variables involving zuclopenthixol. Larger-scale studies are recommended to better investigate the effectiveness of venlafaxine treatment in patients with intellectual disabilities and ASD.

  18. Borrowing of strength and study weights in multivariate and network meta-analysis.

    PubMed

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2017-12-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of 'borrowing of strength'. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).

  19. Bioconductor Workflow for Microbiome Data Analysis: from raw reads to community analyses

    PubMed Central

    Callahan, Ben J.; Sankaran, Kris; Fukuyama, Julia A.; McMurdie, Paul J.; Holmes, Susan P.

    2016-01-01

    High-throughput sequencing of PCR-amplified taxonomic markers (like the 16S rRNA gene) has enabled a new level of analysis of complex bacterial communities known as microbiomes. Many tools exist to quantify and compare abundance levels or OTU composition of communities in different conditions. The sequencing reads have to be denoised and assigned to the closest taxa from a reference database. Common approaches use a notion of 97% similarity and normalize the data by subsampling to equalize library sizes. In this paper, we show that statistical models allow more accurate abundance estimates. By providing a complete workflow in R, we enable the user to do sophisticated downstream statistical analyses, whether parametric or nonparametric. We provide examples of using the R packages dada2, phyloseq, DESeq2, ggplot2 and vegan to filter, visualize and test microbiome data. We also provide examples of supervised analyses using random forests and nonparametric testing using community networks and the ggnetwork package. PMID:27508062

  20. Data on xylem sap proteins from Mn- and Fe-deficient tomato plants obtained using shotgun proteomics.

    PubMed

    Ceballos-Laita, Laura; Gutierrez-Carbonell, Elain; Takahashi, Daisuke; Abadía, Anunciación; Uemura, Matsuo; Abadía, Javier; López-Millán, Ana Flor

    2018-04-01

This article contains consolidated proteomic data obtained from xylem sap collected from tomato plants grown in Fe- and Mn-sufficient control, as well as Fe-deficient and Mn-deficient conditions. Data presented here cover proteins identified and quantified by shotgun proteomics and Progenesis LC-MS analyses: proteins identified with at least two peptides and showing changes statistically significant (ANOVA; p ≤ 0.05) and above a biologically relevant selected threshold (fold ≥ 2) between treatments are listed. The comparison between Fe-deficient, Mn-deficient and control xylem sap samples using a multivariate statistical data analysis (Principal Component Analysis, PCA) is also included. Data included in this article are discussed in depth in the research article entitled "Effects of Fe and Mn deficiencies on the protein profiles of tomato (Solanum lycopersicum) xylem sap as revealed by shotgun analyses" [1]. This dataset is made available to support the cited study as well to extend analyses at a later stage.

  1. Borrowing of strength and study weights in multivariate and network meta-analysis

    PubMed Central

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2016-01-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of ‘borrowing of strength’. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis). PMID:26546254

  2. Patient experience and process measures of quality of care at home health agencies: Factors associated with high performance.

    PubMed

    Smith, Laura M; Anderson, Wayne L; Lines, Lisa M; Pronier, Cristalle; Thornburg, Vanessa; Butler, Janelle P; Teichman, Lori; Dean-Whittaker, Debra; Goldstein, Elizabeth

    2017-01-01

    We examined the effects of provider characteristics on home health agency performance on patient experience of care (Home Health CAHPS) and process (OASIS) measures. Descriptive, multivariate, and factor analyses were used. While agencies score high on both domains, factor analyses showed that the underlying items represent separate constructs. Freestanding and Visiting Nurse Association agency status, a higher number of home health aides per 100 episodes, and urban location were statistically significant predictors of lower performance. Lack of variation in composite measures potentially led to counterintuitive results for the effects of organizational characteristics. This exploratory study showed the value of having separate quality domains.

  3. Relationship between academic motivation and mathematics achievement among Indian adolescents in Canada and India.

    PubMed

    Areepattamannil, Shaljan

    2014-01-01

    This study examined the relationships between academic motivation (intrinsic motivation, extrinsic motivation, and amotivation) and mathematics achievement among 363 Indian adolescents in India and 355 Indian immigrant adolescents in Canada. Results of hierarchical multiple regression analyses showed that intrinsic motivation, extrinsic motivation, and amotivation were not statistically significantly related to mathematics achievement among Indian adolescents in India. In contrast, both intrinsic motivation and extrinsic motivation were statistically significantly related to mathematics achievement among Indian immigrant adolescents in Canada: intrinsic motivation was a statistically significant positive predictor, whereas extrinsic motivation was a statistically significant negative predictor. Amotivation was not statistically significantly related to mathematics achievement in either group. Implications of the findings for pedagogy and practice are discussed.

  4. Statistical analysis of environmental monitoring data: does a worst case time for monitoring clean rooms exist?

    PubMed

    Cundell, A M; Bean, R; Massimore, L; Maier, C

    1998-01-01

    To determine the relationship between the sampling time of the environmental monitoring, i.e., viable counts, in aseptic filling areas and the microbial count and frequency of alerts for air, surface and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Statistical analyses showed that, except for one floor location that had a significantly higher number of counts (but no alert- or action-level samples) in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst case time for environmental monitoring at that facility and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during the set-up and aseptic filling operation.

  5. Periodontal disease and carotid atherosclerosis: A meta-analysis of 17,330 participants.

    PubMed

    Zeng, Xian-Tao; Leng, Wei-Dong; Lam, Yat-Yin; Yan, Bryan P; Wei, Xue-Mei; Weng, Hong; Kwong, Joey S W

    2016-01-15

    The association between periodontal disease and carotid atherosclerosis has been evaluated primarily in single-center studies, and whether periodontal disease is an independent risk factor of carotid atherosclerosis remains uncertain. This meta-analysis aimed to evaluate the association between periodontal disease and carotid atherosclerosis. We searched PubMed and Embase for relevant observational studies up to February 20, 2015. Two authors independently extracted data from included studies, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for overall and subgroup meta-analyses. Statistical heterogeneity was assessed by the chi-squared test (P<0.1 for statistical significance) and quantified by the I² statistic. Data analysis was conducted using the Comprehensive Meta-Analysis (CMA) software. Fifteen observational studies involving 17,330 participants were included in the meta-analysis. The overall pooled result showed that periodontal disease was associated with carotid atherosclerosis (OR: 1.27, 95% CI: 1.14-1.41; P<0.001) but statistical heterogeneity was substantial (I² = 78.90%). Subgroup analysis adjusted for smoking and diabetes mellitus showed borderline significance (OR: 1.08; 95% CI: 1.00-1.18; P=0.05). Sensitivity and cumulative analyses both indicated that our results were robust. Findings of our meta-analysis indicated that the presence of periodontal disease was associated with carotid atherosclerosis; however, further large-scale, well-conducted clinical studies are needed to explore the precise risk of developing carotid atherosclerosis in patients with periodontal disease. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
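Pooled results of the kind reported above (an overall OR with 95% CI and an I² heterogeneity estimate) come from inverse-variance pooling. The sketch below uses the standard DerSimonian-Laird random-effects method on invented study data; the authors used the CMA software, not this code:

```python
import math

# Hypothetical per-study log odds ratios and standard errors (NOT from the paper).
log_or = [0.30, 0.15, 0.45, 0.10, 0.25]
se = [0.10, 0.12, 0.15, 0.08, 0.20]

w = [1 / s**2 for s in se]  # fixed-effect (inverse-variance) weights
fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)

# Cochran's Q and the I^2 statistic quantify between-study heterogeneity.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_or))
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance, then random-effects pooling.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (s**2 + tau2) for s in se]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
print(or_pooled, ci, i2)
```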

  6. Antiviral treatment of Bell's palsy based on baseline severity: a systematic review and meta-analysis.

    PubMed

    Turgeon, Ricky D; Wilby, Kyle J; Ensom, Mary H H

    2015-06-01

    We conducted a systematic review with meta-analysis to evaluate the efficacy of antiviral agents on complete recovery of Bell's palsy. We searched CENTRAL, Embase, MEDLINE, International Pharmaceutical Abstracts, and sources of unpublished literature to November 1, 2014. Primary and secondary outcomes were complete and satisfactory recovery, respectively. To evaluate statistical heterogeneity, we performed subgroup analysis of baseline severity of Bell's palsy and between-study sensitivity analyses based on risk of allocation and detection bias. The 10 included randomized controlled trials (2419 patients; 807 with severe Bell's palsy at onset) had variable risk of bias, with 9 trials having a high risk of bias in at least 1 domain. Complete recovery was not statistically significantly greater with antiviral use versus no antiviral use in the random-effects meta-analysis of 6 trials (relative risk, 1.06; 95% confidence interval, 0.97-1.16; I² = 65%). Conversely, random-effects meta-analysis of 9 trials showed a statistically significant difference in satisfactory recovery (relative risk, 1.10; 95% confidence interval, 1.02-1.18; I² = 63%). Response to antiviral agents did not differ visually or statistically between patients with severe symptoms at baseline and those with milder disease (test for interaction, P = .11). Sensitivity analyses did not show a clear effect of bias on outcomes. Antiviral agents are not efficacious in increasing the proportion of patients with Bell's palsy who achieve complete recovery, regardless of baseline symptom severity. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study

    PubMed Central

    2013-01-01

    Introduction Small-study effects refer to the phenomenon whereby trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. A ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. An ROR < 1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs < 1, and the others did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); heterogeneity was moderate, with an I² of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be exercised in the interpretation of meta-analyses involving small trials. PMID:23302257
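The ROR comparison described above can be sketched in a few lines. The 2x2 counts below are invented, not taken from the review:

```python
# Sketch of the ratio of odds ratios (ROR) used to detect small-study effects.
# Counts (deaths / total per arm) are hypothetical, NOT from the study.
def odds_ratio(d_trt, n_trt, d_ctl, n_ctl):
    """Mortality odds ratio from a 2x2 table (treatment vs control)."""
    return (d_trt * (n_ctl - d_ctl)) / (d_ctl * (n_trt - d_trt))

or_small = odds_ratio(10, 50, 18, 50)    # pooled over small trials (<100 per arm)
or_large = odds_ratio(80, 400, 95, 400)  # pooled over large trials (>=100 per arm)

# ROR < 1: small trials report a larger beneficial (mortality-reducing) effect.
ror = or_small / or_large
print(round(ror, 2))
```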

  8. Occupational injuries in Bahrain.

    PubMed

    al-Arrayed, A; Hamza, A

    1995-10-01

    A study was conducted to describe the problem of occupational injuries in Bahrain and to highlight solutions that may help prevent or reduce workplace hazards. Data on occupational injuries from 1988 to 1991 from the social insurance records were reviewed and analysed. The data were summarized, grouped and tabulated according to age, sex, nationality, workplace, type of injury, and cause and site of injury. Data were analysed statistically, frequencies were computed and results were presented graphically. The study shows that there was a decline in the number of injuries in 1990 and 1991 due to a general slow-down of economic activity in the Arabian Gulf region during the Gulf War. It also shows that Asian workers are at high risk of occupational injuries.

  9. Evaluation and Assessment of a Biomechanics Computer-Aided Instruction.

    ERIC Educational Resources Information Center

    Washington, N.; Parnianpour, M.; Fraser, J. M.

    1999-01-01

    Describes the Biomechanics Tutorial, a computer-aided instructional tool that was developed at Ohio State University to expedite the transition from lecture to application for undergraduate students. Reports evaluation results that used statistical analyses and student questionnaires to show improved performance on posttests as well as positive…

  10. An Experimental Ecological Study of a Garden Compost Heap.

    ERIC Educational Resources Information Center

    Curds, Tracy

    1985-01-01

    A quantitative study of the fauna of a garden compost heap shows it to be similar to that of organisms found in soil and leaf litter. Materials, methods, and results are discussed and extensive tables of fauna lists, wet/dry masses, and statistical analyses are presented. (Author/DH)

  11. Metaprop: a Stata command to perform meta-analysis of binomial data.

    PubMed

    Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc

    2014-01-01

    Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan, which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins, where the normal approximation procedures often break down, by using the binomial distribution to model the within-study variability or by allowing the Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied to two published meta-analyses: 1) prevalence of HPV infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study-specific and pooled confidence intervals were always within admissible values, contrary to the original publication, where metan was used.
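The Freeman-Tukey double arcsine stabilization that metaprop offers can be illustrated with a short re-implementation. This is a simplified Python sketch with invented counts, not the Stata code; metaprop also uses a refined harmonic-mean back-transformation rather than the naive sin²(t) used here:

```python
import math

def ft_double_arcsine(x, n):
    """Freeman-Tukey double arcsine transform of x events in n trials,
    with its approximate variance 1/(4n + 2)."""
    t = 0.5 * (math.asin(math.sqrt(x / (n + 1)))
               + math.asin(math.sqrt((x + 1) / (n + 1))))
    return t, 1.0 / (4 * n + 2)

# Hypothetical studies (events, sample size) -- note the 0% and 100% studies
# stay in the analysis, mirroring the abstract's point about metaprop.
studies = [(0, 20), (5, 40), (12, 30), (30, 30)]

ts, vs = zip(*(ft_double_arcsine(x, n) for x, n in studies))
w = [1 / v for v in vs]
pooled_t = sum(wi * ti for wi, ti in zip(w, ts)) / sum(w)

# Naive back-transformation to the proportion scale.
pooled_p = math.sin(pooled_t) ** 2
print(round(pooled_p, 3))
```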

  12. Differences in game-related statistics of basketball performance by game location for men's winning and losing teams.

    PubMed

    Gómez, Miguel A; Lorenzo, Alberto; Barakat, Rubén; Ortega, Enrique; Palao, José M

    2008-02-01

    The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004-2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminate analysis). The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = -.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.

  13. mvp - an open-source preprocessor for cleaning duplicate records and missing values in mass spectrometry data.

    PubMed

    Lee, Geunho; Lee, Hyun Beom; Jung, Byung Hwa; Nam, Hojung

    2017-07-01

    Mass spectrometry (MS) data are used to analyze biological phenomena based on chemical species. However, these data often contain unexpected duplicate records and missing values due to technical or biological factors. These 'dirty data' problems increase the difficulty of performing MS analyses because they lead to performance degradation when statistical or machine-learning tests are applied to the data. Thus, we have developed missing values preprocessor (mvp), an open-source software for preprocessing data that might include duplicate records and missing values. mvp uses the property of MS data in which identical chemical species present the same or similar values for key identifiers, such as the mass-to-charge ratio and intensity signal, and forms cliques via graph theory to process dirty data. We evaluated the validity of the mvp process via quantitative and qualitative analyses and compared the results from a statistical test that analyzed the original and mvp-applied data. This analysis showed that using mvp reduces problems associated with duplicate records and missing values. We also examined the effects of using unprocessed data in statistical tests and examined the improved statistical test results obtained with data preprocessed using mvp.
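The grouping idea behind mvp, in which records of the same chemical species present the same or similar key identifiers, can be illustrated with connected components over a similarity graph. This is a simplified toy sketch with invented records and thresholds, not mvp's actual clique algorithm:

```python
from itertools import combinations

records = [  # (record id, m/z, intensity) -- hypothetical values
    ("a", 301.1412, 9.8e5),
    ("b", 301.1415, 9.7e5),   # near-duplicate of "a"
    ("c", 455.2890, 1.2e6),
    ("d", 455.2891, 1.2e6),   # near-duplicate of "c"
    ("e", 512.3001, 4.0e5),
]

def similar(r1, r2, mz_tol=0.001, int_tol=0.05):
    # Treat records as duplicates when m/z agrees within an absolute
    # tolerance and intensities agree within 5% (illustrative thresholds).
    return (abs(r1[1] - r2[1]) <= mz_tol
            and abs(r1[2] - r2[2]) / max(r1[2], r2[2]) <= int_tol)

# Union-find to merge similar records into duplicate groups.
parent = {r[0]: r[0] for r in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for r1, r2 in combinations(records, 2):
    if similar(r1, r2):
        parent[find(r1[0])] = find(r2[0])

groups = {}
for r in records:
    groups.setdefault(find(r[0]), []).append(r[0])
print(sorted(sorted(g) for g in groups.values()))  # [['a', 'b'], ['c', 'd'], ['e']]
```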

  14. Homeopathy: meta-analyses of pooled clinical data.

    PubMed

    Hahn, Robert G

    2013-01-01

    In the first decade of the evidence-based era, which began in the mid-1990s, meta-analyses were used to scrutinize homeopathy for evidence of beneficial effects in medical conditions. In this review, meta-analyses including pooled data from placebo-controlled clinical trials of homeopathy, and the aftermath in the form of debate articles, were analyzed. In 1997 Klaus Linde and co-workers identified 89 clinical trials that showed an overall odds ratio of 2.45 in favor of homeopathy over placebo. There was a trend toward smaller benefit from studies of the highest quality, but the 10 trials with the highest Jadad scores still showed homeopathy had a statistically significant effect. These results challenged academics to perform alternative analyses that, to demonstrate the lack of effect, relied on extensive exclusion of studies, often to the degree that conclusions were based on only 5-10% of the material, or on virtual data. The ultimate argument against homeopathy is the 'funnel plot' published by Aijing Shang's research group in 2005. However, the funnel plot is flawed when applied to a mixture of diseases, because studies with expected strong treatment effects are, for ethical reasons, powered lower than studies with expected weak or unclear treatment effects. To conclude that homeopathy lacks clinical effect, more than 90% of the available clinical trials had to be disregarded. Alternatively, flawed statistical methods had to be applied. Future meta-analyses should focus on the use of homeopathy in specific diseases or groups of diseases instead of pooling data from all clinical trials. © 2013 S. Karger GmbH, Freiburg.

  15. Relationship between sitting volleyball performance and field fitness of sitting volleyball players in Korea

    PubMed Central

    Jeoung, Bogja

    2017-01-01

    The purpose of this study was to evaluate the relationship between sitting volleyball performance and the field fitness of sitting volleyball players. Forty-five elite sitting volleyball players participated in 10 field fitness tests. Additionally, the players’ head coach and coach assessed their volleyball performance (receive and defense, block, attack, and serve). Data were analyzed with SPSS software version 21 by using correlation and regression analyses, and the significance level was set at P < 0.05. The results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, sprint, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on the players’ abilities to attack, serve, and block. Grip strength, t-test, speed, and agility showed a statistically significant relationship with the players’ skill at defense and receive. Our results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on volleyball performance. PMID:29326896

  16. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  17. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    NASA Astrophysics Data System (ADS)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
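The heterogeneity spectrum reported above, P(m) = 8πε²a³/(1 + a²m²)² with ε = 0.05 and a = 3.1 km, can be evaluated directly to see the corner-wavenumber behaviour the authors describe (flat below the corner, steep roll-off above it):

```python
import math

# Power spectrum of random heterogeneity from the abstract:
# P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2, with eps = 0.05 and a = 3.1 km.
EPS, A = 0.05, 3.1  # rms fractional fluctuation; correlation length (km)

def spectrum(m):
    """Power spectral density at wavenumber m (1/km)."""
    return 8 * math.pi * EPS**2 * A**3 / (1 + A**2 * m**2) ** 2

corner = 1 / A  # corner wavenumber (1/km)
# Well below the corner the spectrum is nearly flat; well above it,
# P(m) decays like m**-4.
print(spectrum(0.01 * corner) / spectrum(0.0), spectrum(100 * corner))
```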

  18. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    NASA Astrophysics Data System (ADS)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, in a temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior changing with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to further understand the nature of complex dynamics in hydrology.

  19. An evaluation of GTAW-P versus GTA welding of alloy 718

    NASA Technical Reports Server (NTRS)

    Gamwell, W. R.; Kurgan, C.; Malone, T. W.

    1991-01-01

    Mechanical properties were evaluated to determine statistically whether the pulsed current gas tungsten arc welding (GTAW-P) process produces welds in alloy 718 with room temperature structural performance equivalent to current Space Shuttle Main Engine (SSME) welds manufactured by the constant current GTAW process. Evaluations were conducted on two base metal lots, two filler metal lots, two heat input levels, and two welding processes. The material form was 0.125-inch (3.175-mm) alloy 718 sheet. Prior to welding, sheets were treated to either the ST or STA-1 condition. After welding, panels were left as welded or heat treated to the STA-1 condition, and weld beads were left intact or machined flush. Statistical analyses were performed on yield strength, ultimate tensile strength (UTS), and high cycle fatigue (HCF) properties for all the post-welded material conditions. Analyses of variance were performed on the data to determine if there were any significant effects on UTS or HCF life due to variations in base metal, filler metal, heat input level, or welding process. Statistical analyses showed that the GTAW-P process does produce welds with room temperature structural performance equivalent to current SSME welds manufactured by the GTAW process, regardless of prior material condition or post-welding condition.
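The analyses of variance behind the equivalence finding above reduce to comparing between-group and within-group variability with an F statistic. The sketch below computes a one-way ANOVA F on invented UTS samples for the two processes (not the study data); a small F, below the critical value, is consistent with "no significant effect of welding process":

```python
# One-way ANOVA F statistic, the test underlying the weld-property comparisons.
# Hypothetical UTS samples (ksi) for two welding processes, NOT the study data.
groups = {
    "GTAW":   [178.0, 180.5, 179.2, 181.1],
    "GTAW-P": [179.5, 180.0, 178.8, 181.4],
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g)

df_b, df_w = len(groups) - 1, len(all_vals) - len(groups)
f_stat = (ss_between / df_b) / (ss_within / df_w)
print(round(f_stat, 3))
```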

  20. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  1. Processes and subdivisions in diogenites, a multivariate statistical analysis

    NASA Technical Reports Server (NTRS)

    Harriott, T. A.; Hewins, R. H.

    1984-01-01

    Multivariate statistical techniques used on diogenite orthopyroxene analyses show the relationships that occur within diogenites and the two orthopyroxenite components (class I and II) in the polymict diogenite Garland. Cluster analysis shows that only Peckelsheim is similar to Garland class I (Fe-rich) and the other diogenites resemble Garland class II. The unique diogenite Y 75032 may be related to type I by fractionation. Factor analysis confirms the subdivision and shows that Fe does not correlate with the weakly incompatible elements across the entire pyroxene composition range, indicating that igneous fractionation is not the process controlling total diogenite composition variation. The occurrence of two groups of diogenites is interpreted as the result of sampling or mixing of two main sequences of orthopyroxene cumulates with slightly different compositions.

  2. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and to make meaningful contributions and sound decisions for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on hypothesis testing, which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems, and it is listed as one of the concepts students find most difficult to grasp. The objectives of this study are to explore students’ perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult. Students’ perceived and actual ability were measured using instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches, even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems; these measures indicate that none of the students attempted those problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.
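The Rasch model underlying tools like the Wright map places person ability and item difficulty on the same logit scale. A minimal sketch of the dichotomous model (for illustration only; the study estimated parameters with Winsteps, not this code):

```python
import math

# Dichotomous Rasch model: probability that a person with ability theta
# (in logits) answers an item of difficulty b correctly.
def rasch_p(theta, b):
    return 1 / (1 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5 --
# this is the comparison a Wright map makes visually.
print(rasch_p(0.0, 0.0))  # 0.5

# An abler person facing an easier item has a higher success probability.
print(round(rasch_p(1.0, -0.5), 3))
```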

  3. Decomposing the gap in missed opportunities for vaccination between poor and non-poor in sub-Saharan Africa: A Multicountry Analyses.

    PubMed

    Ndwandwe, Duduzile; Uthman, Olalekan A; Adamu, Abdu A; Sambala, Evanson Z; Wiyeh, Alison B; Olukade, Tawa; Bishwajit, Ghose; Yaya, Sanni; Okwo-Bele, Jean-Marie; Wiysonge, Charles S

    2018-04-24

    Understanding the gaps in missed opportunities for vaccination (MOV) in sub-Saharan Africa (SSA) would inform interventions for improving immunisation coverage towards achieving universal childhood immunisation. We aimed to conduct a multicountry analysis to decompose the gap in MOV between poor and non-poor households in SSA. We used cross-sectional data from 35 Demographic and Health Surveys in SSA conducted between 2007 and 2016. Descriptive statistics were used to examine the gap in MOV between the urban poor and non-poor, and across the selected covariates. Of the 35 countries included in this analysis, 19 showed pro-poor inequality, 5 showed pro-non-poor inequality and the remaining 11 showed no statistically significant inequality. Among the countries with statistically significant pro-poor inequality, the risk difference ranged from 4.2% in DR Congo to 20.1% in Kenya. The factors responsible for the inequality varied across countries. In Madagascar, the largest contributors to inequality in MOV were media access, number of under-five children, and maternal education; in Liberia, by contrast, media access narrowed the inequality in MOV between poor and non-poor households. The findings indicate that in most SSA countries, children belonging to poor households are most likely to have MOV, and that socio-economic inequality in MOV is determined not only by health system functions but also by factors beyond the scope of health authorities and the care delivery system. The findings suggest the need to address the social determinants of health.

  4. Comparison of safety, efficacy and tolerability of dexibuprofen and ibuprofen in the treatment of osteoarthritis of the hip or knee.

    PubMed

    Zamani, Omid; Böttcher, Elke; Rieger, Jörg D; Mitterhuber, Johann; Hawel, Reinhold; Stallinger, Sylvia; Eller, Norbert

    2014-06-01

    In this observer-blinded, multicenter, non-inferiority study, 489 patients suffering from painful osteoarthritis of the hip or knee were included to investigate the safety and tolerability of dexibuprofen vs. ibuprofen powder for oral suspension. Only patients who had experienced everyday joint pain for the past 3 months and "moderate" to "severe" global pain intensity in the involved hip/knee within the last 48 h were enrolled. The treatment period was up to 14 days, with a control visit after 3 days. The test product was dexibuprofen 400 mg powder for oral suspension (daily dose 800 mg) compared to ibuprofen 400 mg powder for oral suspension (daily dose 1,600 mg). Gastrointestinal adverse drug reactions were reported in 8 patients (3.3%) in the dexibuprofen group and in 19 patients (7.8%) in the ibuprofen group. Statistically significant non-inferiority was shown for dexibuprofen. Comparing both groups by a chi-square test showed a statistically significantly lower proportion of related gastrointestinal events in the dexibuprofen group. All analyses of secondary tolerability parameters likewise showed a significantly better safety profile in this therapy setting for dexibuprofen compared to ibuprofen. The sum of pain intensity, pain relief and global assessments showed no significant difference between treatment groups. In summary, the analyses revealed at least non-inferiority in terms of efficacy and a statistically significantly better safety profile for the dexibuprofen treatment.
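The chi-square comparison of adverse-reaction rates can be reproduced approximately. The abstract reports only the event counts and percentages (8 of ~242 vs 19 of ~247), so roughly equal arms of the 489 patients are assumed here:

```python
from scipy.stats import chi2_contingency

# Approximate 2x2 table: rows = treatment arm, cols = (GI ADR, no GI ADR).
# Group sizes are assumptions reconstructed from 8 (3.3%) and 19 (7.8%).
table = [[8, 242 - 8],     # dexibuprofen arm (assumed n = 242)
         [19, 247 - 19]]   # ibuprofen arm (assumed n = 247)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p below 0.05, consistent with the report
```

Note that with Yates' continuity correction (scipy's default for 2x2 tables) the p-value sits near the 0.05 boundary, which is worth keeping in mind when interpreting borderline results like this one.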

  5. On Improving the Quality and Interpretation of Environmental Assessments using Statistical Analysis and Geographic Information Systems

    NASA Astrophysics Data System (ADS)

    Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.

    2014-12-01

    An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in the literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.
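Separating uncertainty from variability, as described for the power-generation case study, is commonly done with a nested (two-dimensional) Monte Carlo simulation. A sketch with made-up emission-factor numbers, not the study's actual data or method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fleet of power plants: variability = real differences between
# plants; uncertainty = imprecise knowledge of each plant's emission factor.
n_plants, n_draws = 50, 2000
true_factors = rng.normal(loc=900, scale=120, size=n_plants)  # g CO2/kWh (invented)

# Outer dimension = between-plant variability; inner draws = measurement uncertainty
samples = np.array([rng.normal(loc=f, scale=50, size=n_draws) for f in true_factors])

variability_spread = true_factors.std(ddof=1)            # between-plant spread
uncertainty_spread = samples.std(axis=1, ddof=1).mean()  # average within-plant spread
print(f"variability ~ {variability_spread:.0f}, uncertainty ~ {uncertainty_spread:.0f}")
```

Keeping the two dimensions apart matters because more measurement effort can shrink the uncertainty spread, while the variability spread is a property of the fleet itself.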

  6. A comparison of InVivoStat with other statistical software packages for analysis of data generated from animal experiments.

    PubMed

    Clark, Robin A; Shoaib, Mohammed; Hewitt, Katherine N; Stanford, S Clare; Bate, Simon T

    2012-08-01

    InVivoStat is a free-to-use statistical software package for analysis of data generated from animal experiments. The package is designed specifically for researchers in the behavioural sciences, where exploiting the experimental design is crucial for reliable statistical analyses. This paper compares the analysis of three experiments conducted using InVivoStat with other widely used statistical packages: SPSS (V19), PRISM (V5), UniStat (V5.6) and Statistica (V9). We show that InVivoStat provides results that are similar to those from the other packages and, in some cases, are more advanced. This investigation provides further validation of InVivoStat and should strengthen users' confidence in this new software package.

  7. Effects of cadmium concentration on ozone-induced phytotoxicity in cress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czuba, M.; Ormrod, D.P.

    1974-01-01

    Cadmium solutions at concentrations of 0, 10, 40, 100, 500 or 1000 ppm were applied to the soil around cress (Lepidium sativum L. cv. Fine Curled) every 4th day for several weeks. Four-week-old plants were fumigated once at ozone levels of 0, 5, 10, 20, 25 or 30-35 pphm for 6 hours. Plants that had received higher concentrations of cadmium showed markedly increased sensitivity to ozone in terms of visible leaf damage after ozone treatment. Plants receiving cadmium solution alone, or ozone treatment alone, either did not show leaf damage or did not show as much leaf damage as plants which had received both treatments. Mineral analyses of plant tissues showed the relationship between tissue content of both essential and toxic cations and the sensitivity of the plant to various ozone levels. Pigment analyses showed changes in chlorophyll amounts and ratios between treatments. Statistical analyses of data for morphological parameters showed an interaction between Cd and ozone treatments over a range of concentrations.

  8. Four modes of optical parametric operation for squeezed state generation

    NASA Astrophysics Data System (ADS)

    Andersen, U. L.; Buchler, B. C.; Lam, P. K.; Wu, J. W.; Gao, J. R.; Bachor, H.-A.

    2003-11-01

    We report a versatile instrument, based on a monolithic optical parametric amplifier, which reliably generates four different types of squeezed light. We obtained vacuum squeezing, low power amplitude squeezing, phase squeezing and bright amplitude squeezing. We show a complete analysis of this light, including a full quantum state tomography. In addition we demonstrate the direct detection of the squeezed state statistics without the aid of a spectrum analyser. This technique makes the nonclassical properties directly visible and allows complete measurement of the statistical moments of the squeezed quadrature.

  9. Progressive statistics for studies in sports medicine and exercise science.

    PubMed

    Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri

    2009-01-01

    Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
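Two of the recommendations above, reporting SD rather than SEM and log-transforming to handle nonuniform (multiplicative) error, are easy to see numerically. A sketch with invented, deliberately skewed data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
x = rng.lognormal(mean=2.0, sigma=0.5, size=200)  # skewed, multiplicative error

# SD describes the spread among individuals; SEM only the precision of the
# mean, so SEM is always smaller and can understate individual differences.
sd = x.std(ddof=1)
sem = sd / np.sqrt(len(x))

# Log transformation turns multiplicative error into additive, roughly
# symmetric error, so standard linear-model assumptions fit better.
print(f"SD={sd:.2f}, SEM={sem:.2f}, "
      f"skew raw={skew(x):.2f}, skew logged={skew(np.log(x)):.2f}")
```

The raw values are strongly right-skewed while their logs are close to symmetric, which is the practical motivation for the log-transformation advice.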

  10. On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations

    PubMed Central

    Lightfoot, Emma; O’Connell, Tamsin C.

    2016-01-01

    Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of previous local analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific homeland should not be attempted. PMID:27124001
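The IQR and median-absolute-deviation rules recommended above for flagging isotopic outliers (potential migrants) are straightforward to implement. A sketch with invented δ18O values, where 21.9 is a deliberate outlier:

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Tukey's rule: flag points beyond k*IQR outside the quartiles."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

def mad_outliers(values, k=3.0):
    """Flag points more than k scaled MADs from the median."""
    values = np.asarray(values)
    med = np.median(values)
    mad = 1.4826 * np.median(np.abs(values - med))  # scaled to ~SD for normal data
    return values[np.abs(values - med) > k * mad].tolist()

# Invented enamel d18O values (permil); 21.9 is the planted outlier
d18o = [26.1, 26.4, 25.8, 26.0, 26.3, 25.9, 26.2, 26.5, 25.7, 21.9]
print(iqr_outliers(d18o))  # -> [21.9]
print(mad_outliers(d18o))  # -> [21.9]
```

Both rules use robust spread estimates, so a genuine migrant's extreme value does not inflate the spread used to detect it, unlike mean-and-SD rules.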

  11. Antibiotic treatment of bacterial vaginosis in pregnancy: multiple meta-analyses and dilemmas in interpretation.

    PubMed

    Varma, Rajesh; Gupta, Janesh K

    2006-01-01

    There is considerable evidence to show an association between genital tract infections, such as bacterial vaginosis (BV), and preterm delivery (PTD). Meta-analyses to date have shown screening and treating BV in pregnancy does not prevent PTD. This casts doubt on a cause and effect relationship between BV and PTD. However, the meta-analyses reported significant clinical, methodological and statistical heterogeneity of the included studies. We therefore undertook a repeat meta-analysis, included recently published trials, and applied strict criteria on data extraction. We meta-analysed low and high-risk pregnancies separately. We found that screening and treating BV in low-risk pregnancies produced a statistically significant reduction in spontaneous PTD (RR 0.73; 95% CI 0.55-0.98). This beneficial effect was not observed in high-risk or combined risk groups. The differences in antibiotic sensitivity between high and low risk groups may suggest differing causal contributions of the infectious process to PTD. The evidence, along with prior knowledge of differing predisposing factors and prognosis between these risk groups, supports the hypothesis that PTD in high and low risk pregnant women are different entities and not linear extremes of the same syndrome.
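A pooled relative risk with its confidence interval, like the RR 0.73 (95% CI 0.55-0.98) quoted above, is typically obtained by inverse-variance pooling on the log scale. A generic fixed-effect sketch with hypothetical per-trial estimates (not the trials actually meta-analysed here):

```python
import math

# Hypothetical per-trial (RR, CI_low, CI_high) tuples, illustrative only
trials = [(0.60, 0.35, 1.03), (0.85, 0.55, 1.31), (0.70, 0.45, 1.09)]

log_rrs, weights = [], []
for rr_i, lo, hi in trials:
    log_rr = math.log(rr_i)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the 95% CI
    log_rrs.append(log_rr)
    weights.append(1 / se**2)  # inverse-variance (fixed-effect) weight

pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
print(f"pooled RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

Note that each trial here is non-significant on its own (every CI crosses 1), yet the pooled estimate excludes 1: the same mechanism by which the subgroup meta-analysis above reaches significance in low-risk pregnancies.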

  12. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

    observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or...indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for...

  13. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named the result "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses (including competing-risk analyses and time-dependent covariates), by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and ensure that the statistical process is overseen by a supervisor.

  14. The Influence of Statistical versus Exemplar Appeals on Indian Adults' Health Intentions: An Investigation of Direct Effects and Intervening Persuasion Processes.

    PubMed

    McKinley, Christopher J; Limbu, Yam; Jayachandran, C N

    2017-04-01

    In two separate investigations, we examined the persuasive effectiveness of statistical versus exemplar appeals on Indian adults' smoking cessation and mammography screening intentions. To more comprehensively address persuasion processes, we explored whether message response and perceived message effectiveness functioned as antecedents to persuasive effects. Results showed that statistical appeals led to higher levels of health intentions than exemplar appeals. In addition, findings from both studies indicated that statistical appeals stimulated more attention and were perceived as more effective than anecdotal accounts. Among male smokers, statistical appeals also generated greater cognitive processing than exemplar appeals. Subsequent mediation analyses revealed that message response and perceived message effectiveness fully carried the influence of appeal format on health intentions. Given these findings, future public health initiatives conducted among similar populations should design messages that include substantive factual information while ensuring that this content is perceived as credible and valuable.

  15. International Students Reported for Academic Integrity Violations: Demographics, Retention, and Graduation

    ERIC Educational Resources Information Center

    Fass-Holmes, Barry

    2017-01-01

    How many international students are reported for academic integrity violations (AIV), what are their demographics, and how do AIV sanctions affect their retention and/or graduation? Descriptive statistical analyses showed that the number of internationals reported for AIVs at an American West Coast public university increased almost six-fold…

  16. Test 6, Test 7, and Gas Standard Analysis Results

    NASA Technical Reports Server (NTRS)

    Perez, Horacio, III

    2007-01-01

    This viewgraph presentation shows results of analyses on odor, toxic off-gassing and gas standards. The topics include: 1) Statistical Analysis Definitions; 2) Odor Analysis Results, NASA Standard 6001 Test 6; 3) Toxic Off-gassing Analysis Results, NASA Standard 6001 Test 7; and 4) Gas Standard Results, NASA Standard 6001 Test 7.

  17. Mobile phones and head tumours. The discrepancies in cause-effect relationships in the epidemiological studies - how do they arise?

    PubMed

    Levis, Angelo G; Minicuci, Nadia; Ricci, Paolo; Gennaro, Valerio; Garbisa, Spiridione

    2011-06-17

    Whether or not there is a relationship between use of mobile phones (analogue and digital cellulars, and cordless) and head tumour risk (brain tumours, acoustic neuromas, and salivary gland tumours) is still a matter of debate; progress requires a critical analysis of the methodological elements necessary for an impartial evaluation of contradictory studies. A close examination of the protocols and results from all case-control and cohort studies, pooled- and meta-analyses on head tumour risk for mobile phone users was carried out, and for each study the elements necessary for evaluating its reliability were identified. In addition, new meta-analyses of the literature data were undertaken. These were limited to subjects with mobile phone latency time compatible with the progression of the examined tumours, and with analysis of the laterality of head tumour localisation corresponding to the habitual laterality of mobile phone use. Blind protocols, free from errors, bias, and financial conditioning factors, give positive results that reveal a cause-effect relationship between long-term mobile phone use or latency and statistically significant increase of ipsilateral head tumour risk, with biological plausibility. Non-blind protocols, which instead are affected by errors, bias, and financial conditioning factors, give negative results with systematic underestimate of such risk. However, also in these studies a statistically significant increase in risk of ipsilateral head tumours is quite common after more than 10 years of mobile phone use or latency. The meta-analyses, ours included, examining only data on ipsilateral tumours in subjects using mobile phones since or for at least 10 years, show large and statistically significant increases in risk of ipsilateral brain gliomas and acoustic neuromas.
Our analysis of the literature studies and of the results from meta-analyses of the significant data alone shows an almost doubling of the risk of head tumours induced by long-term mobile phone use or latency.

  18. Mobile phones and head tumours. The discrepancies in cause-effect relationships in the epidemiological studies - how do they arise?

    PubMed Central

    2011-01-01

    Background Whether or not there is a relationship between use of mobile phones (analogue and digital cellulars, and cordless) and head tumour risk (brain tumours, acoustic neuromas, and salivary gland tumours) is still a matter of debate; progress requires a critical analysis of the methodological elements necessary for an impartial evaluation of contradictory studies. Methods A close examination of the protocols and results from all case-control and cohort studies, pooled- and meta-analyses on head tumour risk for mobile phone users was carried out, and for each study the elements necessary for evaluating its reliability were identified. In addition, new meta-analyses of the literature data were undertaken. These were limited to subjects with mobile phone latency time compatible with the progression of the examined tumours, and with analysis of the laterality of head tumour localisation corresponding to the habitual laterality of mobile phone use. Results Blind protocols, free from errors, bias, and financial conditioning factors, give positive results that reveal a cause-effect relationship between long-term mobile phone use or latency and statistically significant increase of ipsilateral head tumour risk, with biological plausibility. Non-blind protocols, which instead are affected by errors, bias, and financial conditioning factors, give negative results with systematic underestimate of such risk. However, also in these studies a statistically significant increase in risk of ipsilateral head tumours is quite common after more than 10 years of mobile phone use or latency. The meta-analyses, ours included, examining only data on ipsilateral tumours in subjects using mobile phones since or for at least 10 years, show large and statistically significant increases in risk of ipsilateral brain gliomas and acoustic neuromas.
Conclusions Our analysis of the literature studies and of the results from meta-analyses of the significant data alone shows an almost doubling of the risk of head tumours induced by long-term mobile phone use or latency. PMID:21679472

  19. The Marburg-Münster Affective Disorders Cohort Study (MACS): A quality assurance protocol for MR neuroimaging data.

    PubMed

    Vogelbacher, Christoph; Möbius, Thomas W D; Sommer, Jens; Schuster, Verena; Dannlowski, Udo; Kircher, Tilo; Dempfle, Astrid; Jansen, Andreas; Bopp, Miriam H A

    2018-05-15

    Large, longitudinal, multi-center MR neuroimaging studies require comprehensive quality assurance (QA) protocols for assessing the general quality of the compiled data, indicating potential malfunctions in the scanning equipment, and evaluating inter-site differences that need to be accounted for in subsequent analyses. We describe the implementation of a QA protocol for functional magnetic resonance imaging (fMRI) data based on the regular measurement of an MRI phantom and an extensive variety of currently published QA statistics. The protocol is implemented in the MACS (Marburg-Münster Affective Disorders Cohort Study, http://for2107.de/), a two-center research consortium studying the neurobiological foundations of affective disorders. Between February 2015 and October 2016, 1214 phantom measurements were acquired using a standard fMRI protocol. Using 444 healthy control subjects who were measured in the cohort between 2014 and 2016, we investigate the extent of between-site differences in contrast to the dependence on subject-specific covariates (age and sex) for structural MRI, fMRI, and diffusion tensor imaging (DTI) data. We show that most of the presented QA statistics differ severely not only between the two scanners used for the cohort but also between experimental settings (e.g. hardware and software changes), demonstrate that some of these statistics depend on external variables (e.g. time of day, temperature), highlight their strong dependence on proper handling of the MRI phantom, and show how the use of a phantom holder may balance this dependence. Site effects, however, do not only exist for the phantom data, but also for human MRI data. Using T1-weighted structural images, we show that total intracranial (TIV), grey matter (GMV), and white matter (WMV) volumes significantly differ between the MR scanners, showing large effect sizes. 
Voxel-based morphometry (VBM) analyses show that these structural differences observed between scanners are most pronounced in the bilateral basal ganglia, thalamus, and posterior regions. Using DTI data, we also show that fractional anisotropy (FA) differs between sites in almost all regions assessed. When pooling data from multiple centers, our data show that it is a necessity to account not only for inter-site differences but also for hardware and software changes of the scanning equipment. Also, the strong dependence of the QA statistics on the reliable placement of the MRI phantom shows that the use of a phantom holder is recommended to reduce the variance of the QA statistics and thus to increase the probability of detecting potential scanner malfunctions. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. The sumLINK statistic for genetic linkage analysis in the presence of heterogeneity.

    PubMed

    Christensen, G B; Knight, S; Camp, N J

    2009-11-01

    We present the "sumLINK" statistic--the sum of multipoint LOD scores for the subset of pedigrees with nominally significant linkage evidence at a given locus--as an alternative to common methods to identify susceptibility loci in the presence of heterogeneity. We also suggest the "sumLOD" statistic (the sum of positive multipoint LOD scores) as a companion to the sumLINK. sumLINK analysis identifies genetic regions of extreme consistency across pedigrees without regard to negative evidence from unlinked or uninformative pedigrees. Significance is determined by an innovative permutation procedure based on genome shuffling that randomizes linkage information across pedigrees. This procedure for generating the empirical null distribution may be useful for other linkage-based statistics as well. Using 500 genome-wide analyses of simulated null data, we show that the genome shuffling procedure results in the correct type 1 error rates for both the sumLINK and sumLOD. The power of the statistics was tested using 100 sets of simulated genome-wide data from the alternative hypothesis from GAW13. Finally, we illustrate the statistics in an analysis of 190 aggressive prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics, where we identified a new susceptibility locus. We propose that the sumLINK and sumLOD are ideal for collaborative projects and meta-analyses, as they do not require any sharing of identifiable data between contributing institutions. Further, loci identified with the sumLINK have good potential for gene localization via statistical recombinant mapping, as, by definition, several linked pedigrees contribute to each peak.
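Both statistics are simple functions of the per-pedigree multipoint LOD scores at a locus. A minimal sketch with toy values, using LOD >= 0.588 as the nominal significance threshold (the single-pedigree equivalent of p <= 0.05); the real method's permutation procedure shuffles whole genome-wide LOD curves across pedigrees and is omitted here for brevity:

```python
def sum_link(lods, threshold=0.588):
    """sumLINK: sum of LOD scores over pedigrees with nominally
    significant linkage evidence (LOD >= threshold) at the locus."""
    return sum(lod for lod in lods if lod >= threshold)

def sum_lod(lods):
    """sumLOD: companion statistic, the sum of all positive LOD scores."""
    return sum(lod for lod in lods if lod > 0)

# Toy per-pedigree multipoint LOD scores at one locus
lods = [1.2, -0.4, 0.9, 0.1, 2.3, -1.1, 0.7]
print(sum_link(lods))  # 1.2 + 0.9 + 2.3 + 0.7 = 5.1
print(sum_lod(lods))   # also includes the 0.1 pedigree = 5.2
```

The key design choice is visible in the code: unlinked pedigrees with negative LODs (heterogeneity) simply drop out rather than cancelling the signal from linked pedigrees.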

  1. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at the ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis has not improved in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
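Why ignoring the intereye correlation matters can be seen from the standard design-effect formula for clustered observations: two eyes from one patient are not two independent data points. A sketch with an assumed (illustrative) intereye correlation:

```python
def effective_sample_size(n_patients, eyes_per_patient=2, icc=0.6):
    """Effective number of independent observations when analysing both eyes.
    icc = intereye (intraclass) correlation; the value 0.6 is an assumption
    for illustration, not a figure from the paper."""
    n_obs = n_patients * eyes_per_patient
    design_effect = 1 + (eyes_per_patient - 1) * icc
    return n_obs / design_effect

# Treating 2n eyes as independent overstates the information in the data:
n_eff = effective_sample_size(100)
print(round(n_eff))  # 200 eyes behave like ~125 independent observations
```

Analyses that treat the full 200 eyes as independent therefore use standard errors that are too small, producing spuriously narrow confidence intervals and inflated significance, which is the error the review found in ~90% of two-eye studies.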

  3. Effect of open rhinoplasty on the smile line.

    PubMed

    Tabrizi, Reza; Mirmohamadsadeghi, Hoori; Daneshjoo, Danadokht; Zare, Samira

    2012-05-01

    Open rhinoplasty is an esthetic surgical technique that is becoming increasingly popular, and can affect the nose and upper lip compartments. The aim of this study was to evaluate the effect of open rhinoplasty on tooth show and the smile line. The study participants were 61 patients with a mean age of 24.3 years (range, 17.2 to 39.6 years). The surgical procedure consisted of an esthetic open rhinoplasty without alar resection. Analysis of tooth show was limited to pre- and postoperative (at 12 months) tooth show measurements at rest and the maximum smile with a ruler (when participants held their heads naturally). Statistical analyses were performed with SPSS 13.0, and paired-sample t tests were used to compare tooth show means before and after the operation. Analysis of the rest position showed no statistically significant change in tooth show (P = .15), but analysis of participants' maximum smile data showed a statistically significant increase in tooth show after surgery (P < .05). In contrast, Pearson correlation analysis showed a positive relation between rhinoplasty and tooth show increases in maximum smile, especially in subjects with high smile lines. This study shows that the nasolabial compartment is a single unit and any change in 1 part may influence the other parts. Further studies should be conducted to investigate these interactions. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  5. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
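
    The pooling step of such a meta-analysis can be sketched with standard inverse-variance weighting (illustrative effect sizes and variances, not the article's data or its SPSS/R macros):

```python
import math

# Hypothetical per-study effect sizes (d) and their variances.
studies = [(0.62, 0.041), (0.35, 0.028), (0.51, 0.055), (0.44, 0.033)]

# Inverse-variance fixed-effect pooling: each study is weighted by 1 / variance.
weights = [1.0 / v for _, v in studies]
d_fixed = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_fixed = math.sqrt(1.0 / sum(weights))
ci = (d_fixed - 1.96 * se_fixed, d_fixed + 1.96 * se_fixed)
print(round(d_fixed, 3), tuple(round(x, 3) for x in ci))
```

    A random-effects version would add a between-study variance estimate to each study's variance before weighting.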

  6. Euclidean distance can identify the mannitol level that produces the most remarkable integral effect on sugarcane micropropagation in temporary immersion bioreactors.

    PubMed

    Gómez, Daviel; Hernández, L Ázaro; Yabor, Lourdes; Beemster, Gerrit T S; Tebbe, Christoph C; Papenbrock, Jutta; Lorenzo, José Carlos

    2018-03-15

    Plant scientists usually record several indicators in their abiotic factor experiments. The common statistical management involves univariate analyses. Such analyses generally create a split picture of the effects of experimental treatments since each indicator is addressed independently. The Euclidean distance combined with the information of the control treatment could have potential as an integrating indicator. The Euclidean distance has demonstrated its usefulness in many scientific fields but, as far as we know, it has not yet been employed for plant experimental analyses. To exemplify the use of the Euclidean distance in this field, we performed an experiment focused on the effects of mannitol on sugarcane micropropagation in temporary immersion bioreactors. Five mannitol concentrations were compared: 0, 50, 100, 150 and 200 mM. As dependent variables we recorded shoot multiplication rate, fresh weight, and levels of aldehydes, chlorophylls, carotenoids and phenolics. The statistical protocol we then carried out integrated all dependent variables to easily identify the mannitol concentration that produced the most remarkable integral effect. Results provided by the Euclidean distance demonstrate a gradually increasing distance from the control as a function of increasing mannitol concentration. 200 mM mannitol caused the most significant alteration of sugarcane biochemistry and physiology under the experimental conditions described here. This treatment showed the longest statistically significant Euclidean distance to the control treatment (2.38). In contrast, 50 and 100 mM mannitol showed the lowest Euclidean distances (0.61 and 0.84, respectively) and thus weak integrated effects of mannitol. The analysis shown here indicates that the use of the Euclidean distance can contribute to establishing a more integrated evaluation of the contrasting mannitol treatments.
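
    The integrating-indicator idea can be sketched as follows (indicator values are invented for illustration, not the paper's measurements): standardize each indicator against the control, then take the Euclidean distance of each treatment to the control.

```python
import math

# Hypothetical indicator means per treatment (e.g. multiplication rate,
# fresh weight, aldehydes, phenolics); the control maps to distance 0.
control = [5.1, 210.0, 1.2, 0.9]
treatments = {
    "50 mM":  [4.8, 205.0, 1.3, 1.0],
    "200 mM": [2.1, 120.0, 2.4, 1.8],
}

# Divide by the control value so indicators with large units don't dominate.
def distance_to_control(t, c):
    return math.sqrt(sum((ti / ci - 1.0) ** 2 for ti, ci in zip(t, c)))

dists = {k: distance_to_control(v, control) for k, v in treatments.items()}
print(dists)  # the largest distance flags the strongest integral effect
```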

  7. Racial disparities in diabetes mortality in the 50 most populous US cities.

    PubMed

    Rosenstock, Summer; Whitman, Steve; West, Joseph F; Balkin, Michael

    2014-10-01

    While studies have consistently shown that in the USA, non-Hispanic Blacks (Blacks) have higher diabetes prevalence, complication and death rates than non-Hispanic Whites (Whites), there are no studies that compare disparities in diabetes mortality across the largest US cities. This study presents and compares Black/White age-adjusted diabetes mortality rate ratios (RRs), calculated using national death files and census data, for the 50 most populous US cities. Relationships between city-level diabetes mortality RRs and 12 ecological variables were explored using bivariate correlation analyses. Multivariate analyses were conducted using negative binomial regression to examine how much of the disparity could be explained by these variables. Blacks had statistically significantly higher mortality rates compared to Whites in 39 of the 41 cities included in analyses, with statistically significant rate ratios ranging from 1.57 (95 % CI: 1.33-1.86) in Baltimore to 3.78 (95 % CI: 2.84-5.02) in Washington, DC. Analyses showed that economic inequality was strongly correlated with the diabetes mortality disparity, driven by differences in White poverty levels. This was followed by segregation. Multivariate analyses showed that adjusting for Black/White poverty alone explained 58.5 % of the disparity. Adjusting for Black/White poverty and segregation explained 72.6 % of the disparity. This study emphasizes the role that inequalities in social and economic determinants, rather than for example poverty on its own, play in Black/White diabetes mortality disparities. It also highlights how the magnitude of the disparity and the factors that influence it can vary greatly across cities, underscoring the importance of using local data to identify context specific barriers and develop effective interventions to eliminate health disparities.
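
    The rate-ratio calculation behind such comparisons can be sketched with standard formulas (counts and populations below are hypothetical, and the age adjustment used in the study is omitted):

```python
import math

# Hypothetical city-level deaths and populations for two groups.
deaths_black, pop_black = 480, 600_000
deaths_white, pop_white = 520, 1_500_000

rate_b = deaths_black / pop_black
rate_w = deaths_white / pop_white
rr = rate_b / rate_w

# Approximate 95% CI on the log scale, the usual approach for rate ratios.
se_log = math.sqrt(1 / deaths_black + 1 / deaths_white)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)
print(round(rr, 2), (round(lo, 2), round(hi, 2)))
```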

  8. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
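
    The inter-amount-time idea can be sketched in a few lines (flow values are hypothetical; real analyses would interpolate fractional time steps rather than count whole ones): instead of sampling flow at fixed time steps, record the time needed to accumulate a fixed amount.

```python
# One hypothetical flow value per time step.
flow = [0.2, 0.2, 0.3, 5.0, 4.0, 0.5, 0.2, 0.1, 0.1, 3.0]
target = 1.0  # fixed amount per sample

inter_amount_times = []
acc, steps = 0.0, 0
for q in flow:
    acc += q
    steps += 1
    while acc >= target:          # one step may cross several targets
        inter_amount_times.append(steps)
        acc -= target
        steps = 0                 # zeros mark targets crossed in the same step

print(inter_amount_times)  # short times correspond to flashy high-flow periods
```

    Sampling is thereby tuned to local flow variability: flashy periods contribute many short inter-amount times, droughts a few long ones.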

  9. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  10. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  11. Distinguishing synchronous and time-varying synergies using point process interval statistics: motor primitives in frog and rat

    PubMed Central

    Hart, Corey B.; Giszter, Simon F.

    2013-01-01

    We present and apply a method that uses point process statistics to discriminate the forms of synergies in motor pattern data, prior to explicit synergy extraction. The method uses electromyogram (EMG) pulse peak timing or onset timing. Peak timing is preferable in complex patterns where pulse onsets may be overlapping. An interval statistic derived from the point processes of EMG peak timings distinguishes time-varying synergies from synchronous synergies (SS). Model data shows that the statistic is robust for most conditions. Its application to both frog hindlimb EMG and rat locomotion hindlimb EMG show data from these preparations is clearly most consistent with synchronous synergy models (p < 0.001). Additional direct tests of pulse and interval relations in frog data further bolster the support for synchronous synergy mechanisms in these data. Our method and analyses support separated control of rhythm and pattern of motor primitives, with the low level execution primitives comprising pulsed SS in both frog and rat, and both episodic and rhythmic behaviors. PMID:23675341

  12. Geographically Sourcing Cocaine's Origin - Delineation of the Nineteen Major Coca Growing Regions in South America.

    PubMed

    Mallette, Jennifer R; Casale, John F; Jordan, James; Morello, David R; Beyer, Paul M

    2016-03-23

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (²H and ¹⁸O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions.

  13. Geographically Sourcing Cocaine’s Origin - Delineation of the Nineteen Major Coca Growing Regions in South America

    NASA Astrophysics Data System (ADS)

    Mallette, Jennifer R.; Casale, John F.; Jordan, James; Morello, David R.; Beyer, Paul M.

    2016-03-01

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (²H and ¹⁸O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions.

  14. A weighted U-statistic for genetic association analyses of sequencing data.

    PubMed

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL 4 and very low density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.
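
    The U-statistic idea underlying such tests can be sketched generically (this is an illustrative toy, not the WU-SEQ implementation): score all subject pairs by phenotype similarity, weighted by genotype similarity.

```python
# Hypothetical minor-allele counts at one variant and a quantitative phenotype.
genotypes  = [0, 0, 1, 1, 2, 2]
phenotypes = [0.1, 0.3, 1.1, 0.9, 2.2, 2.0]

def geno_weight(gi, gj):      # toy genotype-similarity weight
    return 1.0 if gi == gj else 0.0

def pheno_sim(yi, yj):        # toy phenotype-similarity kernel
    return -abs(yi - yj)

n = len(genotypes)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
u = sum(geno_weight(genotypes[i], genotypes[j]) *
        pheno_sim(phenotypes[i], phenotypes[j])
        for i, j in pairs) / len(pairs)
print(u)  # near-zero here: genotypically matched pairs have similar phenotypes
```

    Because nothing is assumed about the phenotype distribution, kernels like these are what make U-statistic tests robust to heavy tails.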

  15. Assessing potential effects of highway runoff on receiving-water quality at selected sites in Oregon with the Stochastic Empirical Loading and Dilution Model (SELDM)

    USGS Publications Warehouse

    Risley, John C.; Granato, Gregory E.

    2014-01-01

    An analysis of the use of grab sampling and nonstochastic upstream modeling methods was done to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the criterion concentration of the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used; careful selection of representative values is therefore important.
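
    The dilution effect named above reduces to a flow-weighted mass balance, sketched here with illustrative numbers (this is the general mixing relation, not SELDM's stochastic machinery):

```python
# Downstream concentration is a flow-weighted mix of upstream water and runoff.
def downstream_conc(c_up, q_up, c_runoff, q_runoff):
    return (c_up * q_up + c_runoff * q_runoff) / (q_up + q_runoff)

# A large upstream drainage area (high q_up) dilutes the runoff signal.
diluted = downstream_conc(c_up=5.0, q_up=90.0, c_runoff=120.0, q_runoff=10.0)
poorly_diluted = downstream_conc(5.0, 10.0, 120.0, 10.0)
print(diluted, poorly_diluted)
```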

  16. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing-error results show that fourth-order polynomial interpolation provides the best fit to option prices, yielding the lowest error.
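
    The LOOCV selection criterion can be sketched as follows. Strikes and prices are hypothetical, and the two candidates are simple stand-ins (nearest-neighbour vs piecewise-linear) rather than the paper's polynomial and spline fits; the mechanics of leave-one-out scoring are the same.

```python
# Option quotes: strike -> price (hypothetical).
strikes = [90.0, 95.0, 100.0, 105.0, 110.0]
prices  = [12.1,  8.3,   5.2,   3.0,   1.6]

def linear(x, xs, ys):
    # Piecewise-linear interpolation between bracketing strikes.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[0] if x < xs[0] else ys[-1]

def nearest(x, xs, ys):
    return min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

def loocv_error(interp):
    # Leave each interior quote out, refit on the rest, score squared error.
    err = 0.0
    for i in range(1, len(strikes) - 1):
        xs = strikes[:i] + strikes[i + 1:]
        ys = prices[:i] + prices[i + 1:]
        err += (interp(strikes[i], xs, ys) - prices[i]) ** 2
    return err

print(loocv_error(linear), loocv_error(nearest))  # lower error wins
```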

  17. Quantitative Susceptibility Mapping after Sports-Related Concussion.

    PubMed

    Koch, K M; Meier, T B; Karr, R; Nencka, A S; Muftuler, L T; McCrea, M

    2018-06-07

    Quantitative susceptibility mapping using MR imaging can assess changes in brain tissue structure and composition. This report presents preliminary results demonstrating changes in tissue magnetic susceptibility after sports-related concussion. Longitudinal quantitative susceptibility mapping metrics were produced from imaging data acquired from cohorts of concussed and control football athletes. One hundred thirty-six quantitative susceptibility mapping datasets were analyzed across 3 separate visits (24 hours after injury, 8 days postinjury, and 6 months postinjury). Longitudinal quantitative susceptibility mapping group analyses were performed on stability-thresholded brain tissue compartments and selected subregions. Clinical concussion metrics were also measured longitudinally in both cohorts and compared with the measured quantitative susceptibility mapping. Statistically significant increases in white matter susceptibility were identified in the concussed athlete group during the acute (24 hour) and subacute (day 8) period. These effects were most prominent at the 8-day visit but recovered and showed no significant difference from controls at the 6-month visit. The subcortical gray matter showed no statistically significant group differences. Observed susceptibility changes after concussion appeared to outlast self-reported clinical recovery metrics at a group level. At an individual subject level, susceptibility increases within the white matter showed statistically significant correlations with return-to-play durations. The results of this preliminary investigation suggest that sports-related concussion can induce physiologic changes to brain tissue that can be detected using MR imaging-based magnetic susceptibility estimates. In group analyses, the observed tissue changes appear to persist beyond those detected on clinical outcome assessments and were associated with return-to-play duration after sports-related concussion. 
© 2018 by American Journal of Neuroradiology.

  18. Dynamic properties of small-scale solar wind plasma fluctuations.

    PubMed

    Riazantseva, M O; Budaev, V P; Zelenyi, L M; Zastenker, G N; Pavlos, G P; Safrankova, J; Nemecek, Z; Prech, L; Nemec, F

    2015-05-13

    The paper presents the latest results of studies of small-scale fluctuations in the turbulent flow of the solar wind (SW), using measurements with extremely high temporal resolution (up to 0.03 s) from the bright monitor of the SW (BMSW) plasma spectrometer operating on the SPECTR-R astrophysical spacecraft at distances up to 350,000 km from the Earth. The spectra of SW ion flux fluctuations in the range of scales between 0.03 and 100 s are systematically analysed. The difference of slopes in the low- and high-frequency parts of the spectra and the frequency of the break point between these two characteristic slopes were analysed for different conditions in the SW. The statistical properties of the SW ion flux fluctuations were thoroughly analysed on scales less than 10 s. A high level of intermittency is demonstrated. The extended self-similarity of SW ion flux turbulent flow is constantly observed. The approximation of the non-Gaussian probability distribution function of ion flux fluctuations by Tsallis statistics shows the non-extensive character of SW fluctuations. Statistical characteristics of ion flux fluctuations are compared with the predictions of a log-Poisson model. The log-Poisson parametrization of the structure function scaling has shown that well-defined filament-like plasma structures are, as a rule, observed in the turbulent SW flows. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  19. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; count and position of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropicity in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. These evidences indicate that space-time epidemics modeling should include environmental setting (e.g., vineyard geometry and topography) to capture anisotropicity as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
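
    A Complete Spatial Randomness test of the kind applied here can be sketched with a Monte Carlo comparison of mean nearest-neighbour distances (positions and field sizes are invented; the paper's analysis is far richer):

```python
import math
import random

random.seed(7)

def mean_nn_distance(pts):
    # Average distance from each point to its nearest neighbour.
    return sum(min(math.dist(p, q) for q in pts if q is not p)
               for p in pts) / len(pts)

# "Observed" symptomatic plants, artificially clustered in one field corner.
observed = [(random.uniform(0, 30), random.uniform(0, 30)) for _ in range(40)]
obs_stat = mean_nn_distance(observed)

# Monte Carlo reference under CSR over the whole (hypothetical) 100 m field.
sims = []
for _ in range(200):
    pts = [(random.uniform(0, 100), random.uniform(0, 100))
           for _ in range(40)]
    sims.append(mean_nn_distance(pts))

# Fraction of CSR simulations at least as clustered as the observation.
p_lower = sum(s <= obs_stat for s in sims) / len(sims)
print(obs_stat, p_lower)  # small p_lower -> aggregation relative to CSR
```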

  20. Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.

    2018-04-01

    Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) from the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z in [2.20,2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12 σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.
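
    The pair-counting logic of a two-point correlation estimator can be sketched in miniature (toy 2D positions and the simple natural estimator DD/RR − 1, not the paper's 2PACF pipeline or covariance analysis):

```python
import math
import random

random.seed(3)

def pair_seps(pts):
    # All pairwise separations (here plain Euclidean, standing in for
    # angular separation on the sky).
    return [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]

# Clustered "data" points vs a random catalogue with the same footprint.
data = [(random.gauss(5.0, 0.5), random.gauss(5.0, 0.5)) for _ in range(60)]
rand = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(60)]

lo_edge, hi_edge = 0.0, 1.0   # one separation bin (illustrative units)
dd = sum(lo_edge <= s < hi_edge for s in pair_seps(data))
rr = sum(lo_edge <= s < hi_edge for s in pair_seps(rand))
w = dd / rr - 1.0             # w > 0: excess clustering at this separation
print(dd, rr, w)
```

    A BAO analysis repeats this over many separation bins and looks for a localized bump in w(θ).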

  1. Tract-Specific Analyses of Diffusion Tensor Imaging Show Widespread White Matter Compromise in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Shukla, Dinesh K.; Keehn, Brandon; Muller, Ralph-Axel

    2011-01-01

    Background: Previous diffusion tensor imaging (DTI) studies have shown white matter compromise in children and adults with autism spectrum disorder (ASD), which may relate to reduced connectivity and impaired function of distributed networks. However, tract-specific evidence remains limited in ASD. We applied tract-based spatial statistics (TBSS)…

  2. Learner Characteristics Predict Performance and Confidence in E-Learning: An Analysis of User Behavior and Self-Evaluation

    ERIC Educational Resources Information Center

    Jeske, Debora; Roßnagell, Christian Stamov; Backhaus, Joy

    2014-01-01

    We examined the role of learner characteristics as predictors of four aspects of e-learning performance, including knowledge test performance, learning confidence, learning efficiency, and navigational effectiveness. We used both self reports and log file records to compute the relevant statistics. Regression analyses showed that both need for…

  3. Parenting and Socialization of Only Children in Urban China: An Example of Authoritative Parenting

    ERIC Educational Resources Information Center

    Lu, Hui Jing; Chang, Lei

    2013-01-01

    The authors report a semistructured interview of 328 urban Chinese parents regarding their parenting beliefs and practices with respect to their only children. Statistical analyses of the coded parental interviews and peer nomination data from the children show none of the traditional Chinese parenting or child behaviors that have been widely…

  4. OdorMapComparer: an application for quantitative analyses and comparisons of fMRI brain odor maps.

    PubMed

    Liu, Nian; Xu, Fuqiang; Miller, Perry L; Shepherd, Gordon M

    2007-01-01

    Brain odor maps are reconstructed flat images that describe the spatial activity patterns in the glomerular layer of the olfactory bulbs in animals exposed to different odor stimuli. We have developed a software application, OdorMapComparer, to carry out quantitative analyses and comparisons of the fMRI odor maps. This application is an open-source windowed program that first loads the two odor map images being compared. It allows image transformations including scaling, flipping, rotating, and warping so that the two images can be appropriately aligned to each other. It performs simple subtraction, addition, and averaging of signals in the two images. It also provides comparative statistics including the normalized correlation (NC) and spatial correlation coefficient. Experimental studies showed that the rodent fMRI odor maps for aliphatic aldehydes displayed spatial activity patterns that are similar in gross outlines but somewhat different in specific subregions. Analyses with OdorMapComparer indicate that the similarity between odor maps decreases with increasing difference in the length of carbon chains. For example, the map of butanal is more closely related to that of pentanal (NC = 0.617) than to that of octanal (NC = 0.082), which is consistent with animal behavioral studies. The study also indicates that fMRI odor maps are statistically odor-specific and repeatable across both the intra- and intersubject trials. OdorMapComparer thus provides a tool for quantitative, statistical analyses and comparisons of fMRI odor maps in a fashion that is integrated with the overall odor mapping techniques.
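
    A normalized correlation between two maps can be sketched on flattened arrays (the tiny activity patterns below are invented, not real fMRI data or OdorMapComparer's exact formula):

```python
import math

def normalized_correlation(a, b):
    # Pearson-style correlation of two equal-length flattened maps.
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

butanal  = [0.1, 0.8, 0.9, 0.2, 0.1, 0.0]   # hypothetical activity patterns
pentanal = [0.2, 0.7, 0.8, 0.3, 0.1, 0.1]
octanal  = [0.9, 0.1, 0.0, 0.1, 0.7, 0.8]

nc_close = normalized_correlation(butanal, pentanal)
nc_far = normalized_correlation(butanal, octanal)
print(nc_close, nc_far)  # similar maps score near 1, dissimilar near or below 0
```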

  5. Combined Analyses of Bacterial, Fungal and Nematode Communities in Andosolic Agricultural Soils in Japan

    PubMed Central

    Bao, Zhihua; Ikunaga, Yoko; Matsushita, Yuko; Morimoto, Sho; Takada-Hoshino, Yuko; Okada, Hiroaki; Oba, Hirosuke; Takemoto, Shuhei; Niwa, Shigeru; Ohigashi, Kentaro; Suzuki, Chika; Nagaoka, Kazunari; Takenaka, Makoto; Urashima, Yasufumi; Sekiguchi, Hiroyuki; Kushida, Atsuhiko; Toyota, Koki; Saito, Masanori; Tsushima, Seiya

    2012-01-01

    We simultaneously examined the bacteria, fungi and nematode communities in Andosols from four agro-geographical sites in Japan using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and statistical analyses to test the effects of environmental factors including soil properties on these communities depending on geographical sites. Statistical analyses such as Principal component analysis (PCA) and Redundancy analysis (RDA) revealed that the compositions of the three soil biota communities were strongly affected by geographical sites, which were in turn strongly associated with soil characteristics such as total C (TC), total N (TN), C/N ratio and annual mean soil temperature (ST). In particular, the TC, TN and C/N ratio had stronger effects on bacterial and fungal communities than on the nematode community. Additionally, two-way cluster analysis using the combined DGGE profile also indicated that all soil samples were classified into four clusters corresponding to the four sites, showing high site specificity of soil samples, and all DNA bands were classified into four clusters, showing the coexistence of specific DGGE bands of bacteria, fungi and nematodes in Andosol fields. The results of this study suggest that geography relative to soil properties has a simultaneous impact on soil microbial and nematode community compositions. This is the first combined profile analysis of bacteria, fungi and nematodes at different sites with agricultural Andosols. PMID:22223474
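
    The ordination step (PCA) used in this record can be illustrated with a two-variable toy: two correlated, standardized soil variables (hypothetical TC and TN values), for which the 2×2 covariance eigenproblem is analytic.

```python
import math

# Standardized values of two soil variables across six hypothetical samples.
tc = [-1.2, -0.8, -0.1, 0.3, 0.7, 1.1]
tn = [-1.0, -0.9, 0.0, 0.2, 0.8, 0.9]

n = len(tc)
def cov(x, y):
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

sxx, syy, sxy = cov(tc, tc), cov(tn, tn), cov(tc, tn)

# Eigenvalues of [[sxx, sxy], [sxy, syy]]; the larger one is the variance
# captured by the first principal component.
tr, det = sxx + syy, sxx * syy - sxy ** 2
lam1 = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
lam2 = tr / 2 - math.sqrt(tr ** 2 / 4 - det)
explained = lam1 / (lam1 + lam2)
print(round(explained, 3))  # strongly correlated variables: PC1 dominates
```

    With more variables (TC, TN, C/N, ST, community profiles) the same idea is computed via an eigendecomposition of the full covariance matrix.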

  6. Combined analyses of bacterial, fungal and nematode communities in andosolic agricultural soils in Japan.

    PubMed

    Bao, Zhihua; Ikunaga, Yoko; Matsushita, Yuko; Morimoto, Sho; Takada-Hoshino, Yuko; Okada, Hiroaki; Oba, Hirosuke; Takemoto, Shuhei; Niwa, Shigeru; Ohigashi, Kentaro; Suzuki, Chika; Nagaoka, Kazunari; Takenaka, Makoto; Urashima, Yasufumi; Sekiguchi, Hiroyuki; Kushida, Atsuhiko; Toyota, Koki; Saito, Masanori; Tsushima, Seiya

    2012-01-01

    We simultaneously examined the bacteria, fungi and nematode communities in Andosols from four agro-geographical sites in Japan using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and statistical analyses to test the effects of environmental factors including soil properties on these communities depending on geographical sites. Statistical analyses such as Principal component analysis (PCA) and Redundancy analysis (RDA) revealed that the compositions of the three soil biota communities were strongly affected by geographical sites, which were in turn strongly associated with soil characteristics such as total C (TC), total N (TN), C/N ratio and annual mean soil temperature (ST). In particular, the TC, TN and C/N ratio had stronger effects on bacterial and fungal communities than on the nematode community. Additionally, two-way cluster analysis using the combined DGGE profile also indicated that all soil samples were classified into four clusters corresponding to the four sites, showing high site specificity of soil samples, and all DNA bands were classified into four clusters, showing the coexistence of specific DGGE bands of bacteria, fungi and nematodes in Andosol fields. The results of this study suggest that geography relative to soil properties has a simultaneous impact on soil microbial and nematode community compositions. This is the first combined profile analysis of bacteria, fungi and nematodes at different sites with agricultural Andosols.
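
    The ordination step used in this record (PCA; RDA is its constrained counterpart) reduces a samples-by-properties matrix to a few axes. A minimal numpy-only sketch on made-up values, not the study's soil data (the property names in the comment are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical matrix: 12 soil samples x 5 properties
# (e.g. TC, TN, C/N ratio, soil temperature, pH) -- illustrative values only.
X = rng.normal(size=(12, 5))

# Standardise each property, then take principal components via SVD
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U * s                       # sample coordinates on each component
loadings = Vt.T                      # property contributions to each component
explained = s**2 / np.sum(s**2)      # proportion of variance per component
```

    Plotting the first two columns of `scores` and colouring points by site is the usual way such analyses reveal site clustering; RDA would additionally constrain the axes to be linear combinations of environmental predictors such as TC, TN and ST.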

  7. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    N=8) and Results of Statistical Analyses for Impact Test Performed on Forefoot of Unworn Footwear ... A-2. Summary Statistics (N=8) and Results of ... on Forefoot of Worn Footwear ... B-2. Summary Statistics (N=4) and Results of Statistical Analyses for Impact ... used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was

  8. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
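
    The Q and I² statistics reviewed in this record have standard closed forms; a minimal sketch (function name `q_and_i2` is my own) with invented effect sizes and variances, not data from the 18 IPD meta-analyses:

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q and the I^2 statistic for a fixed-effect meta-analysis.
    effects: per-study effect estimates; variances: their sampling variances."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    Q = np.sum(w * (y - pooled) ** 2)              # Cochran's Q
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return pooled, Q, I2

# Three invented studies with identical variances but spread-out effects
pooled, Q, I2 = q_and_i2([0.1, 0.5, 0.9], [0.01, 0.01, 0.01])
```

    I² expresses the percentage of total variation across studies attributable to heterogeneity rather than chance; the 'generalised' Q statistic discussed in the record is, roughly speaking, the same sum computed with weights that include a between-study variance term.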

  9. Gender stereotyping in television advertisements: a study of French and Danish television.

    PubMed

    Furnham, A; Babitzkow, M; Uguccioni, S

    2000-02-01

    Two similar, but not identical, content analyses of the portrayals of men and women in French and Danish television advertisements are reported. By partially replicating and extending past investigations conducted in America, Australia, Britain, Hong Kong, Indonesia, Italy, Kenya, and New Zealand, it was predicted that there would be more gender stereotyping in French television advertisements and less gender stereotyping in Danish television advertisements. In the first study, 165 French television advertisements were analyzed by following established coding categories (A. Furnham & E. Skae, 1997; L. Z. McArthur & B. G. Resko, 1975). Contrary to prediction, the results showed that traditional gender role portrayal on French television was no different from that found in other countries. Separate statistical analyses were carried out for visually versus aurally classified central figures, yet this yielded relatively few significant differences. In the second study, a sample of 151 Danish advertisements was analyzed; results showed that Danish television was generally less gender stereotypic than French television in its portrayal of women. Exactly half (5) of the coding categories showed significant differences. Finally, an international statistical comparison between these two studies and similar research in Australia, Britain, and Italy was carried out. The methodological implications of these results are discussed as well as the theoretical issues arising from other studies of this sort.

  10. Production and characterization of curcumin microcrystals and evaluation of the antimicrobial and sensory aspects in minimally processed carrots.

    PubMed

    Silva, Anderson Clayton da; Santos, Priscila Dayane de Freitas; Palazzi, Nicole Campezato; Leimann, Fernanda Vitória; Fuchs, Renata Hernandez Barros; Bracht, Lívia; Gonçalves, Odinei Hess

    2017-05-24

    Nontoxic preservative agents are in demand by the food industry due to consumers' concerns about synthetic preservatives, especially in minimally processed food. The antimicrobial activity of curcumin, a natural phenolic compound, has been extensively investigated, but hydrophobicity is an issue when applying curcumin to foodstuff. The objective of this work was to evaluate curcumin microcrystals as an antimicrobial agent in minimally processed carrots. The antimicrobial activity of curcumin microcrystals was evaluated in vitro against Gram-positive (Bacillus cereus and Staphylococcus aureus) and Gram-negative (Escherichia coli and Pseudomonas aeruginosa) microorganisms, showing a statistically significant (p < 0.05) decrease in the minimum inhibitory concentration compared to in natura, pristine curcumin. Curcumin microcrystals were effective in inhibiting psychrotrophic and mesophile microorganisms in minimally processed carrots. Sensory analyses were carried out showing no significant difference (p > 0.05) between curcumin microcrystal-treated carrots and non-treated carrots in triangular and tetrahedral discriminative tests. Sensory tests also showed that curcumin microcrystals could be added as a natural preservative in minimally processed carrots without causing noticeable differences detectable by the consumer. One may conclude that the analyses of the minimally processed carrots demonstrated that curcumin microcrystals are a suitable natural compound to inhibit the natural microbiota of carrots from a statistical point of view.

  11. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China's regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
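
    The general idea, testing residual serial correlation with a Moran-type coefficient built from a standardized residual vector and a normalized weight matrix, can be sketched as follows. This is the generic form (with standardised residuals and weights summing to one, Moran's I reduces to e'We), not the paper's exact statistics or its 29-region sample; `residual_moran` is my own name:

```python
import numpy as np

def residual_moran(y, X, W):
    """Moran-type serial correlation coefficient for least-squares residuals.
    With a standardised residual vector e (so e @ e = n) and weights normalised
    to sum to one, Moran's I reduces to e' W e."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    e = y - X1 @ beta
    e = (e - e.mean()) / e.std()     # standardised residual vector
    Wn = W / W.sum()                 # normalised spatial weight matrix
    return float(e @ Wn @ e)

# Smooth leftover structure along a chain of 20 sites -> positive autocorrelation
x = np.arange(20.0)
y = 0.5 * x + np.sin(x / 2.0)        # linear trend plus a smooth residual signal
W = np.zeros((20, 20))
for i in range(19):                  # chain adjacency: site i borders site i+1
    W[i, i + 1] = W[i + 1, i] = 1.0
I = residual_moran(y, x, W)
```

    A clearly positive value here flags the smooth (serially correlated) residual signal that ordinary least-squares inference would ignore.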

  12. Evaluating Intervention Programs with a Pretest-Posttest Design: A Structural Equation Modeling Approach

    PubMed Central

    Alessandri, Guido; Zuffianò, Antonio; Perinelli, Enrico

    2017-01-01

    A common situation in the evaluation of intervention programs is that the researcher can rely on only two waves of data (i.e., pretest and posttest), which strongly constrains the choice of statistical analyses that can be conducted. Indeed, the evaluation of intervention programs based on a pretest-posttest design has usually been carried out using classic statistical tests, such as family-wise ANOVA analyses, which are strongly limited in that they analyze intervention effects exclusively at the group level. In this article, we showed how second-order multiple-group latent curve modeling (SO-MG-LCM) can be a useful methodological tool for a more realistic and informative assessment of intervention programs with two waves of data. We offered a practical step-by-step guide to properly implement this methodology, and we outlined the advantages of the LCM approach over classic ANOVA analyses. Furthermore, we provided a real-data example by re-analyzing the implementation of the Young Prosocial Animation, a universal intervention program aimed at promoting prosociality among youth. In conclusion, although previous studies pointed to the usefulness of MG-LCM for evaluating intervention programs (Muthén and Curran, 1997; Curran and Muthén, 1999), no previous study showed that this approach can be used even in pretest-posttest (i.e., two-time-point) designs. Given the advantages of latent variable analyses in examining differences in interindividual and intraindividual changes (McArdle, 2009), the methodological and substantive implications of our proposed approach are discussed. PMID:28303110

  13. Evaluating Intervention Programs with a Pretest-Posttest Design: A Structural Equation Modeling Approach.

    PubMed

    Alessandri, Guido; Zuffianò, Antonio; Perinelli, Enrico

    2017-01-01

    A common situation in the evaluation of intervention programs is that the researcher can rely on only two waves of data (i.e., pretest and posttest), which strongly constrains the choice of statistical analyses that can be conducted. Indeed, the evaluation of intervention programs based on a pretest-posttest design has usually been carried out using classic statistical tests, such as family-wise ANOVA analyses, which are strongly limited in that they analyze intervention effects exclusively at the group level. In this article, we showed how second-order multiple-group latent curve modeling (SO-MG-LCM) can be a useful methodological tool for a more realistic and informative assessment of intervention programs with two waves of data. We offered a practical step-by-step guide to properly implement this methodology, and we outlined the advantages of the LCM approach over classic ANOVA analyses. Furthermore, we provided a real-data example by re-analyzing the implementation of the Young Prosocial Animation, a universal intervention program aimed at promoting prosociality among youth. In conclusion, although previous studies pointed to the usefulness of MG-LCM for evaluating intervention programs (Muthén and Curran, 1997; Curran and Muthén, 1999), no previous study showed that this approach can be used even in pretest-posttest (i.e., two-time-point) designs. Given the advantages of latent variable analyses in examining differences in interindividual and intraindividual changes (McArdle, 2009), the methodological and substantive implications of our proposed approach are discussed.

  14. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    Objectives: To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Design: Statistical review. Data sources: Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Review methods: Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. Results: The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers.
Conclusions: The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
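
    An a priori power analysis of the kind this review found in only 28% of papers takes a few lines. The sketch below (function name `n_per_group` is my own) uses the standard normal approximation for a two-sided two-sample t-test; the exact t-based answer is slightly larger (commonly quoted as 64 per group for a medium effect):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided two-sample t-test,
    using the standard normal approximation, for standardised effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Cohen's conventional small / medium / large effect sizes
sizes = {d: n_per_group(d) for d in (0.2, 0.5, 0.8)}
```

    The steep growth of the required n as d shrinks is exactly why the review finds median power of only .40 for small effects.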

  15. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015

  16. Meta-analyses on intra-aortic balloon pump in cardiogenic shock complicating acute myocardial infarction may provide biased results.

    PubMed

    Acconcia, M C; Caretta, Q; Romeo, F; Borzi, M; Perrone, M A; Sergi, D; Chiarotti, F; Calabrese, C M; Sili Scavalli, A; Gaudio, C

    2018-04-01

    Intra-aortic balloon pump (IABP) is the device most commonly investigated in patients with cardiogenic shock (CS) complicating acute myocardial infarction (AMI). Recently, meta-analyses on this topic have shown opposite results: some complied with the current guideline recommendations, while others did not, due to the presence of bias. We investigated the reasons for the discrepancy among meta-analyses and the strategies employed to avoid the potential sources of bias. Scientific databases were searched for meta-analyses of IABP support in AMI complicated by CS. The presence of clinical diversity, methodological diversity and statistical heterogeneity was analyzed. When we found clinical or methodological diversity, we reanalyzed the data by comparing the patients selected for homogeneous groups. When the fixed effect model was employed despite the presence of statistical heterogeneity, the meta-analysis was repeated adopting the random effect model, with the same estimator used in the original meta-analysis. Twelve meta-analyses were selected. Six meta-analyses of randomized controlled trials (RCTs) were inconclusive because they were underpowered to detect the IABP effect. Five included both RCTs and observational studies (Obs), and one included only Obs. Some meta-analyses of RCTs and Obs had biased results due to the presence of clinical and/or methodological diversity. The reanalysis of the data, reallocated into homogeneous groups, was no longer in conflict with guideline recommendations. Meta-analyses performed without controlling for clinical and/or methodological diversity send a confounding message that runs against good clinical practice. The reanalysis of the data demonstrates the validity of the current guideline recommendations for clinical decision making on IABP support in AMI complicated by CS.

  17. Performance of new gellan gum hydrogels combined with human articular chondrocytes for cartilage regeneration when subcutaneously implanted in nude mice.

    PubMed

    Oliveira, J T; Santos, T C; Martins, L; Silva, M A; Marques, A P; Castro, A G; Neves, N M; Reis, R L

    2009-10-01

    Gellan gum is a polysaccharide that has been recently proposed by our group for cartilage tissue-engineering applications. It is commonly used in the food and pharmaceutical industry and has the ability to form stable gels without the use of harsh reagents. Gellan gum can function as a minimally invasive injectable system, gelling inside the body in situ under physiological conditions and efficiently adapting to the defect site. In this work, gellan gum hydrogels were combined with human articular chondrocytes (hACs) and were subcutaneously implanted in nude mice for 4 weeks. The implants were collected for histological (haematoxylin and eosin and Alcian blue staining), biochemical [dimethylmethylene blue (GAG) assay], molecular (real-time PCR analyses for collagen types I, II and X, aggrecan) and immunological analyses (immunolocalization of collagen types I and II). The results showed a homogeneous cell distribution and the typical round-shaped morphology of the chondrocytes within the matrix upon implantation. Proteoglycan synthesis was detected by Alcian blue staining, and a statistically significant increase in proteoglycan content from 1 to 4 weeks of implantation was measured with the GAG assay. Real-time PCR analyses showed a statistically significant upregulation of collagen type II and aggrecan levels in the same periods. The immunological assays suggest deposition of collagen type II along with some collagen type I. The overall data show that gellan gum hydrogels adequately support the growth and ECM deposition of human articular chondrocytes when implanted subcutaneously in nude mice. Copyright (c) 2009 John Wiley & Sons, Ltd.

  18. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduced an alternative approach for analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, when given flow amounts are accumulated in shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
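
    The inter-amount sampling described above can be sketched directly: instead of summarising flow per fixed time window, record the time needed to accumulate each successive fixed flow amount. A minimal sketch with linear interpolation (my own function name and defaults, not the paper's implementation or basins):

```python
import numpy as np

def inter_amount_times(flow, dt=1.0, amount=None):
    """Inter-amount times: the time taken to accumulate each successive fixed
    flow amount. flow holds non-negative rates sampled every dt; amount defaults
    to the mean per-step volume, so the mean IAT is about one time step."""
    flow = np.asarray(flow, dtype=float)
    vol = np.cumsum(flow) * dt                    # cumulative volume over time
    if amount is None:
        amount = vol[-1] / len(flow)
    levels = np.arange(amount, vol[-1], amount)   # successive volume thresholds
    t = np.arange(1, len(flow) + 1) * dt
    crossings = np.interp(levels, vol, t)         # when each threshold is reached
    return np.diff(crossings, prepend=0.0)
```

    Constant flow gives constant IATs; during high flows the thresholds are reached quickly, so peaks show up as many short IATs, which is why the IAT distribution is more informative about the high-flow tail.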

  19. Differentiation of women with premenstrual dysphoric disorder, recurrent brief depression, and healthy controls by daily mood rating dynamics.

    PubMed

    Pincus, Steven M; Schmidt, Peter J; Palladino-Negro, Paula; Rubinow, David R

    2008-04-01

    Enhanced statistical characterization of mood-rating data holds the potential to more precisely classify and sub-classify recurrent mood disorders like premenstrual dysphoric disorder (PMDD) and recurrent brief depressive disorder (RBD). We applied several complementary statistical methods to differentiate mood rating dynamics among women with PMDD, RBD, and normal controls (NC). We compared three subgroups of women: NC (n=8); PMDD (n=15); and RBD (n=9) on the basis of daily self-ratings of sadness, with study lengths between 50 and 120 days. We analyzed mean levels; overall variability, SD; sequential irregularity, approximate entropy (ApEn); and a quantification of the extent of brief and staccato dynamics, denoted 'Spikiness'. For each of SD, irregularity (ApEn), and Spikiness, we showed highly significant subgroup differences (ANOVA p < 0.001 for each statistic); additionally, many paired subgroup comparisons showed highly significant differences. In contrast, mean levels were indistinct among the subgroups. For SD, normal controls had much smaller levels than the other subgroups, with RBD intermediate. ApEn showed PMDD to be significantly more regular than the other subgroups. Spikiness showed NC and RBD data sets to be much more staccato than their PMDD counterparts, and appears to suitably characterize the defining feature of RBD dynamics. Compound criteria based on these statistical measures discriminated diagnostic subgroups with high sensitivity and specificity. Taken together, the statistical suite provides well-defined specifications of each subgroup. This can facilitate accurate diagnosis, and augment the prediction and evaluation of response to treatment. The statistical methodologies have broad and direct applicability to behavioral studies for many psychiatric disorders, and indeed to similar analyses of associated biological signals across multiple axes.
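
    Approximate entropy, the irregularity statistic used in this record, has a standard definition (Pincus). A minimal sketch, with the usual tolerance r = 0.2 × SD; the O(n²) pairwise computation is fine for series of 50-120 daily ratings:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series, following Pincus'
    definition; r defaults to the common choice of 0.2 x series SD."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])   # length-mm templates
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)                       # template match rates
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

    A perfectly regular series (e.g. strict alternation) scores near zero, while an irregular series of the same mean and variance scores much higher: the kind of contrast the record reports between the more regular PMDD ratings and the other subgroups.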

  20. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use.

  1. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160
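
    AutoVAR itself is a web application; the core estimation step it automates, fitting a vector autoregressive model by least squares, can be sketched in a few lines. This is a generic VAR(1) fit on simulated data, not AutoVAR's code:

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of Y_t = c + A @ Y_{t-1} + e_t for a (T, k) series Y."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T          # intercept c (k,), coefficient matrix A (k, k)

# Recover known coefficients from a simulated two-variable EMA-like series
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.3],
                   [0.0, 0.4]])
Y = np.zeros((2000, 2))
for t in range(1, len(Y)):
    Y[t] = A_true @ Y[t - 1] + rng.normal(size=2)
c_hat, A_hat = fit_var1(Y)
```

    In a Granger-style check, the off-diagonal entries of A are what is tested: here variable 2 drives variable 1 (A_true[0, 1] is nonzero) but not the reverse.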

  2. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. 
The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
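
    As a minimal sketch of the M/A transformation described in this record (not the authors' full empirical Bayes pipeline), the per-spot M- and A-values can be computed directly from the two channel intensities; the `red`/`green` arrays here are hypothetical example data:

```python
import math

def m_a_values(red, green):
    """Per-spot M-values (log-ratios) and A-values (average log-intensities)
    from two-channel intensities: M = log2(R/G), A = 0.5 * log2(R*G)."""
    m = [math.log2(r / g) for r, g in zip(red, green)]
    a = [0.5 * math.log2(r * g) for r, g in zip(red, green)]
    return m, a

# Hypothetical intensities for three spots.
red = [1200.0, 850.0, 2400.0]
green = [600.0, 900.0, 2300.0]
m, a = m_a_values(red, green)
```

    The log-ratio analysis discussed above works only with `m`; the separate-channel approach also exploits the information in `a`.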

  3. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  4. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  5. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula

    PubMed Central

    Giordano, Bruno L.; Kayser, Christoph; Rousselet, Guillaume A.; Gross, Joachim; Schyns, Philippe G.

    2016-01-01

    Abstract We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open‐source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541–1573, 2017. © 2016 Wiley Periodicals, Inc. PMID:27860095

  6. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data, to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests, and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets, using, for the first time, generalised additive mixed models.

  7. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoints of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.

  8. Within-individual versus between-individual predictors of antisocial behaviour: A longitudinal study of young people in Victoria, Australia

    PubMed Central

    Hemphill, Sheryl A; Heerde, Jessica A; Herrenkohl, Todd I; Farrington, David P

    2016-01-01

    In an influential 2002 paper, Farrington and colleagues argued that to understand ‘causes’ of delinquency, within-individual analyses of longitudinal data are required (compared to the vast majority of analyses that have focused on between-individual differences). The current paper aimed to complete similar analyses to those conducted by Farrington and colleagues by focusing on the developmental correlates and risk factors for antisocial behaviour and by comparing within-individual and between-individual predictors of antisocial behaviour using data from the youngest Victorian cohort of the International Youth Development Study, a state-wide representative sample of 927 students from Victoria, Australia. Data analysed in the current paper are from participants in Year 6 (age 11–12 years) in 2003 to Year 11 (age 16–17 years) in 2008 (N = 791; 85% retention) with data collected almost annually. Participants completed a self-report survey of risk and protective factors and antisocial behaviour. Complete data were available for 563 participants. The results of this study showed all but one of the forward- (family conflict) and backward-lagged (low attachment to parents) correlations were statistically significant for the within-individual analyses compared with all analyses being statistically significant for the between-individual analyses. In general, between-individual correlations were greater in magnitude than within-individual correlations. Given that forward-lagged within-individual correlations provide more salient measures of causes of delinquency, it is important that longitudinal studies with multi-wave data analyse and report their data using both between-individual and within-individual correlations to inform current prevention and early intervention programs seeking to reduce rates of antisocial behaviour. PMID:28123186

  9. Effects of Psychological and Social Work Factors on Self-Reported Sleep Disturbance and Difficulties Initiating Sleep.

    PubMed

    Vleeshouwers, Jolien; Knardahl, Stein; Christensen, Jan Olav

    2016-04-01

    This prospective cohort study examined previously underexplored relations between psychological/social work factors and troubled sleep in order to provide practical information about specific, modifiable factors at work. A comprehensive evaluation of a range of psychological/social work factors was obtained by several designs; i.e., cross-sectional analyses at baseline and follow-up, prospective analyses with baseline predictors (T1), prospective analyses with average exposure across waves as predictor ([T1 + T2] / 2), and prospective analyses with change in exposure from baseline to follow-up as predictor. Participants consisted of a sample of Norwegian employees from a broad spectrum of occupations, who completed a questionnaire at two points in time, approximately two years apart. Cross-sectional analyses at T1 comprised 7,459 participants; cross-sectional analyses at T2 included 6,688 participants. Prospective analyses comprised a sample of 5,070 participants who responded at both T1 and T2. Univariable and multivariable ordinal logistic regressions were performed. Thirteen psychological/social work factors and two aspects of troubled sleep, namely difficulties initiating sleep and disturbed sleep, were studied. Ordinal logistic regressions revealed statistically significant associations for all psychological and social work factors in at least one of the analyses. Psychological and social work factors predicted sleep problems in the short term as well as the long term. All work factors investigated showed statistically significant associations with both sleep items; however, quantitative job demands, decision control, role conflict, and support from superiors were the most robust predictors and may therefore be suitable targets of interventions aimed at improving employee sleep. © 2016 Associated Professional Sleep Societies, LLC.

  10. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
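
    The record's algorithm additionally exploits the spatial distribution of gauges; as a minimal stand-in for the nonparametric outlier screening it describes, a modified z-score based on the median and MAD can flag suspect readings among a set of gauge values (the 3.5 threshold is a conventional choice, not taken from the record):

```python
import statistics

def flag_outliers(readings, threshold=3.5):
    """Flag readings as suspect using the modified z-score
    0.6745 * (x - median) / MAD, a simple nonparametric outlier screen."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        return [False] * len(readings)  # all readings (nearly) identical
    return [abs(0.6745 * (r - med) / mad) > threshold for r in readings]
```

    For example, among the hypothetical readings `[10, 11, 9, 10, 55, 10]` only the 55 mm value would be flagged.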

  11. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  12. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting.
A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  13. The Effect of Folate and Folate Plus Zinc Supplementation on Endocrine Parameters and Sperm Characteristics in Sub-Fertile Men: A Systematic Review and Meta-Analysis.

    PubMed

    Irani, Morvarid; Amirian, Malihe; Sadeghi, Ramin; Lez, Justine Le; Latifnejad Roudsari, Robab

    2017-08-29

    To evaluate the effect of folate and folate plus zinc supplementation on endocrine parameters and sperm characteristics in sub-fertile men, we conducted a systematic review and meta-analysis. Electronic databases of Medline, Scopus, Google Scholar and Persian databases (SID, Iran medex, Magiran, Medlib, Iran doc) were searched from 1966 to December 2016 using a set of relevant keywords including "folate or folic acid AND (infertility, infertile, sterility)". All available randomized controlled trials (RCTs), conducted on a sample of sub-fertile men with semen analyses, who took oral folic acid or folate plus zinc, were included. Data collected included endocrine parameters and sperm characteristics. Statistical analyses were done by Comprehensive Meta-analysis Version 2. In total, seven studies were included. Six studies had sufficient data for meta-analysis. Sperm concentration was statistically higher in men supplemented with folate than with placebo (P < .001). However, folate supplementation alone did not seem to be more effective than the placebo on sperm morphology (P = .056) and motility (P = .652). Folate plus zinc supplementation did not show any statistically different effect on serum testosterone (P = .86), inhibin B (P = .84), FSH (P = .054), and sperm motility (P = .169) as compared to the placebo. Yet, folate plus zinc showed a statistically greater effect on sperm concentration (P < .001), morphology (P < .001), and serum folate level (P < .001) as compared to placebo. Folate plus zinc supplementation has a positive effect on sperm characteristics in sub-fertile men. However, these results should be interpreted with caution due to the important heterogeneity of the studies included in this meta-analysis. Further trials are still needed to confirm the current findings.
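
    The pooling step underlying a meta-analysis like this one can be illustrated with fixed-effect inverse-variance weighting (a generic sketch; the authors used Comprehensive Meta-analysis Version 2, and the effect sizes below are hypothetical):

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its
    standard error: the basic computation underlying a meta-analysis."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Two hypothetical trials with equal variances pool to the simple mean.
pooled, se = fixed_effect_pool([0.5, 0.7], [0.04, 0.04])
```

    Random-effects pooling (more appropriate given the heterogeneity the authors note) adds a between-study variance component to each weight.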

  14. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  15. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  16. Effects of Argumentation on Group Micro-Creativity: Statistical Discourse Analyses of Algebra Students' Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Chiu, Ming Ming

    2008-01-01

    The micro-time context of group processes (such as argumentation) can affect a group's micro-creativity (new ideas). Eighty high school students worked in groups of four on an algebra problem. Groups with higher mathematics grades showed greater micro-creativity, and both were linked to better problem solving outcomes. Dynamic multilevel analyses…

  17. Correlation-based network analysis of metabolite and enzyme profiles reveals a role of citrate biosynthesis in modulating N and C metabolism in zea mays

    USDA-ARS?s Scientific Manuscript database

    To investigate the natural variability of leaf metabolism and enzymatic activity in a maize inbred population, statistical and network analyses were employed on metabolite and enzyme profiles. The test of coefficient of variation showed that sugars and amino acids displayed opposite trends in their ...

  18. Classroom Assessments of 6000 Teachers: What Do the Results Show about the Effectiveness of Teaching and Learning?

    ERIC Educational Resources Information Center

    Hill, Flo H.; And Others

    This paper presents the results of a series of summary analyses of descriptive statistics concerning 5,720 Louisiana teachers who were assessed with the System for Teaching and Learning Assessment and Review (STAR)--a comprehensive on-the-job statewide teacher assessment system--during the second pilot year (1989-90). Data were collected by about…

  19. Second Chance Education Matters! Income Trajectories of Poorly Educated Non-Nordics in Sweden

    ERIC Educational Resources Information Center

    Nordlund, Madelene; Bonfanti, Sara; Strandh, Mattias

    2015-01-01

    In this study we examine the long-term impact of second chance education (SCE) on incomes of poorly educated individuals who live in Sweden but were not born in a Nordic country, using data on income changes from 1992 to 2003 compiled by Statistics Sweden. Ordinary Least Squares regression analyses show that participation in SCE increased the work…

  20. Revealing Future Research Capacity from an Analysis of a National Database of Discipline-Coded Australian PhD Thesis Records

    ERIC Educational Resources Information Center

    Pittayachawan, Siddhi; Macauley, Peter; Evans, Terry

    2016-01-01

    This article reports how statistical analyses of PhD thesis records can reveal future research capacities for disciplines beyond their primary fields. The previous research showed that most theses contributed to and/or used methodologies from more than one discipline. In Australia, there was a concern for declining mathematical teaching and…

  1. Detecting trend on ecological river status - how to deal with short incomplete bioindicator time series? Methodological and operational issues

    NASA Astrophysics Data System (ADS)

    Cernesson, Flavie; Tournoud, Marie-George; Lalande, Nathalie

    2018-06-01

    Among the various parameters monitored in river monitoring networks, bioindicators provide very informative data. Analysing time variations in bioindicator data is tricky for water managers because the data sets are often short, irregular, and non-normally distributed. It is also a challenging methodological issue for scientists, as in the Saône basin (30,000 km², France), where, between 1998 and 2010, among 812 IBGN (French macroinvertebrate bioindicator) monitoring stations, only 71 time series had more than 10 data values and could be studied here. Combining various analytical tools (three parametric and non-parametric statistical tests plus a graphical analysis), 45 IBGN time series were classified as stationary and 26 as non-stationary (only one of which showed a degradation). Series from sampling stations located within the same hydroecoregion showed similar trends, while river size classes seemed to be non-significant in explaining temporal trends. So, from a methodological point of view, combining statistical tests and graphical analysis is a relevant option when striving to improve trend detection. Moreover, it was possible to propose a way to summarise series in order to analyse links between ecological river quality indicators and land use stressors.
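
    One standard non-parametric trend test of the kind this record combines is the Mann-Kendall test; its S statistic can be sketched as follows (a generic illustration, not the authors' exact tool chain):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: the sum of signs of all forward pairwise
    differences. Positive values suggest an increasing trend, negative
    values a decreasing one; significance requires comparing S to its
    null variance."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s
```

    A strictly monotone series of length n yields the extreme value ±n(n-1)/2, e.g. +6 for four increasing values.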

  2. Analysis of ground-water data for selected wells near Holloman Air Force Base, New Mexico, 1950-95

    USGS Publications Warehouse

    Huff, G.F.

    1996-01-01

    Ground-water-level, ground-water-withdrawal, and ground-water-quality data were evaluated for trends. Holloman Air Force Base is located in the west-central part of Otero County, New Mexico. Ground-water-data analyses include assembly and inspection of U.S. Geological Survey and Holloman Air Force Base data, including ground-water-level data for public-supply and observation wells and withdrawal and water-quality data for public-supply wells in the area. Well Douglas 4 shows a statistically significant decreasing trend in water levels for 1972-86 and a statistically significant increasing trend in water levels for 1986-90. Water levels in wells San Andres 5 and San Andres 6 show statistically significant decreasing trends for 1972-93 and 1981-89, respectively. A mixture of statistically significant increasing trends, statistically significant decreasing trends, and lack of statistically significant trends over periods ranging from the early 1970's to the early 1990's are indicated for the Boles wells and wells near the Boles wells. Well Boles 5 shows a statistically significant increasing trend in water levels for 1981-90. Well Boles 5 and well 17S.09E.25.343 show no statistically significant trends in water levels for 1990-93 and 1988-93, respectively. For 1986-93, well Frenchy 1 shows a statistically significant decreasing trend in water levels. Ground-water withdrawal from the San Andres and Douglas wells regularly exceeded estimated ground-water recharge from San Andres Canyon for 1963-87. For 1951-57 and 1960-86, ground-water withdrawal from the Boles wells regularly exceeded total estimated ground-water recharge from Mule, Arrow, and Lead Canyons. Ground-water withdrawal from the San Andres and Douglas wells and from the Boles wells nearly equaled estimated ground-water recharge for 1989-93 and 1986-93, respectively.
    For 1987-93, ground-water withdrawal from the Escondido well regularly exceeded estimated ground-water recharge from Escondido Canyon, and ground-water withdrawal from the Frenchy wells regularly exceeded total estimated ground-water recharge from Dog and Deadman Canyons. Water-quality samples were collected from selected Douglas, San Andres, and Boles public-supply wells from December 1994 to February 1995. Concentrations of dissolved nitrate show the most consistent increases between current and historical data. Current concentrations of dissolved nitrate are greater than historical concentrations in 7 of 10 wells.

  3. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    PubMed Central

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the researchers' expectations. Most detected genetic associations have not reached a genome-wide level of statistical significance, and suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of statistical methods used in data analyses, the genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants.
    We used a stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging and on the dynamic interaction between aging and longevity. We investigated properties of genes related to selected variants and their roles in signaling and metabolic pathways. Results We showed that the use of different QC procedures results in different sets of genetic variants associated with life span. We selected 24 genetic variants negatively associated with life span. We showed that the joint analyses of genetic data at the time of bio-specimen collection and follow-up data substantially improved the significance of associations of the selected 24 SNPs with life span. We also showed that aging-related changes in physiological variables and in hidden biomarkers of aging differ between the groups of carriers and non-carriers of selected variants. Conclusions The results of these analyses demonstrated the benefits of using biodemographic models and methods in genetic association studies of these traits. Our findings showed that the absence of a large number of genetic variants with deleterious effects may make a substantial contribution to exceptional longevity. These effects are dynamically mediated by a number of physiological variables and hidden biomarkers of aging. The results of this research demonstrated the benefits of using integrative statistical models of mortality risks in genetic studies of human aging and longevity. PMID:27773987

  4. Effects of Heterogeneity on Spatial Pattern Analysis of Wild Pistachio Trees in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Rezayan, F.

    2014-10-01

    Vegetation heterogeneity biases second-order summary statistics, e.g., Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order investigation based on Ripley's K-function and related statistics (i.e., the L-function and pair correlation function g) is widely used in ecology to develop hypotheses on underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate the effects of the underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on the second-order summary statistics of point pattern analysis in a part of the Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars province, Iran. Three commonly used second-order summary statistics (i.e., the K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of the wild pistachio trees biased the homogeneous forms of the K-, L-, and g-functions, indicating stronger aggregation of the trees at scales of 0-50 m than actually existed, and suggesting aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggest applying inhomogeneous summary statistics with related null models for spatial pattern analysis of heterogeneous vegetation.
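
    A naive estimate of Ripley's K-function, the summary statistic at the centre of this record, can be sketched as follows (homogeneous form, no edge correction; the example points are hypothetical):

```python
import math

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K(r) for a point pattern in a window of
    the given area: the mean number of further points within distance r of
    a typical point, divided by the intensity. Under complete spatial
    randomness K(r) is approximately pi * r**2; larger values suggest
    clustering."""
    n = len(points)
    intensity = n / area
    count = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and math.dist(points[i], points[j]) <= r
    )
    return count / (n * intensity)

# Hypothetical pattern: a tight pair plus a distant point in a 10 x 10 window.
k = ripley_k([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)], r=2.0, area=100.0)
```

    Here K(2) exceeds pi * 2**2, flagging apparent clustering; as the record shows, such apparent clustering can also arise purely from heterogeneity in the intensity.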

  5. A statistical anomaly indicates symbiotic origins of eukaryotic membranes

    PubMed Central

    Bansal, Suneyna; Mittal, Aditya

    2015-01-01

    Compositional analyses of nucleic acids and proteins have shed light on possible origins of living cells. In this work, rigorous compositional analyses of ∼5000 plasma membrane lipid constituents of 273 species in the three life domains (archaea, eubacteria, and eukaryotes) revealed a remarkable statistical paradox, indicating symbiotic origins of eukaryotic cells involving eubacteria. For lipids common to plasma membranes of the three domains, the number of carbon atoms in eubacteria was found to be similar to that in eukaryotes. However, mutually exclusive subsets of same data show exactly the opposite—the number of carbon atoms in lipids of eukaryotes was higher than in eubacteria. This statistical paradox, called Simpson's paradox, was absent for lipids in archaea and for lipids not common to plasma membranes of the three domains. This indicates the presence of interaction(s) and/or association(s) in lipids forming plasma membranes of eubacteria and eukaryotes but not for those in archaea. Further inspection of membrane lipid structures affecting physicochemical properties of plasma membranes provides the first evidence (to our knowledge) on the symbiotic origins of eukaryotic cells based on the “third front” (i.e., lipids) in addition to the growing compositional data from nucleic acids and proteins. PMID:25631820
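
    Simpson's paradox, the statistical anomaly this record reports, is easy to reproduce with toy numbers: each subgroup comparison points one way while the pooled comparison reverses (the carbon counts below are hypothetical, not the paper's data):

```python
def mean(values):
    return sum(values) / len(values)

# Hypothetical carbon counts for two groups across two lipid subsets.
# Within each subset group B's mean exceeds group A's...
a1, b1 = [30, 32], [33]   # subset 1: mean(B) = 33 > mean(A) = 31
a2, b2 = [10], [12, 14]   # subset 2: mean(B) = 13 > mean(A) = 10
within = mean(b1) > mean(a1) and mean(b2) > mean(a2)
# ...yet pooling the subsets reverses the ordering (24.0 vs ~19.67),
# because the groups are unevenly distributed across subsets.
pooled_reversed = mean(a1 + a2) > mean(b1 + b2)
```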

  6. How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t)

    PubMed Central

    Luck, Steven J.; Gaspelin, Nicholas

    2016-01-01

    Event-related potential (ERP) experiments generate massive data sets, often containing thousands of values for each participant, even after averaging. The richness of these data sets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand average data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multi-factor statistical analyses. Re-analyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant-but-bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. PMID:28000253
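    The inflation described above follows from the basic familywise error formula for independent tests; a minimal sketch (the 7- and 14-test counts are illustrative, e.g. the seven main effects and interactions of a three-factor ANOVA):

    ```python
    def familywise_error_rate(alpha, n_tests):
        """Probability of at least one significant-but-bogus effect across
        n_tests independent tests when the null is true for all of them."""
        return 1.0 - (1.0 - alpha) ** n_tests

    # One 2x2x3 ANOVA yields 7 effects; two such analyses yield 14 tests,
    # and the chance of at least one bogus hit already passes 50%.
    print(round(familywise_error_rate(0.05, 7), 3))   # 0.302
    print(round(familywise_error_rate(0.05, 14), 3))  # 0.512
    ```
    
    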

  7. Statistical Analysis of Categorical Time Series of Atmospheric Elementary Circulation Mechanisms - Dzerdzeevski Classification for the Northern Hemisphere

    PubMed Central

    Brenčič, Mihael

    2016-01-01

    Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899–2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECM), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and periodicity effects were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods in which no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECMs has changed significantly. This change results from a change in the frequency of ECM categories; before 1986 the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to these categorical climatic time series opens up potential new insights for future studies of climate variability and change. PMID:27116375
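    The empirical marginal probabilities and the mode of a categorical time series are simple to compute; a minimal sketch with an invented daily ECM series (not Dzerdzeevski data):

    ```python
    from collections import Counter

    def marginal_probabilities(series):
        """Empirical marginal probability of each category in a categorical
        time series (fraction of days each label was assigned)."""
        counts = Counter(series)
        n = len(series)
        return {cat: c / n for cat, c in counts.items()}

    def mode_category(series):
        """Most frequent category (here, the ECM mode)."""
        return Counter(series).most_common(1)[0][0]

    # Toy daily series of ECM labels (hypothetical):
    days = ["ECM1", "ECM2", "ECM1", "ECM3", "ECM1", "ECM2"]
    print(marginal_probabilities(days))  # ECM1 has probability 0.5
    print(mode_category(days))           # ECM1
    ```
    
    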

  8. [Comorbidity of different forms of anxiety disorders and depression].

    PubMed

    Małyszczak, Krzysztof; Szechiński, Marcin

    2004-01-01

    Comorbidity of selected anxiety disorders and depression was examined in order to compare their statistical closeness. Patients treated in an out-patient psychiatric care centre and/or family medicine practice were recruited. Persons whose anxiety and depressive symptoms were a consequence of somatic illness or of other psychiatric disorders were excluded. Disorders were diagnosed with a diagnostic questionnaire based on the Schedule for Assessment in Neuropsychiatry (SCAN), version 2.0, according to ICD-10 criteria. The analyses included the following disorders: generalized anxiety disorder, panic disorder, agoraphobia, specific phobias, social phobia and depression. 104 patients were included; 35 of them (33.7%) had anxiety disorders and 13 (12.5%) had depression. The analyses showed that in patients with generalized anxiety disorder, depression occurred at least twice as often as in the remaining patients (odds ratio = 7.1), while in patients with agoraphobia the occurrence of panic disorder was increased at least 2.88-fold (odds ratio = 11.9). For the other disorders the odds ratios were greater than 1, but the differences were not statistically significant. Depression/generalized anxiety disorder and agoraphobia/panic disorder were shown to be statistically closer than the other disorder pairs.
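    Odds ratios like those reported above come from 2x2 cross-tabulations of two diagnoses; a minimal sketch with invented counts (not the study's actual table):

    ```python
    def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
        """Odds ratio from a 2x2 contingency table:
        (a * d) / (b * c) for cells a, b, c, d."""
        return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

    # Hypothetical counts: depression among patients with vs. without
    # generalized anxiety disorder. OR > 1 means depression is more
    # likely when the anxiety disorder is present.
    print(odds_ratio(8, 10, 5, 81))  # 12.96
    ```
    
    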

  9. Statistical Analysis of Categorical Time Series of Atmospheric Elementary Circulation Mechanisms - Dzerdzeevski Classification for the Northern Hemisphere.

    PubMed

    Brenčič, Mihael

    2016-01-01

    Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899-2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECM), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and periodicity effects were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods in which no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECMs has changed significantly. This change results from a change in the frequency of ECM categories; before 1986 the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to these categorical climatic time series opens up potential new insights for future studies of climate variability and change.

  10. Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls

    PubMed Central

    Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.

    2013-01-01

    As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method, Tango's statistic, to genomic sequence data. An advantage of Tango's method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango's statistic, which we call the "Kernel Distance" statistic, took approximately half as long to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff's scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950

  11. Quantifying the impact of between-study heterogeneity in multivariate meta-analyses

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2012-01-01

    Measures that quantify the impact of heterogeneity in univariate meta-analysis, including the very popular I2 statistic, are now well established. Multivariate meta-analysis, where studies provide multiple outcomes that are pooled in a single analysis, is also becoming more commonly used. The question of how to quantify heterogeneity in the multivariate setting is therefore raised. It is the univariate R2 statistic, the ratio of the variance of the estimated treatment effect under the random and fixed effects models, that generalises most naturally, so this statistic provides our basis. This statistic is then used to derive a multivariate analogue of I2. We also provide a multivariate H2 statistic, the ratio of a generalisation of Cochran's heterogeneity statistic and its associated degrees of freedom, with an accompanying generalisation of the usual I2 statistic. Our proposed heterogeneity statistics can be used alongside all the usual estimates and inferential procedures used in multivariate meta-analysis. We apply our methods to some real datasets and show how our statistics are equally appropriate in the context of multivariate meta-regression, where study level covariate effects are included in the model. Our heterogeneity statistics may be used when applying any procedure for fitting the multivariate random effects model. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22763950
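    The univariate building blocks that the paper generalises can be sketched directly: Cochran's Q and the usual I2 = max(0, (Q - df) / Q). The effect estimates and variances below are invented, and this is the univariate statistic only, not the multivariate extension.

    ```python
    def cochran_q(effects, variances):
        """Cochran's heterogeneity statistic Q under a fixed-effect model:
        inverse-variance weighted squared deviations from the pooled effect."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

    def i_squared(effects, variances):
        """Univariate I^2: proportion of total variation attributable to
        heterogeneity, truncated at zero; df = number of studies - 1."""
        q = cochran_q(effects, variances)
        df = len(effects) - 1
        return max(0.0, (q - df) / q) if q > 0 else 0.0

    # Toy example: five study effect estimates with their variances.
    effects = [0.10, 0.30, 0.35, 0.65, 0.45]
    variances = [0.01, 0.02, 0.015, 0.01, 0.025]
    print(round(i_squared(effects, variances), 3))
    ```
    
    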

  12. Psychopathological Symptoms and Psychological Wellbeing in Mexican Undergraduate Students

    PubMed Central

    Contreras, Mariel; de León, Ana Mariela; Martínez, Estela; Peña, Elsa Melissa; Marques, Luana; Gallegos, Julia

    2017-01-01

    College life involves a process of adaptation to changes that have an impact on the psycho-emotional development of students. Successful adaptation to this stage involves a balance between managing personal resources and the potential stressors that generate distress. This descriptive, cross-sectional epidemiological study estimates the prevalence of psychopathological symptoms and psychological well-being among 516 college students, 378 (73.26%) women and 138 (26.74%) men, aged between 17 and 24, from the city of Monterrey in Mexico. It describes the relationship between psychopathological symptoms and psychological well-being, and explores gender differences. For data collection, two measures were used: the Symptom Checklist Revised and the Scale of Psychological Well-being. The statistical analyses used were the t test for independent samples, Pearson's r and regression analysis, with the Statistical Package for the Social Sciences (SPSS v21.0). The analyses showed that the prevalence of psychopathological symptoms was 10–13%, with Aggression the highest. The dimension of psychological well-being with the lowest scores was Environmental Mastery. Participants with a higher level of psychological well-being had a lower level of psychopathological symptoms, which shows the importance of early identification and prevention. Gender differences were found on some subscales of the psychopathological symptom and psychological well-being measures. This study provides a basis for future research and the development of resources to promote the psychological well-being and quality of life of university students. PMID:29104876

  13. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients ), Presentation (Web-Server), Processing (Application-Server), Data (Database...organization in each period. This data will be collected to analyze. i) Analyses and Validation: We will do a statistics test in this data, Pareto ...notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will do a statistics test in this data, Pareto analyses and confirmation

  14. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.

  15. Walking execution is not affected by divided attention in patients with multiple sclerosis with no disability, but there is a motor planning impairment.

    PubMed

    Nogueira, Leandro Alberto Calazans; Santos, Luciano Teixeira Dos; Sabino, Pollyane Galinari; Alvarenga, Regina Maria Papais; Thuler, Luiz Claudio Santos

    2013-08-01

    We analysed the influence of cognitive load on walking in multiple sclerosis (MS) patients in the absence of clinical disability. A case-control study was conducted with 12 MS patients with no disability and 12 matched healthy controls. Subjects completed a timed 10-m walk test and a 3D kinematic analysis. Participants were instructed to walk at a comfortable speed in a dual-task (arithmetic task) condition, and motor planning was measured by mental chronometry. Walking speed and cadence showed no statistically significant differences between the groups in the three conditions. The dual-task condition increased the duration of double support in both groups. Motor imagery analysis showed statistically significant differences between real and imagined walking in patients. MS patients with no disability showed no influence of divided attention on walking execution; however, motor planning was overestimated compared with real walking.

  16. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi- simultaneously applied [1]. The used three...2.2. Vibration Diagnostics (VD)) Parallel to the NDT measurements the Statistical Energy Analysis (SEA) as a vibration diagnostical tool were...noises were analysed with a dual-channel real time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  17. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), all of which purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, each analysis made arbitrary selections or assumptions, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus of this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are unfortunately not noted often enough in review.

  18. Therapeutic whole-body hypothermia reduces mortality in severe traumatic brain injury if the cooling index is sufficiently high: meta-analyses of the effect of single cooling parameters and their integrated measure.

    PubMed

    Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras

    2018-04-21

    Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach to meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated with regard to statistical, clinical, and methodological design to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; we then calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and non-cooled patients, but inter-study heterogeneity was high. In contrast, a meta-analysis of the RCTs that were homogeneous with regard to statistical and clinical design and that precisely reported the cooling protocol showed a decreased odds ratio for mortality with therapeutic hypothermia compared with no cooling. As independent factors, milder and longer cooling and rewarming at < 0.25°C/h were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (a measure of the combination of cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogeneous studies, we show that cooling improves the outcome of severe TBI and that this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.
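    The pooled odds ratio behind a meta-analytic forest plot is commonly an inverse-variance weighted average on the log scale; below is a fixed-effect sketch with invented per-trial values (not the meta-analysis data, and simpler than the cooling-index approach the paper describes):

    ```python
    import math

    def pooled_odds_ratio(odds_ratios, variances):
        """Fixed-effect inverse-variance pooling on the log odds ratio scale:
        weight each log-OR by 1/variance, average, then exponentiate back."""
        logs = [math.log(o) for o in odds_ratios]
        weights = [1.0 / v for v in variances]
        pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
        return math.exp(pooled_log)

    # Hypothetical mortality odds ratios (cooled vs. not cooled) from three
    # trials, with the variances of their log odds ratios:
    ors = [0.6, 0.8, 0.7]
    variances = [0.04, 0.09, 0.05]
    print(round(pooled_odds_ratio(ors, variances), 3))  # 0.671
    ```
    
    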

  19. [Analysis of the technical efficiency of hospitals in the Spanish National Health Service].

    PubMed

    Pérez-Romero, Carmen; Ortega-Díaz, M Isabel; Ocaña-Riola, Ricardo; Martín-Martín, José Jesús

    To analyse the technical efficiency and productivity of general hospitals in the Spanish National Health Service (NHS) (2010-2012) and identify explanatory hospital and regional variables. 230 NHS hospitals were analysed by data envelopment analysis for overall, technical and scale efficiency, and the Malmquist index. The robustness of the analysis was contrasted with alternative input-output models. A fixed effects multilevel cross-sectional linear model was used to analyse the explanatory efficiency variables. The average rate of overall technical efficiency (OTE) was 0.736 in 2012, with considerable variability by region. The Malmquist index (2010-2012) was 1.013. 23% of the variability in OTE is attributable to the region in question. Statistically significant exogenous variables (residents per 100 physicians, aging index, average annual income per household, essential public service expenditure and public health expenditure per capita) explain 42% of the OTE variability between hospitals and 64% between regions. The number of residents showed a statistically significant relationship. As regards regions, there is a statistically significant direct linear association between OTE and annual income per capita and essential public service expenditure, and an indirect association with the aging index and annual public health expenditure per capita. The significant room for improvement in hospital efficiency is conditioned by region-specific characteristics, specifically the aging, wealth and public expenditure policies of each one. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  20. Methodological and Reporting Quality of Systematic Reviews and Meta-analyses in Endodontics.

    PubMed

    Nagendrababu, Venkateshbabu; Pulikkotil, Shaju Jacob; Sultan, Omer Sheriff; Jayaraman, Jayakumar; Peters, Ove A

    2018-06-01

    The aim of this systematic review (SR) was to evaluate the quality of SRs and meta-analyses (MAs) in endodontics. A comprehensive literature search was conducted to identify relevant articles in the electronic databases from January 2000 to June 2017. Two reviewers independently assessed the articles for eligibility and data extraction. SRs and MAs of interventional studies with a minimum of 2 therapeutic strategies in endodontics were included in this SR. Methodologic and reporting quality were assessed using A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA), respectively. Interobserver reliability was calculated using the Cohen kappa statistic. Statistical analysis with the level of significance at P < .05 was performed using Kruskal-Wallis tests and simple linear regression analysis. A total of 30 articles were selected for the current SR. Using AMSTAR, the item related to the scientific quality of the studies used in the conclusions was adhered to by fewer than 40% of studies. Using PRISMA, 3 items were reported by fewer than 40% of studies: objectives, protocol registration, and funding. No association was evident between quality and the number of authors or country. Statistical significance was observed when quality was compared among journals, with studies published as Cochrane reviews superior to those published in other journals. AMSTAR and PRISMA scores were significantly related. SRs in endodontics showed variability in both methodologic and reporting quality. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
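    The interobserver reliability statistic used here, Cohen's kappa, corrects raw agreement for the agreement expected by chance. A minimal sketch with invented yes/no judgements by two reviewers:

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters over the same items:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        categories = set(rater_a) | set(rater_b)
        observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
        expected = sum(
            (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
        )
        return (observed - expected) / (1.0 - expected)

    # Hypothetical AMSTAR-style item judgements by two reviewers:
    a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
    b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
    print(cohens_kappa(a, b))  # 0.5: moderate chance-corrected agreement
    ```
    
    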

  1. A reply to Zigler and Seitz.

    PubMed

    Neman, R

    1975-03-01

    The Zigler and Seitz (1975) critique was carefully examined with respect to the conclusions of the Neman et al. (1975) study. Particular attention was given to the following questions: (a) did experimenter bias or commitment account for the results, (b) were unreliable and invalid psychometric instruments used, (c) were the statistical analyses insufficient or incorrect, (d) did the results reflect no more than the operation of chance, and (e) were the results biased by artifactually inflated profile scores. Experimenter bias and commitment were shown to be insufficient to account for the results; a further review of Buros (1972) showed that there was no need for apprehension about the testing instruments; the statistical analyses were shown to exceed prevailing standards for research reporting; the results were shown to reflect valid findings at the .05 probability level; and the Neman et al. (1975) results for the profile measure were equally significant using either "raw" neurological scores or "scaled" neurological age scores. Zigler, Seitz, and I agreed on the need for (a) using multivariate analyses, where applicable, in studies having more than one dependent variable; (b) defining the population for which sensorimotor training procedures may be appropriately prescribed; and (c) validating the profile measure as a tool to assess neurological disorganization.

  2. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can easily be incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
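    A sequential bias estimator of the general kind described can be sketched with a simple exponential-smoothing update. This is an illustrative sketch only, not the actual GEOS DAS algorithm, and the gain, forecasts and observations are invented:

    ```python
    def update_bias(bias, forecast, observation, gain=0.1):
        """One step of a simple sequential bias estimator: nudge the running
        bias estimate toward the latest forecast-minus-observation innovation."""
        return (1.0 - gain) * bias + gain * (forecast - observation)

    def debias(forecast, bias):
        """Remove the estimated systematic error before the analysis step."""
        return forecast - bias

    # Forecasts persistently 0.5 units too moist relative to rawinsonde obs:
    bias = 0.0
    for obs, fc in [(3.0, 3.5), (2.8, 3.3), (3.1, 3.6), (2.9, 3.4)]:
        bias = update_bias(bias, fc, obs)
    print(bias)  # running estimate, moving toward the true +0.5 bias
    ```
    
    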

  3. Advancements in RNASeqGUI towards a Reproducible Analysis of RNA-Seq Experiments

    PubMed Central

    Russo, Francesco; Righelli, Dario

    2016-01-01

    We present the advancements and novelties recently introduced in RNASeqGUI, a graphical user interface that helps biologists to handle and analyse large data collected in RNA-Seq experiments. This work focuses on the concept of reproducible research and shows how it has been incorporated in RNASeqGUI to provide reproducible (computational) results. The novel version of RNASeqGUI combines graphical interfaces with tools for reproducible research, such as literate statistical programming, human readable report, parallel executions, caching, and interactive and web-explorable tables of results. These features allow the user to analyse big datasets in a fast, efficient, and reproducible way. Moreover, this paper represents a proof of concept, showing a simple way to develop computational tools for Life Science in the spirit of reproducible research. PMID:26977414

  4. [Cluster analysis applicability to fitness evaluation of cosmonauts on long-term missions of the International space station].

    PubMed

    Egorov, A D; Stepantsov, V I; Nosovskiĭ, A M; Shipov, A A

    2009-01-01

    Cluster analysis was applied to evaluate the locomotion training (running, and running intermingled with walking) of 13 cosmonauts on long-term ISS missions using the parameters of duration (min), distance (m) and intensity (km/h). Based on the results of the analyses, the cosmonauts fell into three stable groups of 2, 5 and 6 persons. Distance and speed showed a statistically significant rise (p < 0.03) from group 1 to group 3. Duration of physical locomotion training was not statistically different between the groups (p = 0.125). Therefore, cluster analysis is an adequate method of evaluating the fitness of cosmonauts on long-term missions.

  5. Predictors of workplace violence among female sex workers in Tijuana, Mexico.

    PubMed

    Katsulis, Yasmina; Durfee, Alesha; Lopez, Vera; Robillard, Alyssa

    2015-05-01

    For sex workers, differences in rates of exposure to workplace violence are likely influenced by a variety of risk factors, including where one works and under what circumstances. Economic stressors, such as housing insecurity, may also increase the likelihood of exposure. Bivariate analyses demonstrate statistically significant associations between workplace violence and selected predictor variables, including age, drug use, exchanging sex for goods, soliciting clients outdoors, and experiencing housing insecurity. Multivariate regression analysis shows that after controlling for each of these variables in one model, only soliciting clients outdoors and housing insecurity emerge as statistically significant predictors for workplace violence. © The Author(s) 2014.

  6. Systematic review of wireless phone use and brain cancer and other head tumors.

    PubMed

    Repacholi, Michael H; Lerchl, Alexander; Röösli, Martin; Sienkiewicz, Zenon; Auvinen, Anssi; Breckenkamp, Jürgen; d'Inzeo, Guglielmo; Elliott, Paul; Frei, Patrizia; Heinrich, Sabine; Lagroye, Isabelle; Lahkola, Anna; McCormick, David L; Thomas, Silke; Vecchia, Paolo

    2012-04-01

    We conducted a systematic review of scientific studies to evaluate whether the use of wireless phones is linked to an increased incidence of the brain cancer glioma or other tumors of the head (meningioma, acoustic neuroma, and parotid gland), originating in the areas of the head that most absorb radiofrequency (RF) energy from wireless phones. Epidemiology and in vivo studies were evaluated according to an agreed protocol; quality criteria were used to evaluate the studies for narrative synthesis but not for meta-analyses or pooling of results. The epidemiology study results were heterogeneous, with sparse data on long-term use (≥ 10 years). Meta-analyses of the epidemiology studies showed no statistically significant increase in risk (defined as P < 0.05) for adult brain cancer or other head tumors from wireless phone use. Analyses of the in vivo oncogenicity, tumor promotion, and genotoxicity studies also showed no statistically significant relationship between exposure to RF fields and genotoxic damage to brain cells, or the incidence of brain cancers or other tumors of the head. Assessment of the review results using the Hill criteria did not support a causal relationship between wireless phone use and the incidence of adult cancers in the areas of the head that most absorb RF energy from the use of wireless phones. There are insufficient data to make any determinations about longer-term use (≥ 10 years). © 2011 Wiley Periodicals, Inc.

  7. A statistical study of magnetopause structures: Tangential versus rotational discontinuities

    NASA Astrophysics Data System (ADS)

    Chou, Y.-C.; Hau, L.-N.

    2012-08-01

    A statistical study of the structure of Earth's magnetopause is carried out by analyzing two years of AMPTE/IRM plasma and magnetic field data. The analyses are based on minimum variance analysis (MVA), deHoffmann-Teller (HT) frame analysis and the Walén relation. A total of 328 magnetopause crossings are identified, and error estimates associated with the MVA and HT frame analyses are performed for each case. In 142 of the 328 events, both MVA and HT frame analyses yield high-quality results, which are classified as either tangential-discontinuity (TD) or rotational-discontinuity (RD) structures based only on the Walén relation: events with SWA ≤ 0.4 (SWA ≥ 0.5) are classified as TD (RD), and the rest (with 0.4 < SWA < 0.5) are classified as "uncertain," where SWA refers to the Walén slope. With this criterion, 84% of the 142 events are TDs, 12% are RDs, and 4% are uncertain. A large portion of the TD events exhibit a finite normal magnetic field component Bn but have insignificant flow compared to the Alfvén velocity in the HT frame. Two-dimensional Grad-Shafranov reconstructions of forty selected TD and RD events show that single or multiple X-lines accompanied by magnetic islands are a common feature of the magnetopause current layer. A survey plot of the HT velocity associated with TD structures projected onto the magnetopause shows that the flow is diverted at the subsolar point and accelerated toward the dawn and dusk flanks.
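    The Walén-slope classification rule stated in the abstract maps directly to code; a trivial sketch (the slope values below are invented):

    ```python
    def classify_discontinuity(walen_slope):
        """Classify a magnetopause crossing by its Walen slope (SWA):
        TD if SWA <= 0.4, RD if SWA >= 0.5, otherwise uncertain."""
        if walen_slope <= 0.4:
            return "TD"
        if walen_slope >= 0.5:
            return "RD"
        return "uncertain"

    slopes = [0.1, 0.95, 0.45, 0.3, 0.6]  # toy Walen slopes, one per crossing
    print([classify_discontinuity(s) for s in slopes])
    # ['TD', 'RD', 'uncertain', 'TD', 'RD']
    ```
    
    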

  8. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, covering different landscapes, and shown to be effective. The resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform only if required. Sensitivity of the arctangent transformation is tested, showing that transformations targeting Gaussian kurtosis are also acceptable in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and to many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied with greater confidence that the requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our automated Box-Cox and curvature transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
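As an illustration of the slope-gradient normalization step (not the authors' ArcGIS tool), SciPy's `boxcox` estimates the power-transform parameter by maximum likelihood, which drives the skewness of a long-tailed sample toward zero. The data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic long-tailed "slope gradient" sample (strictly positive),
# mimicking the skewed distributions typical of DEM-derived slopes.
slopes = rng.lognormal(mean=1.0, sigma=0.8, size=5000)

# Box-Cox fits the lambda that best normalises the data; the skewness
# of the transformed values should be far closer to zero.
transformed, lam = stats.boxcox(slopes)
print(stats.skew(slopes), stats.skew(transformed), lam)
```

For a lognormal sample the fitted lambda lands near zero (a log transform), consistent with the logarithm of tangent being among the previously recommended slope transformations.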

  9. Exploring longitudinal course and treatment-baseline severity interactions in secondary outcomes of smoking cessation treatment in individuals with attention-deficit hyperactivity disorder.

    PubMed

    Luo, Sean X; Wall, Melanie; Covey, Lirio; Hu, Mei-Chen; Scodes, Jennifer M; Levin, Frances R; Nunes, Edward V; Winhusen, Theresa

    2018-01-25

    A double-blind, placebo-controlled randomized trial (NCT00253747) evaluating osmotic-release oral system methylphenidate (OROS-MPH) for smoking cessation revealed a significant interaction effect in which participants with higher baseline ADHD severity had better abstinence outcomes with OROS-MPH, while participants with lower baseline ADHD severity had worse outcomes. The current report examines secondary outcomes that might bear on the mechanism for this differential treatment effect. Longitudinal analyses were conducted to evaluate the effect of OROS-MPH on three secondary outcomes (ADHD symptom severity, nicotine craving, and withdrawal) in the total sample (N = 255, 56% male), and in the high (N = 134) and low (N = 121) baseline ADHD severity groups. OROS-MPH significantly improved ADHD symptoms and nicotine withdrawal symptoms in the total sample, and exploratory analyses showed that OROS-MPH significantly improved these two outcomes in both the higher and lower baseline severity groups. No overall effect on craving was detected, though exploratory analyses showed significantly decreased craving among the high ADHD severity participants on OROS-MPH. No treatment-by-baseline-severity interaction was detected for these outcomes. Methylphenidate thus improved secondary outcomes during smoking cessation independent of baseline ADHD severity. Our results suggest that the divergent responses to smoking cessation treatment in the higher and lower severity groups cannot be explained by a concordant divergence in craving, withdrawal or ADHD symptom severity, and alternative hypotheses may need to be identified.

  10. A meta-analysis of neuropsychological outcome after mild traumatic brain injury: re-analyses and reconsiderations of Binder et al. (1997), Frencham et al. (2005), and Pertab et al. (2009).

    PubMed

    Rohling, Martin L; Binder, Laurence M; Demakis, George J; Larrabee, Glenn J; Ploetz, Danielle M; Langhinrichsen-Rohling, Jennifer

    2011-05-01

    The meta-analytic findings of Binder et al. (1997) and Frencham et al. (2005) showed that the neuropsychological effect of mild traumatic brain injury (mTBI) was negligible in adults by 3 months post injury. Pertab et al. (2009) reported that verbal paired associates, coding tasks, and digit span yielded significant differences between mTBI and control groups. We re-analyzed data from the 25 studies used in the prior meta-analyses, correcting statistical and methodological limitations of previous efforts, and analyzed the chronicity data by discrete epochs. Three months post injury the effect size of -0.07 was not statistically different from zero and similar to that which has been found in several other meta-analyses (Belanger et al., 2005; Schretlen & Shapiro, 2003). The effect size 7 days post injury was -0.39. The effect of mTBI immediately post injury was largest on Verbal and Visual Memory domains. However, 3 months post injury all domains improved to show non-significant effect sizes. These findings indicate that mTBI has an initial small effect on neuropsychological functioning that dissipates quickly. The evidence of recovery in the present meta-analysis is consistent with previous conclusions of both Binder et al. and Frencham et al. Our findings may not apply to people with a history of multiple concussions or complicated mTBIs.

  11. The GnRH analogue triptorelin confers ovarian radio-protection to adult female rats.

    PubMed

    Camats, N; García, F; Parrilla, J J; Calaf, J; Martín-Mateo, M; Caldés, M Garcia

    2009-10-02

    There is controversy regarding the effects of analogues of the gonadotrophin-releasing hormone (GnRH) in radiotherapy. This led us to study the possible radio-protection of ovarian function by a GnRH agonist analogue (GnRHa), triptorelin, in adult female rats (Rattus norvegicus sp.). The effects of X-irradiation on the oocytes of ovarian primordial follicles, with and without GnRHa treatment, were compared directly in the female rats (F(0)) using reproductive parameters, and in the somatic cells of the resulting foetuses (F(1)) using cytogenetical parameters. To do this, the ovaries and uteri from 82 females were extracted for the reproductive analysis, and 236 foetuses were obtained for cytogenetical analysis. The cytogenetical study was based on data from 22,151 analysed metaphases. The cytogenetical parameters analysed to assess the existence of chromosomal instability were the number of aberrant metaphases (2234) and the number (2854) and type of structural chromosomal aberrations, including gaps and breaks. For the reproductive analysis of the ovaries and uteri, the parameters analysed were the number of corpora lutea, implantations, implantation losses and foetuses. Triptorelin confers radio-protection on the ovaries against chromosomal instability, and this protection differs between single and fractionated doses. The cytogenetical analysis shows a general decrease in most parameters in the triptorelin-treated groups with respect to their controls, and some of these differences were statistically significant. The reproductive analysis indicates that the agonist also confers radio-protection, although less pronounced than the cytogenetical effect: only some of the analysed parameters show a statistically significant decrease in the triptorelin-treated groups.

  12. Plant selection for ethnobotanical uses on the Amalfi Coast (Southern Italy).

    PubMed

    Savo, V; Joy, R; Caneva, G; McClatchey, W C

    2015-07-15

    Many ethnobotanical studies have investigated selection criteria for medicinal and non-medicinal plants. In this paper we test several statistical methods on different ethnobotanical datasets in order to 1) define to what extent the nature of the datasets can affect the interpretation of results, and 2) determine whether the selection of plants for different uses is based on phylogeny or on other selection criteria. We considered three ethnobotanical datasets (two datasets of medicinal plants and one of non-medicinal plants: handicraft production, domestic and agro-pastoral practices) together with two floras of the Amalfi Coast. We performed residual analysis from linear regression, the binomial test and a Bayesian approach for identifying under-used and over-used plant families within the ethnobotanical datasets. Percentages of agreement were calculated to compare the results of the analyses. We also analyzed the relationship between plant selection and phylogeny, chorology, life form and habitat using the chi-square test. Pearson's residuals for each of the significant chi-square analyses were examined to investigate alternative hypotheses about plant selection criteria. The results of the three statistical methods differed within the same dataset, and between datasets and floras, although with some similarities. In the two medicinal datasets, only Lamiaceae was identified in both floras as an over-used family by all three statistical methods. For one flora, all statistical methods agreed that Malvaceae was over-used and Poaceae under-used, but this was not consistent with the results for the second flora, in which one statistical result was non-significant. All other families showed some discrepancy in significance across methods or floras, and significant over- or under-use was observed in only a minority of cases. The chi-square analyses were significant for phylogeny, life form and habitat. 
Pearson's residuals indicated a non-random selection of woody species for non-medicinal uses and an under-use of plants of temperate forests for medicinal uses. Our study showed that selection criteria for plant uses (including medicinal) are not always based on phylogeny. The comparison of different statistical methods (regression, binomial and Bayesian) under different conditions led to the conclusion that the most conservative results are obtained using regression analysis.
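As a sketch of the kind of over-/under-use test the abstract describes (all counts below are hypothetical, not taken from the paper), a two-sided binomial test compares the number of used species in a family against the proportion expected if species were selected at random from the flora:

```python
from scipy import stats

# Hypothetical counts: a flora of 1200 species of which 250 are used
# ethnobotanically; the family of interest has 60 species in the flora,
# 25 of them used.
flora_total, used_total = 1200, 250
family_in_flora, family_used = 60, 25

# Under random selection, each of the family's species would be "used"
# with probability used_total/flora_total; the two-sided binomial test
# asks whether 25/60 departs from that expectation (over-use here).
p_expected = used_total / flora_total
result = stats.binomtest(family_used, family_in_flora, p_expected)
print(result.pvalue)
```

A small p-value with an observed proportion above `p_expected` marks the family as over-used; below it, as under-used.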

  13. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

    PubMed Central

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect residual serial correlation in least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series: if the variables comprise cross-sectional data from spatial random sampling, the test is ineffectual because the value of the Durbin-Watson statistic depends on the ordering of the data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two new statistics are applied to a spatial sample of 29 Chinese regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
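A minimal sketch of the underlying idea (ours, not the paper's exact statistics): a Moran-type autocorrelation coefficient built from a standardized residual vector and a normalized spatial weight matrix, which is invariant to the ordering of the data points:

```python
import numpy as np

def residual_autocorrelation(e, W):
    """Moran-style index for regression residuals e given a symmetric
    spatial weight matrix W: standardize the residuals, normalize the
    weights to sum to one, and form n * z'Wz / z'z."""
    z = (e - e.mean()) / e.std()         # standardized residual vector
    Wn = W / W.sum()                     # normalized spatial weights
    return len(z) * (z @ Wn @ z) / (z @ z)

# Toy example: residuals on a chain of 5 regions, adjacent neighbours.
# The residuals vary smoothly in space, so positive autocorrelation
# is expected.
e = np.array([1.0, 0.8, 0.1, -0.7, -1.2])
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(residual_autocorrelation(e, W))
```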

  14. Comment on ‘Are physicists afraid of mathematics?’

    NASA Astrophysics Data System (ADS)

    Higginson, Andrew D.; Fawcett, Tim W.

    2016-11-01

    In 2012, we showed that the citation count for articles in ecology and evolutionary biology declines with increasing density of equations. Kollmer et al (2015 New J. Phys. 17 013036) claim this effect is an artefact of the manner in which we plotted the data. They also present citation data from Physical Review Letters and argue, based on graphs, that citation counts are unrelated to equation density. Here we show that both claims are misguided. We identified the effects in biology not by visual means, but using the most appropriate statistical analysis. Since Kollmer et al did not carry out any statistical analysis, they cannot draw reliable inferences about the citation patterns in physics. We show that when statistically analysed their data actually do provide evidence that in physics, as in biology, citation counts are lower for articles with a high density of equations. This indicates that a negative relationship between equation density and citations may extend across the breadth of the sciences, even those in which researchers are well accustomed to mathematical descriptions of natural phenomena. We restate our assessment that this is a genuine problem and discuss what we think should be done about it.

  15. Neutralising antibody titration in 25,000 sera of dogs and cats vaccinated against rabies in France, in the framework of the new regulations that offer an alternative to quarantine.

    PubMed

    Cliquet, F; Verdier, Y; Sagné, L; Aubert, M; Schereffer, J L; Selve, M; Wasniewski, M; Servat, A

    2003-12-01

    Regulations governing international movements of domestic carnivores from rabies-infected to rabies-free countries have recently been loosened, with the adoption of a system that combines vaccination against rabies and serological surveillance (a neutralising antibody titration test with a threshold of 0.5 IU/ml). Since 1993, the Research Laboratory for Rabies and Wild Animal Pathology in Nancy, France, has analysed over 25,000 sera from dogs and cats using a viral seroneutralisation technique. The statistical analyses performed during this time show that cats respond better than dogs. Although no significant difference in titres was observed between primovaccinated and repeat-vaccinated cats, repeat-vaccinated dogs had titres above 0.5 IU/ml more frequently. In primovaccinated dogs, monovalent vaccines offered a better serological conversion rate than multivalent ones. Finally, the results of these analyses showed a strong correlation between antibody titres and the time elapsed between the last vaccination and the blood sampling.

  16. Introspective Minds: Using ALE Meta-Analyses to Study Commonalities in the Neural Correlates of Emotional Processing, Social & Unconstrained Cognition

    PubMed Central

    Schilbach, Leonhard; Bzdok, Danilo; Timmermans, Bert; Fox, Peter T.; Laird, Angela R.; Vogeley, Kai; Eickhoff, Simon B.

    2012-01-01

    Previous research suggests overlap between brain regions that show task-induced deactivations and those activated during the performance of social-cognitive tasks. Here, we present results of quantitative meta-analyses of neuroimaging studies, which confirm a statistical convergence in the neural correlates of social and resting state cognition. Based on the idea that both social and unconstrained cognition might be characterized by introspective processes, which are also thought to be highly relevant for emotional experiences, a third meta-analysis was performed investigating studies on emotional processing. By using conjunction analyses across all three sets of studies, we can demonstrate significant overlap of task-related signal change in dorso-medial prefrontal and medial parietal cortex, brain regions that have, indeed, recently been linked to introspective abilities. Our findings, therefore, provide evidence for the existence of a core neural network, which shows task-related signal change during socio-emotional tasks and during resting states. PMID:22319593

  17. Epidemiology of Skin Cancer in the German Population: Impact of Socioeconomic and Geographic Factors.

    PubMed

    Augustin, J; Kis, A; Sorbe, C; Schäfer, I; Augustin, M

    2018-04-06

    Skin cancer, the most common cancer in Germany, has shown increasing incidence in the past decade. Since it is mostly caused by excessive UV exposure, skin cancer is largely related to behaviour. So far, the impact of regional and sociodemographic factors on the development of skin cancer in Germany is unclear. The current study aimed to investigate the association of potential predictive factors with the prevalence of skin cancers in Germany. Nationwide ambulatory care claims data from persons insured in statutory health insurances (SHI) with malignant melanoma (MM, ICD-10 C43) and non-melanoma skin cancer (NMSC, ICD-10 C44) in the years 2009-2015 were analysed. In addition, sociodemographic population data and satellite-based UV and solar radiation data were linked with these data. Descriptive as well as multivariate (spatial) statistical analyses (for example, Bayes smoothing) were conducted at county level. Data from 70.1 million insured persons were analysed. Age-standardized prevalences per 100,000 SHI-insured persons for MM and NMSC were 284.7 and 1126.9 in 2009 and 378.5 and 1708.2 in 2015. Marked regional variations were observed, with prevalences between 32.9% and 51.6%. Multivariate analyses show statistically significant positive correlations between higher income and education and MM/NMSC prevalence. Prevalence of MM and NMSC in Germany shows spatio-temporal dynamics. Our results show that regional UV radiation, sunshine hours and sociodemographic factors have a significant impact on skin cancer prevalence in Germany. Individual behaviour is evidently a major determinant, which should be addressed by preventive interventions. This article is protected by copyright. All rights reserved.

  18. Ecological adaptations in Douglas-fir (Pseudotsuga menziesii var. glauca) populations: I. North Idaho and North-East Washington

    Treesearch

    Gerald E. Rehfeldt

    1979-01-01

    Growth, phenology and frost tolerance of seedlings from 50 populations of Douglas-fir (Pseudotsuga menziesii var. glauca) were compared in 12 environments. Statistical analyses of six variables (bud burst, bud set, 3-year height, spring and fall frost injuries, and deviation from regression of 3-year height on 2-year height) showed that populations not only differed in...

  19. The Relationship between SAT Scores and Retention to the Second Year: 2007 SAT Validity Sample. Statistical Report No. 2011-4

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Patterson, Brian F.

    2011-01-01

    This report presents the findings from a replication of the analyses from the report, "Is Performance on the SAT Related to College Retention?" (Mattern & Patterson, 2009). The tables presented herein are based on the 2007 sample and the findings are largely the same as those presented in the original report, and show SAT scores are…

  20. GSimp: A Gibbs sampler based left-censored missing value imputation approach for metabolomics studies

    PubMed Central

    Jia, Erik; Chen, Tianlu

    2018-01-01

    Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered missing not at random (MNAR). Improper processing of missing values can adversely affect subsequent statistical analyses, yet few imputation methods have been developed for the MNAR situation in metabolomics. A practical left-censored missing value imputation method is therefore needed. We developed an iterative Gibbs-sampler-based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulated dataset using our imputation evaluation pipeline. The results show that GSimp outperforms the other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large-scale metabolomics datasets. The R code for GSimp, the evaluation pipeline, a tutorial, and the real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
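GSimp itself is an iterative Gibbs sampler; as a much simpler illustration of the left-censored (MNAR) setting it addresses, the sketch below imputes values below a detection limit by drawing from a normal distribution truncated above at that limit. The distribution parameters are assumed known here, which GSimp would instead estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated metabolite abundances; values below the limit of detection
# (LOD) are censored to NaN, i.e. missing not at random.
true = rng.normal(loc=10.0, scale=2.0, size=2000)
lod = 8.0
observed = np.where(true >= lod, true, np.nan)

# Naive imputation step: draw each missing entry from the fitted normal
# truncated to lie below the LOD (bounds in standard units for truncnorm).
mu, sigma = 10.0, 2.0                  # assumed known for this sketch
a, b = -np.inf, (lod - mu) / sigma     # no lower bound; upper bound at LOD
n_missing = int(np.isnan(observed).sum())
draws = stats.truncnorm.rvs(a, b, loc=mu, scale=sigma,
                            size=n_missing, random_state=rng)
imputed = observed.copy()
imputed[np.isnan(imputed)] = draws
print(np.nanmean(observed), imputed.mean())
```

Ignoring the censoring (e.g. mean imputation from the observed values) would bias the mean upward; drawing from the truncated tail restores it.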

  1. Climate sensitivity to the lower stratospheric ozone variations

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    2012-12-01

    The strong sensitivity of the Earth's radiation balance to variations in the lower stratospheric ozone—reported previously—is analysed here by the use of non-linear statistical methods. Our non-linear model of the land air temperature (T)—driven by the measured Arosa total ozone (TOZ)—explains 75% of total variability of Earth's T variations during the period 1926-2011. We have analysed also the factors which could influence the TOZ variability and found that the strongest impact belongs to the multi-decadal variations of galactic cosmic rays. Constructing a statistical model of the ozone variability, we have been able to predict the tendency in the land air T evolution till the end of the current decade. Results show that Earth is facing a weak cooling of the surface T by 0.05-0.25 K (depending on the ozone model) until the end of the current solar cycle. A new mechanism for O3 influence on climate is proposed.

  2. Geographically Sourcing Cocaine’s Origin – Delineation of the Nineteen Major Coca Growing Regions in South America

    PubMed Central

    Mallette, Jennifer R.; Casale, John F.; Jordan, James; Morello, David R.; Beyer, Paul M.

    2016-01-01

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (2H and 18O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions. PMID:27006288

  3. Automodification of PARP and fatty acid-based membrane lipidome as a promising integrated biomarker panel in molecular medicine.

    PubMed

    Bianchi, Anna Rita; Ferreri, Carla; Ruggiero, Simona; Deplano, Simone; Sunda, Valentina; Galloro, Giuseppe; Formisano, Cesare; Mennella, Maria Rosaria Faraone

    2016-01-01

    To establish by statistical analyses whether assays of auto-modified poly(ADP-ribose)polymerase and of the erythrocyte membrane fatty acid composition (Fat Profile(®)), separately or in tandem, help monitor the physio-pathology of the cell and correlate with disease, if present. Ninety-five subjects were interviewed and analyzed blindly. Blood lymphocytes and erythrocytes were prepared to assay poly(ADP-ribose)polymerase automodification and the fatty acid-based membrane lipidome, respectively. Poly(ADP-ribose)polymerase automodification levels confirmed their correlation with the extent of DNA damage, and allowed disease activity to be monitored after surgical/therapeutic treatment. Membrane lipidome profiles showed lipid imbalances mainly linked to inflammatory states. Statistically, both tests were separately significant, and they correlated with each other within some pathologies. In laboratory routine, the two tests, separately or in tandem, might be a preliminary and helpful step in investigating the occurrence of a given disease. Their combination represents a promising integrated panel for sensitive, noninvasive and routine health monitoring.

  4. The effect of the involvement of the dominant or non-dominant hand on grip/pinch strengths and the Levine score in patients with carpal tunnel syndrome.

    PubMed

    Zyluk, A; Walaszek, I

    2012-06-01

    The Levine questionnaire is a disease-oriented instrument developed for outcome measurement of carpal tunnel syndrome (CTS) management. The objective of this study was to compare Levine scores in patients with unilateral CTS, involving the dominant or non-dominant hand, before and after carpal tunnel release. Records of 144 patients, 126 women (87%) and 18 men (13%) aged a mean of 58 years with unilateral CTS, treated operatively, were analysed. The dominant hand was involved in 100 patients (69%), the non-dominant in 44 (31%). The parameters were analysed pre-operatively, and at 1 and 6 months post-operatively. A comparison of Levine scores in patients with the involvement of the dominant or non-dominant hand showed no statistically significant differences at baseline and any of the follow-up measurements. Statistically significant differences were noted in total grip strength at baseline and at 6 month assessments and in key-pinch strength at 1 and 6 months.

  5. Mindful attention and awareness: relationships with psychopathology and emotion regulation.

    PubMed

    Gregório, Sónia; Pinto-Gouveia, José

    2013-01-01

    The growing interest in mindfulness from the scientific community has given rise to several self-report measures of this psychological construct. The Mindful Attention and Awareness Scale (MAAS) is a self-report measure of mindfulness at the trait level. This paper aims to explore the psychometric characteristics of the MAAS and to validate it for the Portuguese population. The first two studies replicate some of the original authors' statistical procedures, in particular confirmatory factor analyses, in two different samples from the Portuguese general community population. Results from both analyses confirmed the scale's single-factor structure and indicated very good reliability. Moreover, cross-validation statistics showed that this single-factor structure is valid for different respondents from the general community population. In the third study, the Portuguese version of the MAAS was found to have good convergent and discriminant validity. Overall, the findings support the psychometric validity of the Portuguese version of the MAAS and suggest it is a reliable self-report measure of trait mindfulness, a central construct in clinical psychology research and intervention.

  6. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  7. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula.

    PubMed

    Ince, Robin A A; Giordano, Bruno L; Kayser, Christoph; Rousselet, Guillaume A; Gross, Joachim; Schyns, Philippe G

    2017-03-01

    We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc. 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
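A minimal sketch of the bivariate case (ours, not the authors' full open-source toolbox): each margin is rank-transformed to a standard normal (the Gaussian copula step), and mutual information is then computed from the closed-form Gaussian expression, here in bits.

```python
import numpy as np
from scipy import stats

def gcmi_bivariate(x, y):
    """Gaussian-copula mutual information estimate for two continuous
    variables: rank-normalise each margin to a standard normal, then
    apply the closed-form Gaussian MI, -0.5 * log2(1 - r^2)."""
    def copnorm(v):
        ranks = stats.rankdata(v)
        return stats.norm.ppf(ranks / (len(v) + 1))
    gx, gy = copnorm(x), copnorm(y)
    r = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

rng = np.random.default_rng(2)
x = rng.normal(size=3000)
y = 0.8 * x + 0.6 * rng.normal(size=3000)   # correlated response
print(gcmi_bivariate(x, y))                  # clearly positive MI
print(gcmi_bivariate(x, rng.normal(size=3000)))  # near zero
```

Because only the ranks of each margin enter the estimate, it is robust to monotonic transformations of the data, one of the properties that makes the copula approach attractive for heterogeneous neuroimaging responses.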

  8. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  9. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...

  10. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  11. Volumetric MRI study of brain in children with intrauterine exposure to cocaine, alcohol, tobacco, and marijuana.

    PubMed

    Rivkin, Michael J; Davis, Peter E; Lemaster, Jennifer L; Cabral, Howard J; Warfield, Simon K; Mulkern, Robert V; Robson, Caroline D; Rose-Jacobs, Ruth; Frank, Deborah A

    2008-04-01

    The objective of this study was to use volumetric MRI to study brain volumes in 10- to 14-year-old children with and without intrauterine exposure to cocaine, alcohol, cigarettes, or marijuana. Volumetric MRI was performed on 35 children (mean age: 12.3 years; 14 with intrauterine exposure to cocaine, 21 with no intrauterine exposure to cocaine) to determine the effect of prenatal drug exposure on volumes of cortical gray matter; white matter; subcortical gray matter; cerebrospinal fluid; and total parenchymal volume. Head circumference was also obtained. Analyses of each individual substance were adjusted for demographic characteristics and the remaining 3 prenatal substance exposures. Regression analyses adjusted for demographic characteristics showed that children with intrauterine exposure to cocaine had lower mean cortical gray matter and total parenchymal volumes and smaller mean head circumference than comparison children. After adjustment for other prenatal exposures, these volumes remained smaller but lost statistical significance. Similar analyses conducted for prenatal ethanol exposure adjusted for demographics showed significant reduction in mean cortical gray matter; total parenchymal volumes; and head circumference, which remained smaller but lost statistical significance after adjustment for the remaining 3 exposures. Notably, prenatal cigarette exposure was associated with significant reductions in cortical gray matter and total parenchymal volumes and head circumference after adjustment for demographics that retained marginal significance after adjustment for the other 3 exposures. Finally, as the number of exposures to prenatal substances grew, cortical gray matter and total parenchymal volumes and head circumference declined significantly with smallest measures found among children exposed to all 4. 
CONCLUSIONS: These data suggest that intrauterine exposures to cocaine, alcohol, and cigarettes are individually related to reduced head circumference, cortical gray matter, and total parenchymal volumes as measured by MRI at school age. Adjustment for other substance exposures precludes determination of a statistically significant individual substance effect on brain volume in this small sample; however, these substances may act cumulatively during gestation to exert lasting effects on brain size and volume.

  12. Corrosion Analysis of an Experimental Noble Alloy on Commercially Pure Titanium Dental Implants

    PubMed Central

    Bortagaray, Manuel Alberto; Ibañez, Claudio Arturo Antonio; Ibañez, Maria Constanza; Ibañez, Juan Carlos

    2016-01-01

Objective: To determine whether the Noble Bond® Argen® alloy is electrochemically suitable for the manufacture of prosthetic superstructures over commercially pure titanium (c.p. Ti) implants. The electrolytic corrosion effects on three types of materials used in prosthetic superstructures coupled with titanium implants were also analysed: Noble Bond® (Argen®), Argelite 76sf+® (Argen®), and commercially pure titanium. Materials and Methods: 15 samples were studied, each consisting of one abutment and one c.p. titanium implant. They were divided into three groups: Control group: five c.p. titanium abutments (B&W®); Test group 1: five Noble Bond® (Argen®) cast abutments; and Test group 2: five Argelite 76sf+® (Argen®) abutments. In order to observe the corrosion effects, the surface topography was imaged using a confocal microscope. Three metric parameters (Sa: arithmetical mean height of the surface; Sp: maximum height of peaks; Sv: maximum height of valleys) were measured at three different areas: abutment neck, implant neck and implant body. The samples were immersed in artificial saliva for 3 months, after which the procedure was repeated. The metric parameters were compared by statistical analysis. Results: The analysis of Sa at the level of the implant neck, abutment neck and implant body showed no statistically significant differences when combining c.p. Ti implants with the three studied alloys. Sp showed no statistically significant differences between the three alloys, and neither did Sv. Conclusion: The effects of electrogalvanic corrosion on each of the materials in contact with c.p. Ti showed no statistically significant differences. PMID:27733875

  13. Statistical inference for classification of RRIM clone series using near IR reflectance properties

    NASA Astrophysics Data System (ADS)

    Ismail, Faridatul Aima; Madzhi, Nina Korlina; Hashim, Hadzli; Abdullah, Noor Ezan; Khairuzzaman, Noor Aishah; Azmi, Azrie Faris Mohd; Sampian, Ahmad Faiz Mohd; Harun, Muhammad Hafiz

    2015-08-01

RRIM clone is a rubber breeding series produced by RRIM (Rubber Research Institute of Malaysia) through a "rubber breeding program" to improve latex yield and produce clones attractive to farmers. The objective of this work is to analyse measurements from an optical sensing device on latex of selected clone series. The device transmits NIR light and converts the reflected signal into a voltage. The obtained reflectance index values were analysed using statistical techniques in order to discriminate among the clones. The statistical results, using error plots and a one-way ANOVA test, provide overwhelming evidence of discrimination among the RRIM 2002, RRIM 2007 and RRIM 3001 clone series (p value = 0.000). RRIM 2008 cannot be discriminated from RRIM 2014; however, both of these groups are distinct from the other clones.

  14. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

subtitled "Simulation Data," consists of detailed information on the design parameter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or maximum pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  15. Impact of an integrated science and reading intervention (INSCIREAD) on bilingual students' misconceptions, reading comprehension, and transferability of strategies

    NASA Astrophysics Data System (ADS)

    Martinez, Patricia

This thesis describes a research study that resulted in an instructional model directed at helping diverse fourth-grade students improve their science knowledge, their reading comprehension, their awareness of the relationship between science and reading, and their ability to transfer strategies. The focus of the instructional model emerged from the intersection of constructs in science and reading literacy; the model identifies cognitive strategies that can be used in science and reading, and inquiry-based instruction related to the science content read by participants. The intervention is termed INSCIREAD (Instruction in Science and Reading). The GoInquire web-based system (2006) was used to develop students' content knowledge of slow landform change. Seventy-eight students participated in the study. The treatment group comprised 49 students without disabilities and 8 students with disabilities. The control group comprised 21 students without disabilities. The design of the study combines a mixed-methods quasi-experimental design (Study 1) with a single-subject design using groups as the unit of analysis (Study 2). The quantitative results showed that the text recall analysis in Study 1 approached statistical significance when comparing the performance of students without disabilities in the treatment group to that of the control group. Visual analyses of the text recall data from Study 2 showed at least minimal change in all groups. The analysis of the level of the generated questions showed a statistically significant increase from pretest to posttest in the scores students without disabilities obtained on the questions they generated.
The analyses conducted to detect incongruities, to summarize and rate importance, and to determine the number of propositions on a science and reading concept map data showed a statistically significant difference between students without disabilities in the treatment and the control groups on post-intervention scores. The analysis of the data from the number of misconceptions of students without disabilities showed that the frequency of 4 of the 11 misconceptions changed significantly from pre to post elicitation stages. The analyses of the qualitative measures of the think alouds and interviews generally supported the above findings.

  16. An order statistics approach to the halo model for galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-04-01

We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.

  17. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

The study evaluated whether the rate of renal function decline per year with age in adults varies between two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3,946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected over up to 2,364 days (mean: 793 days). A simple linear regression and a random coefficient model were selected for the CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, i.e. slower when the repeated individual measurements were considered. The study confirms that the rates differ between statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the renal function decline rate with age, because its estimate depends strongly on the statistical analysis used. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
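The CS-versus-LT contrast can be illustrated with a small simulation (a hedged sketch with invented numbers, not the study's data): a cohort effect on baseline creatinine clearance biases the cross-sectional slope, while averaging within-subject slopes, as a random coefficient model effectively does, recovers the assumed true decline.

```python
# Hedged sketch (not the paper's code): cross-sectional (CS) fit on one
# observation per subject vs. a longitudinal (LT) estimate that averages
# within-subject slopes. All parameters below are invented.
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(0)
TRUE_DECLINE = -1.0   # assumed true within-subject decline, ml/min/year

cs_ages, cs_crcl = [], []   # one record per subject (CS data)
lt_slopes = []              # per-subject slopes (LT building block)
for _ in range(500):
    age0 = random.uniform(30, 85)
    # Cohort effect: older cohorts start from a lower baseline, which
    # contaminates the cross-sectional slope but not within-subject slopes.
    b = 150 - 0.4 * (age0 - 30) + random.gauss(0, 5)
    ages = [age0 + t for t in range(4)]        # 4 visits, 1 year apart
    crcl = [b + TRUE_DECLINE * a + random.gauss(0, 2) for a in ages]
    cs_ages.append(ages[0]); cs_crcl.append(crcl[0])
    lt_slopes.append(ols_slope(ages, crcl))

cs_rate = ols_slope(cs_ages, cs_crcl)          # biased by the cohort effect
lt_rate = sum(lt_slopes) / len(lt_slopes)      # close to TRUE_DECLINE
print(round(cs_rate, 2), round(lt_rate, 2))
```

The longitudinal estimate stays near the assumed -1.0 ml/min/year, while the cross-sectional slope absorbs the cohort effect and overstates the decline, mirroring the direction of the difference reported above.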

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fuyu; Collins, William D.; Wehner, Michael F.

High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution on tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and emulate better storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.

  19. PEPA test: fast and powerful differential analysis from relative quantitative proteomics data using shared peptides.

    PubMed

    Jacob, Laurent; Combes, Florence; Burger, Thomas

    2018-06-18

We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Owing to sequence homology, different proteins can yield peptides with identical amino acid chains, so that the parent protein is ambiguous. These so-called shared peptides make protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in time linear in the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.

  20. Cluster mass inference via random field theory.

    PubMed

    Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D

    2009-01-01

    Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
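The difference between the two summaries can be seen in a toy 1-D form (a hedged sketch with invented values, not the paper's RFT machinery): extent counts suprathreshold voxels per cluster, while mass sums their excess intensity, so a small but intense cluster can out-score a wide but shallow one.

```python
# Hedged 1-D illustration of cluster extent vs. cluster mass on a
# thresholded statistic map. The threshold and "image" are invented;
# real inference operates on 3-D t-maps with random field theory.
THRESH = 2.3
stat_map = [0.5, 2.8, 3.1, 2.5, 0.9, 0.1, 4.9, 0.7]  # toy 1-D "image"

# Group contiguous suprathreshold voxels into clusters.
clusters, current = [], []
for v in stat_map:
    if v > THRESH:
        current.append(v)
    elif current:
        clusters.append(current)
        current = []
if current:
    clusters.append(current)

extents = [len(c) for c in clusters]                       # cluster extent
masses = [round(sum(x - THRESH for x in c), 2) for c in clusters]  # mass
print(extents, masses)  # → [3, 1] [1.5, 2.6]
```

Here the first cluster is wider (extent 3 vs. 1), but the focal intense peak carries more mass (2.6 vs. 1.5), which is why mass leverages the strengths of both extent and intensity.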

  1. Agriculture, population growth, and statistical analysis of the radiocarbon record.

    PubMed

    Zahid, H Jabran; Robinson, Erick; Kelly, Robert L

    2016-01-26

    The human population has grown significantly since the onset of the Holocene about 12,000 y ago. Despite decades of research, the factors determining prehistoric population growth remain uncertain. Here, we examine measurements of the rate of growth of the prehistoric human population based on statistical analysis of the radiocarbon record. We find that, during most of the Holocene, human populations worldwide grew at a long-term annual rate of 0.04%. Statistical analysis of the radiocarbon record shows that transitioning farming societies experienced the same rate of growth as contemporaneous foraging societies. The same rate of growth measured for populations dwelling in a range of environments and practicing a variety of subsistence strategies suggests that the global climate and/or endogenous biological factors, not adaptability to local environment or subsistence practices, regulated the long-term growth of the human population during most of the Holocene. Our results demonstrate that statistical analyses of large ensembles of radiocarbon dates are robust and valuable for quantitatively investigating the demography of prehistoric human populations worldwide.

  2. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.

  3. Median statistics estimates of Hubble and Newton's constants

    NASA Astrophysics Data System (ADS)

    Bethapudi, Suryarao; Desai, Shantanu

    2017-02-01

Robustness of any statistic depends upon the number of assumptions it makes about the measured data. We point out the advantages of median statistics using toy numerical experiments and demonstrate its robustness when the number of assumptions we can make about the data is limited. We then apply the median statistics technique to obtain estimates of two constants of nature, the Hubble constant (H0) and Newton's gravitational constant (G), both of which show significant differences between different measurements. For H0, we update the analyses done by Chen and Ratra (2011) and Gott et al. (2001) using 576 measurements. We find, after grouping the different results according to their primary type of measurement, that the median estimates are H0 = 72.5^{+2.5}_{-8} km/sec/Mpc, with errors corresponding to 95% c.l. (2σ), and G = 6.674702^{+0.0014}_{-0.0009} × 10^{-11} N m^2 kg^{-2}, corresponding to 68% c.l. (1σ).
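The kind of toy numerical experiment the abstract mentions can be sketched as follows (a hedged illustration with invented measurement values, not the paper's 576-measurement compilation): one grossly wrong measurement drags the mean but barely moves the median.

```python
# Hedged toy sketch of median-statistics robustness. The H0-like values
# below are invented for illustration only.
h0 = [67.4, 69.8, 70.0, 71.9, 72.5, 73.2, 73.8, 74.0, 75.1]

def median(xs):
    """Sample median without any distributional assumption."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

mean_clean = sum(h0) / len(h0)
med_clean = median(h0)

h0_outlier = h0 + [150.0]      # one grossly wrong measurement
mean_out = sum(h0_outlier) / len(h0_outlier)
med_out = median(h0_outlier)

print(round(mean_clean, 2), round(mean_out, 2))  # mean shifts noticeably
print(med_clean, round(med_out, 2))              # median barely moves
```

The median needs only statistical independence and the absence of a global systematic error, which is why it remains useful when little else can be assumed about the error distributions.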

  4. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an implementation to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
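Quantile regression itself requires an optimizer, but the core reason it tolerates non-detects can be shown with the median alone. In this hedged toy sketch (invented concentrations and a hypothetical detection limit), the median is identical under every common substitution choice for the censored values, while the mean changes with each choice:

```python
# Hedged sketch: 4 of 10 observations fall below an assumed detection
# limit LOD = 5.0. The detected values are invented for illustration.
LOD = 5.0
detects = [6.1, 7.4, 8.0, 9.3, 12.5, 20.1]   # measured values
n_nd = 4                                     # number of non-detects

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

means, medians = [], []
for fill in (0.0, LOD / 2, LOD):   # common substitution choices
    data = [fill] * n_nd + detects
    means.append(round(sum(data) / len(data), 2))
    medians.append(median(data))

print(means)    # mean depends on the (arbitrary) substitution
print(medians)  # median is the same in all three cases
```

As long as fewer than half of the observations are censored, the median (and higher quantiles, correspondingly) is untouched by how the non-detects are coded, which is the property quantile regression generalizes to regression models.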

  5. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

This research was development research aimed at producing a Statistics textbook model supported with information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for mathematics students at the college level, to improve students' ability in mathematical connection and communication. There were three stages in this research, i.e. define, design, and develop. The textbook consists of 10 chapters, each containing an introduction, core material, examples, and exercises. The development phase began with an early draft of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which then underwent a limited readability test. Furthermore, revision of draft 2 produced draft 3, which was trialled on a small sample to produce a valid textbook model. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and Portfolio-Based Assessment is valid and fulfils the criteria of practicality.

  6. Texture Profile and Color Determination on Local and Imported Meat Available in Semarang City, Indonesia

    NASA Astrophysics Data System (ADS)

    Arifin, Mukh; Ni'matullah Al-Baarri, Ahmad; Etza Setiani, Bhakti; Fazriyati Siregar, Risa

    2018-02-01

This study analysed the texture profile and colour performance of local and imported meat in Semarang, Indonesia. The two types of available meat were compared on hardness, cohesiveness, springiness, adhesiveness, and colour (L*a*b*) performance. Five fresh beef round cuts each of local and imported meat were used in the experiments. Data were analysed statistically using a t-test. The results showed that local beef exhibited significantly higher springiness than imported beef. The colour analysis showed that imported beef had a significantly higher L* value than local beef. In conclusion, these values highlight notable differences between local and imported meat and may inform user preferences for further application in meat processing.

  7. Lungworm Infections in German Dairy Cattle Herds — Seroprevalence and GIS-Supported Risk Factor Analysis

    PubMed Central

    Schunn, Anne-Marie; Conraths, Franz J.; Staubach, Christoph; Fröhlich, Andreas; Forbes, Andrew; Strube, Christina

    2013-01-01

    In November 2008, a total of 19,910 bulk tank milk (BTM) samples were obtained from dairy farms from all over Germany, corresponding to about 20% of all German dairy herds, and analysed for antibodies against the bovine lungworm Dictyocaulus viviparus by use of the recombinant MSP-ELISA. A total number of 3,397 (17.1%; n = 19,910) BTM samples tested seropositive. The prevalences in individual German federal states varied between 0.0% and 31.2% positive herds. A geospatial map was drawn to show the distribution of seropositive and seronegative herds per postal code area. ELISA results were further analysed for associations with land-use and climate data. Bivariate statistical analysis was used to identify potential spatial risk factors for dictyocaulosis. Statistically significant positive associations were found between lungworm seropositive herds and the proportion of water bodies and grassed area per postal code area. Variables that showed a statistically significant association with a positive BTM test were included in a logistic regression model, which was further refined by controlled stepwise selection of variables. The low Pseudo R2 values (0.08 for the full model and 0.06 for the final model) and further evaluation of the model by ROC analysis indicate that additional, unrecorded factors (e.g. management factors) or random effects may substantially contribute to lungworm infections in dairy cows. Veterinarians should include lungworms in the differential diagnosis of respiratory disease in dairy cattle, particularly those at pasture. Monitoring of herds through BTM screening for antibodies can help farmers and veterinarians plan and implement appropriate control measures. PMID:24040243

  8. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  9. DISSCO: direct imputation of summary statistics allowing covariates

    PubMed Central

    Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun

    2015-01-01

    Background: Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), that assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. Methods: We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). Results: We consider two real-life scenarios where the correlation and partial correlation likely make practical difference: (i) association studies in admixed populations; (ii) association studies in presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9–15.2% for variants with minor allele frequency <5%. Availability and implementation: http://www.unc.edu/∼yunmli/DISSCO. 
Contact: yunli@med.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25810429
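The covariate effect at the heart of DISSCO can be illustrated with the standard first-order partial-correlation identity (a hedged sketch with simulated stand-in data, not the package's code): when two variables are both driven by a shared covariate, their raw correlation is large, but the partial correlation controlling for the covariate is near zero.

```python
# Hedged sketch of first-order partial correlation: the correlation
# between x and y after controlling for a covariate z. The data are
# simulated stand-ins, not genotypes or summary statistics.
import math
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sab = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return sab / (sa * sb)

def partial_corr(x, y, z):
    """r_{xy.z} = (r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

random.seed(1)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]           # shared covariate
x = [zi + random.gauss(0, 1) for zi in z]            # depends on z
y = [zi + random.gauss(0, 1) for zi in z]            # depends on z

print(round(corr(x, y), 2), round(partial_corr(x, y, z), 2))
```

The raw correlation is substantial (about 0.5 by construction) while the partial correlation is close to zero, which is why methods that impute summary statistics from raw genotype correlations can go wrong in the presence of confounding covariates.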

  10. DISSCO: direct imputation of summary statistics allowing covariates.

    PubMed

    Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun

    2015-08-01

    Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), that assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). We consider two real-life scenarios where the correlation and partial correlation likely make practical difference: (i) association studies in admixed populations; (ii) association studies in presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9-15.2% for variants with minor allele frequency <5%. © The Author 2015. Published by Oxford University Press. All rights reserved. 
For Permissions, please e-mail: journals.permissions@oup.com.

  11. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    NASA Astrophysics Data System (ADS)

    Robichaud, A.; Ménard, R.

    2014-02-01

Multi-year objective analyses (OA) at high spatiotemporal resolution for the warm season (1 May to 31 October) are presented for ground-level ozone and for fine particulate matter (particles with diameter less than 2.5 microns; PM2.5). The OA used in this study combines model outputs from the Canadian air quality forecast suite with US and Canadian observations from various air quality surface monitoring networks. The analyses are based on an optimal interpolation (OI) with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The error statistics were estimated using a modified version of the Hollingsworth-Lönnberg (H-L) method and "tuned" using a χ2 (chi-square) diagnostic, a semi-empirical procedure that yields significantly better verification than no tuning. Successful cross-validation experiments were performed with an OA setup using 90% of the observations to build the objective analyses, with the remainder left out as an independent verification set. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface-derived or ground-based measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision, with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) for ozone and 0.7 μg m-3 (micrograms per cubic meter) for PM2.5, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. This paper focuses on two applications: (1) presenting long-term averages of OA and analysis increments as a form of summer climatology; and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs.
The results show that high percentiles of ozone and PM2.5 were both following a general decreasing trend in North America, with the eastern part of the United States showing the most widespread decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in the mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). Conversely, the low percentiles are generally rising for ozone, which may be linked to the intercontinental transport of increased emissions from emerging countries. After removing the decadal trend, the inter-annual fluctuations of the high percentiles are largely explained by the temperature fluctuations for ozone and to a lesser extent by precipitation fluctuations for PM2.5. More interesting is the economic short-term change (as expressed by the variation of the US gross domestic product growth rate), which explains 37% of the total variance of inter-annual fluctuations of PM2.5 and 15% in the case of ozone.
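The OI update at the core of such objective analyses can be sketched in single-point form (a hedged illustration with invented error variances; the paper's OI operates on full 2-D fields with spatial error covariances): the analysis blends the model background with the observation, weighted by their assumed error variances.

```python
# Hedged single-point sketch of an optimal-interpolation (OI) update.
# Numbers are invented for illustration only.
def oi_update(background, obs, var_b, var_o):
    """Blend a background value with an observation by error variance."""
    gain = var_b / (var_b + var_o)      # Kalman-type gain in [0, 1]
    analysis = background + gain * (obs - background)
    var_a = (1 - gain) * var_b          # analysis error variance
    return analysis, var_a

# Hypothetical ozone case: background 42 ppbv, observation 50 ppbv,
# background error variance 9, observation error variance 3.
ana, var_a = oi_update(42.0, 50.0, var_b=9.0, var_o=3.0)
print(round(ana, 2), round(var_a, 2))   # → 48.0 2.25
```

Because the observation is assumed more accurate than the background here, the analysis lands closer to the observation, and the analysis error variance falls below both input variances; the χ2 tuning mentioned above adjusts these assumed variances so the innovations are statistically consistent with them.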

  12. Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.

    PubMed

    Cole, Steve W; Galic, Zoran; Zack, Jerome A

    2003-09-22

    Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
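A minimal sketch of the conjoint-rule idea on synthetic fluorescence data. The thresholds here are arbitrary illustrative values; in the actual method, PRIM infers the thresholds by hold-out cross-validation:

```python
import numpy as np

rng = np.random.default_rng(5)
baseline = rng.lognormal(6.0, 1.0, 1000)              # raw fluorescence, condition A
treated = baseline * rng.lognormal(0.0, 0.3, 1000)    # condition B, measurement noise
treated[:50] *= 3.0                                   # 50 truly induced transcripts

fold = treated / baseline
# Boolean conjunction of a fold-induction threshold and a raw-signal threshold
# (2.0 and 200.0 are illustrative values, not PRIM-derived thresholds)
called = (fold > 2.0) & (np.maximum(baseline, treated) > 200.0)
```

The conjunction suppresses spurious fold-changes at low raw fluorescence, where ratios are dominated by noise, which is the rationale for combining the two measurements.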

  13. Effect of exercise on depression in university students: a meta-analysis of randomized controlled trials.

    PubMed

    Yan, Shi; Jin, YinZhe; Oh, YongSeok; Choi, YoungJun

    2016-06-01

    The aim of this study was to assess the effect of exercise on depression in university students. A systematic literature search was conducted in PubMed, EMBASE and the Cochrane library from their inception through December 10, 2014 to identify relevant articles. The heterogeneity across studies was examined by Cochran's Q statistic and the I2 statistic. Standardized mean difference (SMD) and 95% confidence interval (CI) were pooled to evaluate the effect of exercise on depression. Then, sensitivity and subgroup analyses were performed. In addition, publication bias was assessed by drawing a funnel plot. A total of 352 participants (154 cases and 182 controls) from eight trials were included. Our pooled result showed significant alleviation of depression after exercise (SMD=-0.50, 95% CI: -0.97 to -0.03, P=0.04) with significant heterogeneity (P=0.003, I2=67%). Sensitivity analyses showed that the pooled result may be unstable. Subgroup analysis indicated that sample size may be a source of heterogeneity. Moreover, no publication bias was observed in this study. Exercise may be an effective therapy for treating depression in university students. However, further clinical studies with rigorous designs and large samples focused on this specific population are warranted.
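The pooling pipeline described in the abstract (Cochran's Q, the I2 statistic, then a random-effects SMD via the DerSimonian-Laird estimator) can be reproduced in a few lines. The per-trial effect sizes below are invented for illustration, not the trials in this review:

```python
import numpy as np

# Hypothetical per-trial standardized mean differences and their variances
smd = np.array([-1.1, -0.2, -0.9, 0.1, -0.6])
var = np.array([0.09, 0.06, 0.12, 0.08, 0.10])

w = 1.0 / var                           # fixed-effect (inverse-variance) weights
smd_fe = np.sum(w * smd) / np.sum(w)
Q = np.sum(w * (smd - smd_fe) ** 2)     # Cochran's Q
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100       # % of variability due to heterogeneity

# DerSimonian-Laird between-study variance, then random-effects pooling
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (var + tau2)
smd_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (smd_re - 1.96 * se_re, smd_re + 1.96 * se_re)
```

A pooled CI that excludes zero corresponds to a significant effect in the abstract's sense; I2 above roughly 50% is conventionally read as substantial heterogeneity.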

  14. What do results from coordinate-based meta-analyses tell us?

    PubMed

    Albajes-Eizagirre, Anton; Radua, Joaquim

    2018-08-01

    Coordinate-based meta-analyses (CBMA) methods, such as Activation Likelihood Estimation (ALE) and Seed-based d Mapping (SDM), have become an invaluable tool for summarizing the findings of voxel-based neuroimaging studies. However, the progressive sophistication of these methods may have concealed two particularities of their statistical tests. Common univariate voxelwise tests (such as the t/z-tests used in SPM and FSL) detect voxels that activate, or voxels that show differences between groups. Conversely, the tests conducted in CBMA test for "spatial convergence" of findings, i.e., they detect regions where studies report "more peaks than in most regions", regions that activate "more than most regions do", or regions that show "larger differences between groups than most regions do". The first particularity is that these tests rely on two spatial assumptions (voxels are independent and have the same probability of having a "false" peak), whose violation may make their results either conservative or liberal, though fortunately current versions of ALE, SDM and some other methods consider these assumptions. The second particularity is that the use of these tests involves an important paradox: the statistical power to detect a given effect is higher if there are no other effects in the brain and lower in the presence of multiple effects. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Symptoms of depression as reported by Norwegian adolescents on the Short Mood and Feelings Questionnaire

    PubMed Central

    Lundervold, Astri J.; Breivik, Kyrre; Posserud, Maj-Britt; Stormark, Kjell Morten; Hysing, Mari

    2013-01-01

    The present study investigated sex differences in reports of depressive symptoms on a Norwegian translation of the short version of the Mood and Feelings Questionnaire (SMFQ). The sample comprised 9702 Norwegian adolescents (born 1993–1995, 54.9% girls), mainly attending high school. A set of statistical analyses was run to investigate the dimensionality of the SMFQ. Girls scored significantly higher than boys on the SMFQ and used the most severe response-category far more frequently. Overall, the statistical analyses supported the essential unidimensionality of the SMFQ. However, the items with the highest loadings according to the bifactor analysis, reflecting problems related to tiredness, restlessness and concentration difficulties, indicated that some of the symptoms may both be independent of and part of the symptomatology of depression. Measurement invariance analysis showed that girls scored slightly higher on some items when taking the latent variable into account; girls had a lower threshold for reporting mood problems and problems related to tiredness than boys, who showed a marginally lower threshold for reporting that no-one loved them. However, the effect on the total SMFQ score was marginal, supporting the use of the Norwegian translation of SMFQ as a continuous variable in further studies of adolescents. PMID:24062708

  16. Statistical assessment of changes in extreme maximum temperatures over Saudi Arabia, 1985-2014

    NASA Astrophysics Data System (ADS)

    Raggad, Bechir

    2018-05-01

    In this study, two statistical approaches were adopted in the analysis of observed maximum temperature data collected from fifteen stations over Saudi Arabia during the period 1985-2014. In the first step, the behavior of extreme temperatures was analyzed and their changes were quantified with respect to the Expert Team on Climate Change Detection Monitoring indices. The results showed a general warming trend over most stations, in maximum temperature-related indices, during the period of analysis. In the second step, stationary and non-stationary extreme-value analyses were conducted for the temperature data. The results revealed that the non-stationary model with an increasing linear trend in its location parameter outperforms the other models for two-thirds of the stations. Additionally, the 10-, 50-, and 100-year return levels were found to change considerably with time, so that for most stations a given maximum temperature recurs at a different T-year return period than a stationary analysis would suggest. This analysis shows the importance of taking into account the change over time in the estimation of return levels and therefore justifies the use of the non-stationary generalized extreme value distribution model to describe most of the data. Furthermore, these last findings are in line with the significant warming trends found in the climate indices analyses.
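The second step, fitting a generalized extreme value (GEV) distribution with a linear trend in the location parameter by maximum likelihood, can be sketched on synthetic annual maxima. Note that SciPy's genextreme uses the shape convention c = -ξ; all numbers below are assumptions:

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(1)
years = np.arange(30)
# Synthetic annual maximum temperatures with a warming trend in the location
x = genextreme.rvs(c=-0.1, loc=42.0 + 0.05 * years, scale=1.5, random_state=rng)

def nll(theta):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, sigma, c = theta
    if sigma <= 0:
        return np.inf
    return -genextreme.logpdf(x, c=c, loc=mu0 + mu1 * years, scale=sigma).sum()

res = minimize(nll, x0=[x.mean(), 0.0, x.std(), -0.1], method="Nelder-Mead")
mu0, mu1, sigma, c = res.x

def return_level(t, T=50):
    """Time-varying T-year return level under the fitted non-stationary model."""
    return genextreme.ppf(1 - 1 / T, c=c, loc=mu0 + mu1 * t, scale=sigma)
```

Comparing this model's likelihood against the stationary fit (mu1 fixed at 0), e.g. via a likelihood-ratio test or AIC, is the usual basis for the model-selection statement in the abstract.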

  17. Field Synopsis and Re-analysis of Systematic Meta-analyses of Genetic Association Studies in Multiple Sclerosis: a Bayesian Approach.

    PubMed

    Park, Jae Hyon; Kim, Joo Hi; Jo, Kye Eun; Na, Se Whan; Eisenhut, Michael; Kronbichler, Andreas; Lee, Keum Hwa; Shin, Jae Il

    2018-07-01

    To provide an up-to-date summary of multiple sclerosis-susceptible gene variants and assess the noteworthiness in hopes of finding true associations, we investigated the results of 44 meta-analyses on gene variants and multiple sclerosis published through December 2016. Out of 70 statistically significant genotype associations, roughly a fifth (21%) of the comparisons showed a noteworthy false-positive report probability (FPRP) at a statistical power to detect an OR of 1.5 and at a prior probability of 10^-6 assumed for a random single nucleotide polymorphism. These associations (IRF8/rs17445836, STAT3/rs744166, HLA/rs4959093, HLA/rs2647046, HLA/rs7382297, HLA/rs17421624, HLA/rs2517646, HLA/rs9261491, HLA/rs2857439, HLA/rs16896944, HLA/rs3132671, HLA/rs2857435, HLA/rs9261471, HLA/rs2523393, HLA-DRB1/rs3135388, RGS1/rs2760524, PTGER4/rs9292777) also showed a noteworthy Bayesian false discovery probability (BFDP) and one additional association (CD24 rs8734/rs52812045) was also noteworthy via BFDP computation. Herein, we have identified several noteworthy biomarkers of multiple sclerosis susceptibility. We hope these data are used to study multiple sclerosis genetics and inform future screening programs.
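The FPRP calculation (Wacholder et al.) underlying these noteworthiness calls can be sketched as follows. The standard error of the log-OR is an assumed value chosen so the hypothetical study has reasonable power to detect an OR of 1.5; it is not taken from the paper:

```python
from math import log
from scipy.stats import norm

def fprp(p_obs, or_alt=1.5, se_log_or=0.12, prior=1e-6):
    """False-positive report probability for an observed two-sided p-value,
    given a prior probability of association and the power to detect or_alt.
    se_log_or (standard error of the log odds ratio) is an assumed value."""
    z = norm.ppf(1 - p_obs / 2)            # critical value at the observed p
    shift = log(or_alt) / se_log_or        # non-centrality under the alternative
    power = (1 - norm.cdf(z - shift)) + norm.cdf(-z - shift)
    return p_obs * (1 - prior) / (p_obs * (1 - prior) + power * prior)
```

At a prior as small as 10^-6, only very small p-values yield a low FPRP, which is why so few of the 70 significant associations survive as "noteworthy".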

  18. Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview

    PubMed Central

    Golder, Su; Loke, Yoon K.; Bland, Martin

    2011-01-01

    Background There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies. Methods and Findings Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93–1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance. 
Conclusions Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types. Please see later in the article for the Editors' Summary PMID:21559325
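The core comparison, a ratio of pooled odds ratios with a confidence interval derived on the log scale, can be sketched as follows; the example estimates are hypothetical, not taken from the 58 meta-analyses:

```python
from math import exp, log, sqrt

def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs):
    """Ratio of pooled ORs (RCTs vs observational studies) with a 95% CI,
    converting each reported 95% CI to a standard error on the log scale."""
    se_rct = (log(ci_rct[1]) - log(ci_rct[0])) / (2 * 1.96)
    se_obs = (log(ci_obs[1]) - log(ci_obs[0])) / (2 * 1.96)
    log_ror = log(or_rct) - log(or_obs)
    se = sqrt(se_rct ** 2 + se_obs ** 2)
    return exp(log_ror), (exp(log_ror - 1.96 * se), exp(log_ror + 1.96 * se))

# Hypothetical pooled adverse-effect estimates for one comparison
ror, ci = ratio_of_odds_ratios(1.20, (0.90, 1.60), 1.15, (0.85, 1.55))
```

A pooled ratio of odds ratios near 1 with a CI spanning 1, as found in the overview (1.03, 95% CI 0.93-1.15), indicates no systematic difference between the two study designs.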

  19. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for most updated information and newly added models.

  20. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment

    PubMed Central

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.

    2014-01-01

    Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. 
We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability and implementation: Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:24990607
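The Gaussian imputation itself is the conditional mean of a multivariate normal under the LD (correlation) structure. A toy sketch with a hypothetical LD matrix, using a simple ridge term as a stand-in for the paper's correction for the limited reference-panel sample size:

```python
import numpy as np

# Hypothetical LD (correlation) matrix for 4 SNPs; SNPs 0, 1, 3 are typed,
# SNP 2 is untyped and will be imputed from the typed z-scores.
R = np.array([[1.0, 0.6, 0.5, 0.2],
              [0.6, 1.0, 0.7, 0.3],
              [0.5, 0.7, 1.0, 0.4],
              [0.2, 0.3, 0.4, 1.0]])
typed, untyped = [0, 1, 3], [2]
z_typed = np.array([4.1, 5.0, 1.2])    # observed association z-scores

lam = 0.1  # ridge term standing in for reference-panel sampling noise (assumed)
Sigma_oo = R[np.ix_(typed, typed)] + lam * np.eye(len(typed))
Sigma_uo = R[np.ix_(untyped, typed)]

z_imp = Sigma_uo @ np.linalg.solve(Sigma_oo, z_typed)       # conditional mean
r2_imp = Sigma_uo @ np.linalg.solve(Sigma_oo, Sigma_uo.T)   # imputation quality
```

Without the ridge term, sampling noise in an LD matrix estimated from a small panel can inflate the imputed z-scores, which is the false-positive risk the method's panel-size correction addresses.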

  1. How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels

    NASA Astrophysics Data System (ADS)

    Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.

    2016-12-01

    The knowledge of extreme coastal water levels is useful for coastal flooding studies or the design of coastal defences. While deriving such extremes with standard analyses using tide gauge measurements, one often needs to deal with a limited effective duration of observation, which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information on past events reported in archives can reduce statistical uncertainties and relativize such outlying observations. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier and we could have reasonably predicted the annual exceedance probability of that level beforehand (the predictive probability for 2010, based on data up to the end of 2009, is of the same order of magnitude as the standard estimate using data up to the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method to integrate heterogeneous data in such analyses.
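One simple way to see the mechanics (a maximum-likelihood sketch, not the Bayesian MCMC of Reis and Stedinger used in the work above): treat the archive record as binomial counts of threshold exceedances and add it to the GEV likelihood of the systematic gauge record. All numbers below are synthetic:

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# 30 years of systematic annual-maximum water levels (m), synthetic
gauge = genextreme.rvs(c=-0.1, loc=3.0, scale=0.3, size=30, random_state=rng)

# Historical info: over 120 earlier years, archives report 8 years in which
# the water level exceeded a 4.0 m perception threshold (assumed values)
hist_years, hist_exceed, threshold = 120, 8, 4.0

def nll(theta):
    """Joint negative log-likelihood: gauge record + censored archive record."""
    mu, sigma, c = theta
    if sigma <= 0:
        return np.inf
    ll = genextreme.logpdf(gauge, c=c, loc=mu, scale=sigma).sum()
    p_exc = genextreme.sf(threshold, c=c, loc=mu, scale=sigma)
    p_exc = np.clip(p_exc, 1e-12, 1 - 1e-12)
    # Binomial likelihood of the historical exceedance counts
    ll += hist_exceed * np.log(p_exc) + (hist_years - hist_exceed) * np.log(1 - p_exc)
    return -ll

res = minimize(nll, x0=[gauge.mean(), gauge.std(), -0.1], method="Nelder-Mead")
```

The archive term effectively lengthens the record from 30 to 150 years, which is what tightens the return-level uncertainties and relativizes an apparent outlier.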

  2. Nonlinear dynamic analysis of voices before and after surgical excision of vocal polyps

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; McGilligan, Clancy; Zhou, Liang; Vig, Mark; Jiang, Jack J.

    2004-05-01

    Phase space reconstruction, correlation dimension, and second-order entropy, methods from nonlinear dynamics, are used to analyze sustained vowels generated by patients before and after surgical excision of vocal polyps. Two conventional acoustic perturbation parameters, jitter and shimmer, are also employed to analyze voices before and after surgery. Presurgical and postsurgical analyses of jitter, shimmer, correlation dimension, and second-order entropy are statistically compared. Correlation dimension and second-order entropy show a statistically significant decrease after surgery, indicating reduced complexity and higher predictability of postsurgical voice dynamics. There is not a significant postsurgical difference in shimmer, although jitter shows a significant postsurgical decrease. The results suggest that jitter and shimmer should be applied to analyze disordered voices with caution; however, nonlinear dynamic methods may be useful for analyzing abnormal vocal function and quantitatively evaluating the effects of surgical excision of vocal polyps.
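The two perturbation measures the study cautions about are simple cycle-to-cycle statistics. One common formulation (several variants exist in the voice literature) is:

```python
import numpy as np

def jitter_percent(periods):
    """Mean absolute difference of consecutive cycle periods,
    relative to the mean period (in %)."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """The same cycle-to-cycle perturbation measure applied to peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```

Both measures presuppose that cycle boundaries can be reliably extracted, which is exactly what breaks down for strongly aperiodic disordered voices and motivates the nonlinear-dynamics measures above.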

  3. Keywords and Co-Occurrence Patterns in the Voynich Manuscript: An Information-Theoretic Analysis

    PubMed Central

    Montemurro, Marcelo A.; Zanette, Damián H.

    2013-01-01

    The Voynich manuscript has so far remained a mystery to linguists and cryptologists. While the text written on medieval parchment - using an unknown script system - shows basic statistical patterns that bear resemblance to those from real languages, there are features that suggested to some researchers that the manuscript was a forgery intended as a hoax. Here we analyse the long-range structure of the manuscript using methods from information theory. We show that the Voynich manuscript presents a complex organization in the distribution of words that is compatible with those found in real language sequences. We are also able to extract some of the most significant semantic word-networks in the text. These results, together with some previously known statistical features of the Voynich manuscript, give support to the presence of a genuine message inside the book. PMID:23805215
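The underlying information-theoretic idea, that position in the text carries information about which words occur, can be sketched as the mutual information between words and text sections. This is a simplified stand-in for the authors' estimator, which compares the observed value against randomly shuffled versions of the text:

```python
import math
from collections import Counter

def word_entropy(tokens):
    """Shannon entropy (bits) of the empirical word distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def partition_information(tokens, parts=2):
    """Information (bits/word) gained by knowing which section a word sits in.
    Clustered, keyword-like words raise this; shuffling drives it toward zero."""
    size = len(tokens) // parts
    used = tokens[:parts * size]
    chunks = [used[i * size:(i + 1) * size] for i in range(parts)]
    h_parts = sum((len(c) / len(used)) * word_entropy(c) for c in chunks)
    return word_entropy(used) - h_parts
```

A text whose content words cluster by topic scores well above its shuffled baseline; a random sequence of symbols does not, which is the sense in which the manuscript behaves like real language.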

  4. Inflated Uncertainty in Multimodel-Based Regional Climate Projections.

    PubMed

    Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg

    2017-11-28

    Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed by showing maps of the geographical distribution of the multimodel mean results and associated uncertainties represented by model spread at the grid point scale. Here we use a set of CMIP5 models to show that presenting statistics this way results in an overestimation of the projected range leading to physically implausible patterns of change on global but also on regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example, when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales as climate change impacts will never be realized as the worst (or best) case everywhere.

  5. Why Flash Type Matters: A Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Mecikalski, Retha M.; Bitzer, Phillip M.; Carey, Lawrence D.

    2017-09-01

    While the majority of research only differentiates between intracloud (IC) and cloud-to-ground (CG) flashes, there exists a third flash type, known as hybrid flashes. These flashes have extensive IC components as well as return strokes to ground but are misclassified as CG flashes in current flash type analyses due to the presence of a return stroke. In an effort to show that IC, CG, and hybrid flashes should be separately classified, the two-sample Kolmogorov-Smirnov (KS) test was applied to the flash sizes, flash initiation, and flash propagation altitudes for each of the three flash types. The KS test statistically showed that IC, CG, and hybrid flashes do not have the same parent distributions and thus should be separately classified. Separate classification of hybrid flashes will lead to improved lightning-related research, because unambiguously classified hybrid flashes occur on the same order of magnitude as CG flashes for multicellular storms.
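The two-sample KS comparison reported above is a one-liner with SciPy. The altitude samples below are synthetic stand-ins for the paper's flash data, drawn from deliberately different distributions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# Synthetic flash initiation altitudes (km) for two flash types
ic_alt = rng.normal(9.0, 1.5, 500)       # intracloud flashes
hybrid_alt = rng.normal(7.5, 1.8, 300)   # hybrid flashes

stat, p = ks_2samp(ic_alt, hybrid_alt)
# A small p-value rejects "same parent distribution",
# supporting separate classification of the two flash types
```

The KS statistic is the maximum distance between the two empirical CDFs, so it is sensitive to differences in location, spread, and shape without assuming normality.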

  6. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    PubMed

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Behavior, Sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation

    PubMed Central

    Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.

    2016-01-01

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606

  8. Prognostic Significance of Blood Transfusion in Elderly Patients with Primary Diffuse Large B-Cell Lymphoma

    PubMed Central

    Fan, Liping; Fu, Danhui; Hong, Jinquan; He, Wenqian; Zeng, Feng; Lin, Qiuyan; Xie, Qianling

    2018-01-01

    The current study sought to evaluate whether blood transfusions affect survival of elderly patients with primary diffuse large B-cell lymphoma (DLBCL). A total of 104 patients aged 60 years and over were enrolled and divided into two groups: 24 patients who received transfusions and 80 patients who did not. Statistical analyses showed significant differences in LDH levels, platelet (Plt) counts, and hemoglobin (Hb) and albumin (Alb) levels between the two groups. Univariate analyses showed that LDH level ≥ 245 IU/L, cell of origin (germinal center/nongerminal center), and blood transfusion were associated with both overall survival (OS) and progression-free survival (PFS). Higher IPI (3–5), Alb level < 35 g/L, and rituximab usage were associated with OS. Appearance of B symptoms was associated with PFS. Multivariate analyses showed that cell of origin and rituximab usage were independent factors for OS and LDH level was an independent factor for PFS. Blood transfusion was an independent factor for PFS, but not for OS. Our preliminary results suggested that elderly patients with primary DLBCL may benefit from a restrictive blood transfusion strategy. PMID:29750167

  9. Effects of Psychological and Social Work Factors on Self-Reported Sleep Disturbance and Difficulties Initiating Sleep

    PubMed Central

    Vleeshouwers, Jolien; Knardahl, Stein; Christensen, Jan Olav

    2016-01-01

    Study Objectives: This prospective cohort study examined previously underexplored relations between psychological/social work factors and troubled sleep in order to provide practical information about specific, modifiable factors at work. Methods: A comprehensive evaluation of a range of psychological/social work factors was obtained by several designs; i.e., cross-sectional analyses at baseline and follow-up, prospective analyses with baseline predictors (T1), prospective analyses with average exposure across waves as predictor ([T1 + T2] / 2), and prospective analyses with change in exposure from baseline to follow-up as predictor. Participants consisted of a sample of Norwegian employees from a broad spectrum of occupations, who completed a questionnaire at two points in time, approximately two years apart. Cross-sectional analyses at T1 comprised 7,459 participants, cross-sectional analyses at T2 included 6,688 participants. Prospective analyses comprised a sample of 5,070 participants who responded at both T1 and T2. Univariable and multivariable ordinal logistic regressions were performed. Results: Thirteen psychological/social work factors and two aspects of troubled sleep, namely difficulties initiating sleep and disturbed sleep, were studied. Ordinal logistic regressions revealed statistically significant associations for all psychological and social work factors in at least one of the analyses. Psychological and social work factors predicted sleep problems in the short term as well as the long term. Conclusions: All work factors investigated showed statistically significant associations with both sleep items; however, quantitative job demands, decision control, role conflict, and support from superior were the most robust predictors and may therefore be suitable targets of interventions aimed at improving employee sleep. Citation: Vleeshouwers J, Knardahl S, Christensen JO. 
Effects of psychological and social work factors on self-reported sleep disturbance and difficulties initiating sleep. SLEEP 2016;39(4):833–846. PMID:26446114

  10. Prospects of Fine-Mapping Trait-Associated Genomic Regions by Using Summary Statistics from Genome-wide Association Studies.

    PubMed

    Benner, Christian; Havulinna, Aki S; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ripatti, Samuli; Pirinen, Matti

    2017-10-05

    During the past few years, various novel statistical methods have been developed for fine-mapping with the use of summary statistics from genome-wide association studies (GWASs). Although these approaches require information about the linkage disequilibrium (LD) between variants, there has not been a comprehensive evaluation of how estimation of the LD structure from reference genotype panels performs in comparison with that from the original individual-level GWAS data. Using population genotype data from Finland and the UK Biobank, we show here that a reference panel of 1,000 individuals from the target population is adequate for a GWAS cohort of up to 10,000 individuals, whereas smaller panels, such as those from the 1000 Genomes Project, should be avoided. We also show, both theoretically and empirically, that the size of the reference panel needs to scale with the GWAS sample size; this has important consequences for the application of these methods in ongoing GWAS meta-analyses and large biobank studies. We conclude by providing software tools and by recommending practices for sharing LD information to more efficiently exploit summary statistics in genetics research. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  11. Profiling agricultural land cover change in the North Central U.S. using ten years of the Cropland Data Layer

    NASA Astrophysics Data System (ADS)

    Sandborn, A.; Ebinger, L.

    2016-12-01

    The Cropland Data Layer (CDL), produced by the USDA/National Agricultural Statistics Service, provides annual, georeferenced crop specific land cover data over the contiguous United States. Several analyses were performed on ten years (2007-2016) of CDL data in order to visualize and quantify agricultural change over the North Central region (North Dakota, South Dakota, and Minnesota). Crop masks were derived from the CDL and layered to produce a ten-year time stack of corn, soybeans, and spring wheat at 30m spatial resolution. Through numerous image analyses, a temporal profile of each crop type was compiled and portrayed cartographically. For each crop, analyses included calculating the mean center of crop area over the ten year sequence, identifying the first and latest year the crop was grown on each pixel, and distinguishing crop rotation patterns and replacement statistics. Results show a clear north-western expansion trend for corn and soybeans, and a western migration trend for spring wheat. While some change may be due to commonly practiced crop rotation, this analysis shows that crop footprints have extended into areas that were previously other crops, idle cropland, and pasture/rangeland. Possible factors contributing to this crop migration pattern include profit advantages of row crops over small grains, improved crop genetics, climate change, and farm management program changes. Identifying and mapping these crop planting differences will better inform agricultural best practices, help to monitor the latest crop migration patterns, and present researchers with a way to quantitatively measure and forecast future agricultural trends.
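
The "mean center of crop area" statistic can be sketched as the average coordinate of all pixels in a crop mask, tracked year by year. The grid, coordinates, and drifting footprint below are hypothetical stand-ins for CDL crop masks.

```python
import numpy as np

def mean_center(mask, x_coords, y_coords):
    """Area-weighted centroid of a boolean crop mask on a regular grid."""
    ys, xs = np.nonzero(mask)
    return x_coords[xs].mean(), y_coords[ys].mean()

# Toy 100x100 grid; the crop footprint drifts west (-x) and north (+y),
# mimicking the reported north-western expansion of corn and soybeans.
x = np.linspace(-104.0, -96.0, 100)   # longitude-like axis
y = np.linspace(44.0, 49.0, 100)      # latitude-like axis
centers = []
for year in range(10):
    cx, cy = 50 - 2 * year, 50 + 2 * year      # footprint center, in pixels
    mask = np.zeros((100, 100), dtype=bool)
    mask[max(cy - 5, 0):cy + 5, max(cx - 5, 0):cx + 5] = True
    centers.append(mean_center(mask, x, y))

lons, lats = zip(*centers)
print(f"lon drift: {lons[0]:.2f} -> {lons[-1]:.2f}; "
      f"lat drift: {lats[0]:.2f} -> {lats[-1]:.2f}")
```

On real CDL rasters the same computation would run over the georeferenced pixel centers of each annual crop mask.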

  12. Manual therapy compared with physical therapy in patients with non-specific neck pain: a randomized controlled trial.

    PubMed

    Groeneweg, Ruud; van Assen, Luite; Kropman, Hans; Leopold, Huco; Mulder, Jan; Smits-Engelsman, Bouwien C M; Ostelo, Raymond W J G; Oostendorp, Rob A B; van Tulder, Maurits W

    2017-01-01

    Manual therapy according to the School of Manual Therapy Utrecht (MTU) is a specific type of passive manual joint mobilization. MTU has not yet been systematically compared to other manual therapies and physical therapy. In this study the effectiveness of MTU is compared to physical therapy, particularly active exercise therapy (PT), in patients with non-specific neck pain. Patients with neck pain, aged 18-70 years, were included in a pragmatic randomized controlled trial with a one-year follow-up. Primary outcome measures were global perceived effect and functioning (Neck Disability Index); the secondary outcome was pain intensity (Numeric Rating Scale for Pain). Outcomes were measured at 3, 7, 13, 26 and 52 weeks. Multilevel analyses (intention-to-treat) were the primary analyses for overall between-group differences. In addition to the primary and secondary outcomes, the number of treatment sessions in the MTU and PT groups was analyzed. Data were collected from September 2008 to February 2011. A total of 181 patients were included. Multilevel analyses showed no statistically significant overall differences at one year between the MTU and PT groups on any of the primary and secondary outcomes. The MTU group required significantly fewer treatment sessions than the PT group (respectively 3.1 vs. 5.9 after 7 weeks; 6.1 vs. 10.0 after 52 weeks). Patients with neck pain improved in both groups, without statistically significant or clinically relevant differences between the MTU and PT groups during one-year follow-up. ClinicalTrials.gov Identifier: NCT00713843.

  13. Effects of botulinum toxin A therapy and multidisciplinary rehabilitation on upper and lower limb spasticity in post-stroke patients.

    PubMed

    Hara, Takatoshi; Abo, Masahiro; Hara, Hiroyoshi; Kobayashi, Kazushige; Shimamoto, Yusuke; Samizo, Yuta; Sasaki, Nobuyuki; Yamada, Naoki; Niimi, Masachika

    2017-06-01

    The purpose of this study was to examine the effects of combined botulinum toxin type A (BoNT-A) and inpatient multidisciplinary (MD) rehabilitation therapy on the improvement of upper and lower limb function in post-stroke patients. In this retrospective study, a 12-day inpatient treatment protocol was implemented on 51 post-stroke patients with spasticity. Assessments were performed on the day of admission, at discharge, and at 3 months following discharge. At the time of discharge, all of the evaluated items showed a statistically significant improvement. Only the Functional Reach Test (FRT) showed a statistically significant improvement at 3 months. In subgroup analyses, the slowest walking speed group showed a significantly greater change ratio of the 10 Meter Walk Test relative to the other groups, from the time of admission to discharge. This group showed a greater FRT change ratio than the other groups from the time of admission to the 3-month follow-up. Inpatient therapy combining simultaneous BoNT-A injections to the upper and lower limbs with MD rehabilitation may improve motor function.

  14. Crimes against the elderly in Italy, 2007-2014.

    PubMed

    Terranova, Claudio; Bevilacqua, Greta; Zen, Margherita; Montisci, Massimo

    2017-08-01

    Crimes against the elderly have physical, psychological, and economic consequences. Approaches for mitigating them must be based on comprehensive knowledge of the phenomenon. This study analyses crimes against the elderly in Italy during the period 2007-2014 from an epidemiological viewpoint. Data on violent and non-violent crimes derived from the Italian Institute of Statistics were analysed in relation to trends, gender and age by linear regression, t-test, and calculation of the odds ratio with a 95% confidence interval. Results show that the elderly were at higher risk of being victimized in two types of crime, violent (residential robbery) and non-violent (pick-pocketing and purse-snatching), compared with other age groups during the period considered. A statistically significant increase in residential robbery and pick-pocketing was also observed. The rate of homicide against the elderly was stable during the study period, in contrast with reduced rates in other age groups. These results may be explained by risk factors increasing the profiles of elderly individuals as potential victims, such as frailty, cognitive impairment, and social isolation. Further studies analysing the characteristics of victims are required. Based on the results presented here, appropriate preventive strategies should be planned to reduce crimes against the elderly. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
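
The odds ratio with a 95% confidence interval used here follows a standard 2x2-table calculation (Woolf's log-normal interval). The counts below are hypothetical, purely to show the arithmetic.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: elderly vs. younger victims of residential robbery.
or_, lo, hi = odds_ratio_ci(120, 880, 300, 4700)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval lying entirely above 1 would indicate a statistically elevated victimization risk for the exposed (elderly) group.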

  15. LC–MS/MS Quantitation of Esophagus Disease Blood Serum Glycoproteins by Enrichment with Hydrazide Chemistry and Lectin Affinity Chromatography

    PubMed Central

    2015-01-01

    Changes in glycosylation have been shown to have a profound correlation with development/malignancy in many cancer types. Currently, two major enrichment techniques have been widely applied in glycoproteomics, namely, lectin affinity chromatography (LAC)-based and hydrazide chemistry (HC)-based enrichments. Here we report the LC–MS/MS quantitative analyses of human blood serum glycoproteins and glycopeptides associated with esophageal diseases by LAC- and HC-based enrichment. The separate and complementary qualitative and quantitative data analyses of protein glycosylation were performed using both enrichment techniques. Chemometric and statistical evaluations (PCA plots and ANOVA tests, respectively) were employed to determine and confirm candidate cancer-associated glycoprotein/glycopeptide biomarkers. Of the 139 glycoproteins identified, 59 (a 42% overlap) were observed with both enrichment techniques. This overlap is very similar to that reported in previously published studies. The quantitation and evaluation of significantly changed glycoproteins/glycopeptides are complementary between LAC and HC enrichments. LC–ESI–MS/MS analyses indicated that 7 glycoproteins enriched by LAC and 11 glycoproteins enriched by HC showed significantly different abundances between disease-free and disease cohorts. Multiple reaction monitoring quantitation found 13 glycopeptides by LAC enrichment and 10 glycosylation sites by HC enrichment to be statistically different among disease cohorts. PMID:25134008
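
The PCA step of such a chemometric evaluation can be sketched via SVD of a mean-centered abundance matrix. The sample counts, protein counts, and cohort shifts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical abundance matrix: 30 samples (3 cohorts of 10) x 20
# glycoproteins, with a cohort-dependent shift on the first 5 proteins.
n_per, n_prot = 10, 20
shifts = np.repeat([0.0, 2.0, 4.0], n_per)[:, None]
X = rng.normal(size=(3 * n_per, n_prot))
X[:, :5] += shifts                     # disease-associated proteins

# PCA via SVD of the mean-centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                         # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)        # variance fraction per component

print(f"PC1 explains {explained[0]:.0%} of the variance")
```

In a PCA plot of the first two score columns, the cohorts separate along PC1, which is the visual evidence used to nominate candidate biomarkers before confirmation by ANOVA.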

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shettel, D.L. Jr.; Langfeldt, S.L.; Youngquist, C.A.

    This report presents a Hydrogeochemical and Stream Sediment Reconnaissance of the Christian NTMS Quadrangle, Alaska. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. These machine-readable data, as well as quarterly or semiannual program progress reports containing further information on the HSSR program in general, or on the Los Alamos National Laboratory portion of the program in particular, are available from DOE's Technical Library at its Grand Junction Area Office. Presented in this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume; these data are, however, available on the magnetic tape. Appendices A through D describe the sample media and summarize the analytical results for each medium. The data have been subdivided by one of the Los Alamos National Laboratory sorting programs of Zinkl and others (1981a) into groups of stream-sediment, lake-sediment, stream-water, lake-water, and ground-water samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1,000,000 scale maps of pertinent elements have been included in this report. Also included are maps showing results of multivariate statistical analyses.

  17. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  18. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspect of the statistical analyses used in Dommenget and Latif, which raises concerns about some of the reported climate modes, is demonstrated. Adopting simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown, and the limitations of varimax and regression analyses in capturing this climate mode are then discussed.

  19. Finnish upper secondary students' collaborative processes in learning statistics in a CSCL environment

    NASA Astrophysics Data System (ADS)

    Kaleva Oikarinen, Juho; Järvelä, Sanna; Kaasila, Raimo

    2014-04-01

    This design-based research project focuses on documenting statistical learning among 16-17-year-old Finnish upper secondary school students (N = 78) in a computer-supported collaborative learning (CSCL) environment. One novel value of this study is in reporting the shift from teacher-led mathematical teaching to autonomous small-group learning in statistics. The main aim of this study is to examine how student collaboration occurs in learning statistics in a CSCL environment. The data include material from videotaped classroom observations and the researcher's notes. In this paper, the inter-subjective phenomena of students' interactions in a CSCL environment are analysed by using a contact summary sheet (CSS). The development of the multi-dimensional coding procedure of the CSS instrument is presented. Aptly selected video episodes were transcribed and coded in terms of conversational acts, which were divided into non-task-related and task-related categories to depict students' levels of collaboration. The results show that collaborative learning (CL) can facilitate cohesion and responsibility and reduce students' feelings of detachment in our classless, periodic school system. The interactive .pdf material and collaboration in small groups enable statistical learning. It is concluded that CSCL is one possible method of promoting statistical teaching. CL using interactive materials seems to foster and facilitate statistical learning processes.

  20. The statistical reporting quality of articles published in 2010 in five dental journals.

    PubMed

    Vähänikkilä, Hannu; Tjäderhane, Leo; Nieminen, Pentti

    2015-01-01

    Statistical methods play an important role in medical and dental research. In earlier studies it has been observed that the current use of methods and reporting of statistics are responsible for some of the errors in the interpretation of results. The aim of this study was to investigate the quality of statistical reporting in dental research articles. A total of 200 articles published in 2010 were analysed, covering five dental journals: Journal of Dental Research, Caries Research, Community Dentistry and Oral Epidemiology, Journal of Dentistry and Acta Odontologica Scandinavica. Each paper underwent careful scrutiny for the use of statistical methods and reporting. A paper with at least one poor reporting item was classified as having 'problems with reporting statistics' and a paper without any poor reporting items as 'acceptable'. The investigation showed that 18 (9%) papers were acceptable and 182 (91%) papers contained at least one poor reporting item. The proportion of papers with at least one poor reporting item in this survey was high (91%). The authors of dental journals should be encouraged to improve the statistical sections of their research articles and to present the results in such a way that it is in line with the policy and presentation of the leading dental journals.

  1. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied, nor have methods been designed to correct for population structure in GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants.
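
The mixed-model idea can be sketched as generalized least squares under a phenotype covariance built from a relatedness (kinship) matrix. Everything below is a hypothetical toy: the block kinship, variance components, and effect sizes are invented, and real GWAS software estimates the variance components rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two subpopulations -> crude block-structured kinship matrix K.
n = 200
pop = np.repeat([0, 1], n // 2)
K = 0.5 * (pop[:, None] == pop[None, :]).astype(float) + 0.5 * np.eye(n)

g = rng.binomial(2, 0.3, size=n).astype(float)      # genotype dosage
env = rng.normal(size=n)                            # environmental exposure
X = np.column_stack([np.ones(n), g, env, g * env])  # includes the GEI term

sg2, se2 = 0.6, 0.4                                 # assumed variance components
V = sg2 * K + se2 * np.eye(n)                       # phenotype covariance
L = np.linalg.cholesky(V)

# Simulate y with a true GEI effect of 0.5 plus structured noise.
beta_true = np.array([0.0, 0.2, 0.3, 0.5])
y = X @ beta_true + L @ rng.normal(size=n)

# GLS: beta = (X' V^-1 X)^-1 X' V^-1 y accounts for the structure in V.
Vi_X = np.linalg.solve(V, X)
beta_gls = np.linalg.solve(X.T @ Vi_X, X.T @ np.linalg.solve(V, y))
print(f"GEI estimate: {beta_gls[3]:.2f} (true 0.5)")
```

Modelling the correlated noise through V is what prevents population stratification from inflating the GEI test statistic.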

  2. Trends in selected streamflow statistics at 19 long-term streamflow-gaging stations indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico, 1922-2009

    USGS Publications Warehouse

    Barbie, Dana L.; Wehmeyer, Loren L.

    2012-01-01

    Trends in selected streamflow statistics during 1922-2009 were evaluated at 19 long-term streamflow-gaging stations considered indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico. The U.S. Geological Survey, in cooperation with the Texas Water Development Board, evaluated streamflow data from streamflow-gaging stations with more than 50 years of record that were active as of 2009. The outflows into Arkansas and Louisiana were represented by 3 streamflow-gaging stations, and outflows into the Gulf of Mexico, including Galveston Bay, were represented by 16 streamflow-gaging stations. Monotonic trend analyses were done using the following three streamflow statistics generated from daily mean values of streamflow: (1) annual mean daily discharge, (2) annual maximum daily discharge, and (3) annual minimum daily discharge. The trend analyses were based on the nonparametric Kendall's Tau test, which is useful for the detection of monotonic upward or downward trends with time. A total of 69 trend analyses by Kendall's Tau were computed - 19 periods of streamflow multiplied by the 3 streamflow statistics plus 12 additional trend analyses because the periods of record for 2 streamflow-gaging stations were divided into periods representing pre- and post-reservoir impoundment. Unless otherwise described, each trend analysis used the entire period of record for each streamflow-gaging station. The monotonic trend analysis detected 11 statistically significant downward trends, 37 instances of no trend, and 21 statistically significant upward trends. One general region studied, which seemingly has relatively more upward trends for many of the streamflow statistics analyzed, includes the rivers and associated creeks and bayous to Galveston Bay in the Houston metropolitan area. 
Lastly, the westernmost river basins considered (the Nueces and Rio Grande) had statistically significant downward trends for many of the streamflow statistics analyzed.
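
The nonparametric Kendall's Tau test used for these monotonic trend analyses is available in SciPy. The annual discharge series below is synthetic, constructed with a downward trend like that reported for the western basins.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)

# Hypothetical annual mean daily discharge with a built-in downward trend.
years = np.arange(1922, 2010)
discharge = 5000 - 15 * (years - 1922) + rng.normal(0, 300, size=years.size)

tau, p_value = kendalltau(years, discharge)
trend = "downward" if tau < 0 else "upward"
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.1e}: {trend} trend")
```

Because the test only uses the ordering of the observations, it detects monotonic trends without assuming linearity or normally distributed residuals, which suits skewed streamflow data.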

  3. Statistical analysis of QC data and estimation of fuel rod behaviour

    NASA Astrophysics Data System (ADS)

    Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.

    1991-02-01

    The in-reactor behaviour of fuel rods is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (a worst-case dataset) in fuel rod design calculations; distributions are not considered. The results obtained in this way are very conservative, but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst-case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
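
The contrast between stacked worst-case tolerances and a Monte Carlo quantile can be sketched as follows. The parameter values and the "response surface" are entirely fictitious, chosen only to show the mechanics, not a real fuel-rod model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fabrication parameters scattering within tolerances (hypothetical values).
n = 100_000
pellet_d = rng.normal(8.00, 0.005, n)    # pellet diameter, mm
clad_id = rng.normal(8.20, 0.007, n)     # inner cladding diameter, mm
density = rng.normal(10.4, 0.05, n)      # pellet density, g/cm^3

# Toy response surface: a fictitious temperature-like response that grows
# with the pellet-cladding gap and with density.
gap = clad_id - pellet_d
response = 1000 + 800 * (gap - 0.20) / 0.20 + 20 * (density - 10.4)

# Worst case: stack all 3-sigma tolerance limits unfavorably.
worst_gap = (8.20 + 3 * 0.007) - (8.00 - 3 * 0.005)
worst_case = 1000 + 800 * (worst_gap - 0.20) / 0.20 + 20 * (3 * 0.05)

# Probabilistic alternative: a high quantile with known coverage.
q999 = np.quantile(response, 0.999)
print(f"worst case = {worst_case:.0f}, 99.9% quantile = {q999:.0f}")
```

The gap between the stacked worst case and the 99.9th percentile quantifies the hidden conservatism that the probabilistic treatment makes explicit.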

  4. When mechanism matters: Bayesian forecasting using models of ecological diffusion

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.

    2017-01-01

    Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.
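
Only the mechanistic core of this approach — the ecological diffusion equation du/dt = Laplacian(mu * u) plus growth — is sketched below with an explicit finite-difference step on a periodic grid; the full method embeds such a solver inside a hierarchical Bayesian model, which is not reproduced here. Grid size, motility, and growth rate are hypothetical.

```python
import numpy as np

def diffuse(u, mu, dt=0.1, dx=1.0, growth=0.02):
    """One explicit step of du/dt = laplacian(mu * u) + growth * u
    (periodic boundaries via np.roll)."""
    f = mu * u
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
    return u + dt * (lap + growth * u)

n = 50
u = np.zeros((n, n))
u[25, 25] = 1.0                      # point introduction of the disease
mu = np.full((n, n), 0.5)            # motility; spatially varying in practice
for _ in range(200):
    u = diffuse(u, mu)

print(f"total prevalence mass grew to {u.sum():.3f}")
```

Spatial heterogeneity enters through mu(x, y), which lets the model slow or speed the spread in different habitats — the mechanistic insight that phenomenological regressions cannot provide.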

  5. Institutional racism in public health contracting: Findings of a nationwide survey from New Zealand.

    PubMed

    Came, H; Doole, C; McKenna, B; McCreanor, T

    2018-02-01

    Public institutions within New Zealand have long been accused of mono-culturalism and institutional racism. This study sought to identify inconsistencies and bias by comparing government-funded contracting processes for Māori public health providers (n = 60) with those of generic providers (n = 90). Qualitative and quantitative data were collected (November 2014-May 2015) through a nationwide telephone survey of public health providers, achieving a 75% response rate. Descriptive statistical analyses were applied to quantitative responses, and an inductive approach was taken to analyse data from open-ended responses in the survey domains of relationships with portfolio contract managers, contracting and funding. The quantitative data showed four sites of statistically significant variation: length of contracts, intensity of monitoring, compliance costs and frequency of auditing. Non-significant data involved access to discretionary funding and cost of living adjustments, the frequency of monitoring, access to Crown (government) funders and representation on advisory groups. The qualitative material showed disparate provider experiences, dependent on individual portfolio managers, with nuanced differences between generic and Māori providers' experiences. This study showed that monitoring government performance through a nationwide survey was an innovative way to identify sites of institutional racism. In a policy context where health equity is a key directive to the health sector, this study suggests there is scope for New Zealand health funders to improve their contracting practices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Ventral and dorsal streams for choosing word order during sentence production

    PubMed Central

    Thothathiri, Malathi; Rattinger, Michelle

    2015-01-01

    Proficient language use requires speakers to vary word order and choose between different ways of expressing the same meaning. Prior statistical associations between individual verbs and different word orders are known to influence speakers’ choices, but the underlying neural mechanisms are unknown. Here we show that distinct neural pathways are used for verbs with different statistical associations. We manipulated statistical experience by training participants in a language containing novel verbs and two alternative word orders (agent-before-patient, AP; patient-before-agent, PA). Some verbs appeared exclusively in AP, others exclusively in PA, and yet others in both orders. Subsequently, we used sparse sampling neuroimaging to examine the neural substrates as participants generated new sentences in the scanner. Behaviorally, participants showed an overall preference for AP order, but also increased PA order for verbs experienced in that order, reflecting statistical learning. Functional activation and connectivity analyses revealed distinct networks underlying the increased PA production. Verbs experienced in both orders during training preferentially recruited a ventral stream, indicating the use of conceptual processing for mapping meaning to word order. In contrast, verbs experienced solely in PA order recruited dorsal pathways, indicating the use of selective attention and sensorimotor integration for choosing words in the right order. These results show that the brain tracks the structural associations of individual verbs and that the same structural output may be achieved via ventral or dorsal streams, depending on the type of regularities in the input. PMID:26621706

  7. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.
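
The random-effects pooling used for such cross-study comparisons is commonly the DerSimonian-Laird estimator, sketched below. The four study effects and variances are hypothetical log-odds, not data from this paper.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) of study effect sizes."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2

# Hypothetical log-odds of reporting significant results from 4 surveys.
log_ors = [0.9, 0.7, 1.2, 0.5]
vars_ = [0.04, 0.06, 0.09, 0.05]
pooled, se, tau2 = dersimonian_laird(log_ors, vars_)
print(f"pooled log-OR = {pooled:.2f} +/- {1.96 * se:.2f}, tau^2 = {tau2:.3f}")
```

A nonzero tau^2 inflates each study's variance, which widens the pooled confidence interval to reflect genuine between-study heterogeneity.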

  8. Chasing the peak: optimal statistics for weak shear analyses

    NASA Astrophysics Data System (ADS)

    Smit, Merijn; Kuijken, Konrad

    2018-01-01

    Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims: We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information of the whole field of view, instead of locally sparse and poorly constrained estimates. Methods: We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results: We find an improved performance of all estimators, when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than 30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias least affected by noise. 
We find the least absolute deviations estimator to be the most efficient estimator in almost all cases, except in the Gaussian case, where it is still competitive (0.83 < η < 5.1) and therefore robust. These results hold when fitting Fourier modes, where amplitudes of variation in ellipticity are determined to the order of 10^-3. Conclusions: The peak of the ellipticity distribution is a direct tracer of the underlying shear and unaffected by noise, and we have shown that estimators that are sensitive to a central cusp perform more efficiently, potentially reducing uncertainties and significantly decreasing noise bias. These results become increasingly important as survey sizes increase and systematic issues in shape measurements decrease.
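
The efficiency comparison between the canonical mean and a robust location estimator can be sketched in one dimension, where the least absolute deviations estimator reduces to the median. The shear value, scatter model (Student-t noise as a stand-in for non-Gaussian ellipticities), and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

g_true = 0.03                 # underlying shear signal (hypothetical)
n, reps = 1000, 500

def draw(n):
    """Heavy-tailed 'ellipticity' sample centred on the shear."""
    return g_true + 0.25 * rng.standard_t(df=3, size=n)

mean_err, med_err = [], []
for _ in range(reps):
    e = draw(n)
    mean_err.append(np.mean(e) - g_true)     # canonical estimator
    med_err.append(np.median(e) - g_true)    # least absolute deviations

# Relative efficiency: > 1 means the median needs fewer galaxies for the
# same shear uncertainty on this heavy-tailed distribution.
eff = np.var(mean_err) / np.var(med_err)
print(f"median vs mean relative efficiency on t(df=3) noise: {eff:.2f}")
```

On Gaussian noise the same ratio would drop below 1 (the mean is optimal there), which is the trade-off the paper's efficiency parameter η quantifies.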

  9. The Effect of Aging on the Dynamics of Reactive and Proactive Cognitive Control of Response Interference

    PubMed Central

    Xiang, Ling; Zhang, Baoqiang; Wang, Baoxi; Jiang, Jun; Zhang, Fenghua; Hu, Zhujing

    2016-01-01

    A prime-target interference task was used to investigate the effects of cognitive aging on reactive and proactive control after eliminating frequency confounds and feature repetitions from the cognitive control measures. We used distributional analyses to explore the dynamics of the two control functions by distinguishing the strength of incorrect response capture and the efficiency of suppression control. For reactive control, within-trial conflict control and between-trial conflict adaptation were analyzed. The statistical analysis showed that there were no reliable between-trial conflict adaptation effects for either young or older adults. For within-trial conflict control, the results revealed that older adults showed larger interference effects on mean RT and mean accuracy. Distributional analyses showed that the decline mainly stemmed from inefficient suppression rather than from stronger incorrect responses. For proactive control, older adults showed comparable proactive conflict resolution to young adults on mean RT and mean accuracy. Distributional analyses showed that older adults were as effective as younger adults in adjusting their responses based on congruency proportion information to minimize automatic response capture and actively suppress the direct response activation. The results suggest that older adults were less proficient at suppressing interference after conflict was detected but can anticipate and prevent interference in response to congruency proportion manipulation. These results challenge earlier views that older adults have selective deficits in proactive control but intact reactive control. PMID:27847482

  11. Angiogenesis and lymphangiogenesis as prognostic factors after therapy in patients with cervical cancer

    PubMed Central

    Makarewicz, Roman; Kopczyńska, Ewa; Marszałek, Andrzej; Goralewska, Alina; Kardymowicz, Hanna

    2012-01-01

    Aim of the study This retrospective study attempts to evaluate the influence of serum vascular endothelial growth factor C (VEGF-C), microvessel density (MVD) and lymphatic vessel density (LMVD) on the result of tumour treatment in women with cervical cancer. Material and methods The research was carried out in a group of 58 patients scheduled for brachytherapy for cervical cancer. All women were patients of the Department and University Hospital of Oncology and Brachytherapy, Collegium Medicum in Bydgoszcz of Nicolaus Copernicus University in Toruń. VEGF-C was determined by means of a quantitative sandwich enzyme-linked immunosorbent assay (human VEGF-C ELISA, Bender MedSystem) detecting the activity of human VEGF-C in body fluids. The intensity of angiogenesis and lymphangiogenesis was measured immunohistochemically as the number of blood vessels within the tumour. Statistical analysis was done using Statistica 6.0 software (StatSoft, Inc. 2001). The Cox proportional hazards model was used for univariate and multivariate analyses. Univariate analysis of overall survival was performed as outlined by Kaplan and Meier. In all statistical analyses p < 0.05 was taken as significant. Results In 51 patients who showed up for follow-up examination, the influence of the factors of angiogenesis and lymphangiogenesis, patients' age and the haemoglobin level at the end of treatment were assessed. Selected variables, such as patients' age, lymphatic vessel density (LMVD), microvessel density (MVD) and the haemoglobin (Hb) level before treatment, were analysed by means of Cox regression as potential prognostic factors for lymph node invasion. The observed differences were statistically significant for haemoglobin level before treatment and the platelet number after treatment. The study revealed the following prognostic factors: lymph node status, FIGO stage, and kind of treatment. No statistically significant influence of angiogenic and lymphangiogenic factors on the prognosis was found. Conclusion Angiogenic and lymphangiogenic factors have no value in predicting response to radiotherapy in cervical cancer patients. PMID:23788848
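    The Kaplan-Meier product-limit estimator used above for overall survival can be sketched in a few lines. The follow-up times below are hypothetical, not the study's data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    s = 1.0
    curve = []
    for t in np.unique(times[events == 1]):       # survival drops only at event times
        n_at_risk = np.sum(times >= t)            # subjects still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / n_at_risk
        curve.append((float(t), s))
    return curve

# Hypothetical follow-up (months) for 6 patients; 0 marks a censored observation.
curve = kaplan_meier([6, 13, 21, 30, 31, 37], [1, 1, 0, 1, 0, 1])
print(curve)
```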

  12. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the available data sets are usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analysed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the main advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities within the merged sets. The application and comparison of four flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis incorporating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to generate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate to what extent the results obtained in these case studies can be generalized. The two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, the results show that incorporating information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
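    The Monte Carlo logic described above, simulating many records to see how quantile-estimation uncertainty shrinks as the pooled record grows, can be sketched as follows. The Gumbel distribution, its parameters and the method-of-moments fit are illustrative assumptions, not the methods of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
MU, BETA = 500.0, 120.0                      # "true" Gumbel flood distribution (m³/s)
T = 100                                      # return period of interest (years)
q_true = MU - BETA * np.log(-np.log(1 - 1 / T))

def q100_estimate(sample):
    """Method-of-moments Gumbel fit, then the 100-year quantile."""
    beta = sample.std(ddof=1) * np.sqrt(6) / np.pi
    mu = sample.mean() - 0.5772 * beta
    return mu - beta * np.log(-np.log(1 - 1 / T))

def spread(n, reps=500):
    """Std of the quantile estimator over many synthetic records of length n."""
    est = [q100_estimate(rng.gumbel(MU, BETA, n)) for _ in range(reps)]
    return float(np.std(est))

local, regional = spread(30), spread(300)   # single gauge vs pooled homogeneous region
print(q_true, local, regional)
```

With a perfectly homogeneous region the pooled estimator is markedly tighter; the case studies above show how real heterogeneities erode this gain.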

  13. Intelligence, birth order, and family size.

    PubMed

    Kanazawa, Satoshi

    2012-09-01

    The analysis of the National Child Development Study in the United Kingdom (n = 17,419) replicates some earlier findings and shows that genuine within-family data are not necessary to make the apparent birth-order effect on intelligence disappear. Birth order is not associated with intelligence in between-family data once the number of siblings is statistically controlled. The analyses support the admixture hypothesis, which avers that the apparent birth-order effect on intelligence is an artifact of family size, and cast doubt on the confluence and resource dilution models, both of which claim that birth order has a causal influence on children's cognitive development. The analyses suggest that birth order has no genuine causal effect on general intelligence.

  14. [Family communication styles, attitude towards institutional authority and adolescents' violent behaviour at school].

    PubMed

    Estévez López, Estefanía; Murgui Pérez, Sergio; Moreno Ruiz, David; Musitu Ochoa, Gonzalo

    2007-02-01

    The purpose of the present study is to analyse the relationships among certain family and school factors, adolescents' attitude towards institutional authority, and violent behaviour at school. The sample is composed of 1049 adolescents of both sexes, aged 11 to 16 years. Statistical analyses were carried out using structural equation modelling. Results indicate a close association between negative communication with the father and violent behaviour in adolescence. Moreover, data suggest that teachers' expectations affect students' attitude towards institutional authority, which in turn is closely related to school violence. Finally, findings show an indirect influence of the father, mother and teacher on adolescents' violent behaviour, mainly through their effect on family and school self-concept.

  15. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  16. Testing a Coupled Global-limited-area Data Assimilation System using Observations from the 2004 Pacific Typhoon Season

    NASA Astrophysics Data System (ADS)

    Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.

    2011-12-01

    Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited-area analysis/forecast system. This is the first time, to our knowledge, that such a system has been used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained by the Statistical Spectral Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks. The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts performed better in predicting position beyond 48 h.
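    The flow-dependent gain at the heart of the EnKF can be illustrated with a minimal stochastic update for a single, directly observed scalar. This is a textbook sketch with invented numbers, not the LETKF variant the study implements.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err_std):
    """Stochastic ensemble Kalman filter update for a directly observed scalar state.
    The gain uses the flow-dependent ensemble variance, the feature that makes
    EnKF attractive for rapidly developing systems."""
    var_b = np.var(ensemble, ddof=1)                      # background error variance
    gain = var_b / (var_b + obs_err_std**2)               # scalar Kalman gain
    perturbed_obs = obs + rng.normal(0, obs_err_std, len(ensemble))
    return ensemble + gain * (perturbed_obs - ensemble)

background = rng.normal(1000.0, 8.0, 50)   # hypothetical ensemble of MSLP values (hPa)
analysis = enkf_update(background, obs=985.0, obs_err_std=2.0)
print(analysis.mean(), analysis.std(ddof=1))
```

The analysis mean is pulled toward the observation and the ensemble spread contracts, exactly the behaviour the gain formula prescribes.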

  17. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
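    Of the statistics compared above, the nonoverlap family is the easiest to sketch. Below is the pairwise A-versus-B dominance index that Tau-U builds on (the full Tau-U additionally adjusts for baseline trend, which this sketch omits); the phase data are hypothetical.

```python
def nonoverlap_tau(baseline, treatment, improvement="increase"):
    """Pairwise nonoverlap between phases: (#improving pairs - #deteriorating pairs)
    divided by all baseline-treatment pairs; ties count as neither."""
    sign = 1 if improvement == "increase" else -1
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if sign * (b - a) > 0:
                pos += 1
            elif sign * (b - a) < 0:
                neg += 1
    return (pos - neg) / (len(baseline) * len(treatment))

# Hypothetical data for one participant in a multiple baseline design.
baseline = [2, 3, 2, 4]
treatment = [5, 6, 4, 7, 8]
tau = nonoverlap_tau(baseline, treatment)
print(tau)
```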

  18. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  19. Effect of repeated cycles of chemical disinfection on the roughness and hardness of hard reline acrylic resins.

    PubMed

    Pinto, Luciana de Rezende; Acosta, Emílio José T Rodríguez; Távora, Flora Freitas Fernandes; da Silva, Paulo Maurício Batista; Porto, Vinícius Carvalho

    2010-06-01

    The aim of this study was to assess the effect of repeated cycles of five chemical disinfectant solutions on the roughness and hardness of three hard chairside reliners. A total of 180 circular specimens (30 mm × 6 mm) were fabricated using three hard chairside reliners (Jet, n = 60; Kooliner, n = 60; Tokuyama Rebase II Fast, n = 60), which were immersed in deionised water (control) and five disinfectant solutions (1%, 2% and 5.25% sodium hypochlorite; 2% glutaraldehyde; 4% chlorhexidine gluconate). They were tested for Knoop hardness (KHN) and surface roughness (μm) before and after 30 simulated disinfecting cycles. Data were analysed in a 6 × 2 factorial scheme by two-way analysis of variance (ANOVA), followed by Tukey's test. For Jet (from 18.74 to 13.86 KHN), Kooliner (from 14.09 to 8.72 KHN) and Tokuyama (from 12.57 to 8.28 KHN), a significant decrease in hardness was observed irrespective of the solution used. For Jet (from 0.09 to 0.11 μm) there was a statistically significant increase in roughness. Kooliner (from 0.36 to 0.26 μm) presented a statistically significant decrease in roughness and Tokuyama (from 0.15 to 0.11 μm) presented no statistically significant difference after 30 days. This study showed that all disinfectant solutions promoted a statistically significant decrease in hardness, whereas for roughness Jet showed a statistically significant increase and Kooliner a decrease, with no significant change for Tokuyama. Although statistically significant differences were registered, these results could not be considered clinically significant.
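    A post-hoc comparison of the kind used above (ANOVA followed by Tukey's test) can be sketched with SciPy. The one-way layout and the hardness values below are illustrative, not the study's 6 × 2 design or its data; `scipy.stats.tukey_hsd` requires SciPy ≥ 1.8.

```python
import numpy as np
from scipy import stats

# Hypothetical Knoop hardness (KHN) of three hard reliners after disinfection cycles.
jet      = [13.5, 14.1, 13.9, 14.3, 13.7]
kooliner = [8.5, 8.9, 8.7, 8.6, 9.0]
tokuyama = [8.1, 8.4, 8.2, 8.5, 8.3]

f_stat, p = stats.f_oneway(jet, kooliner, tokuyama)   # one-way ANOVA on material
tukey = stats.tukey_hsd(jet, kooliner, tokuyama)      # pairwise post-hoc comparisons
print(p, tukey.pvalue[0, 1])                          # Jet vs Kooliner adjusted p
```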

  20. Does Anxiety Modify the Risk for, or Severity of, Conduct Problems Among Children With Co-Occurring ADHD: Categorical and Dimensional Analyses.

    PubMed

    Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F

    2017-08-01

    The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.

  1. Human mtDNA hypervariable regions, HVR I and II, hint at deep common maternal founder and subsequent maternal gene flow in Indian population groups.

    PubMed

    Sharma, Swarkar; Saha, Anjana; Rai, Ekta; Bhat, Audesh; Bamezai, Ramesh

    2005-01-01

    We have analysed the hypervariable regions (HVR I and II) of human mitochondrial DNA (mtDNA) in individuals from Uttar Pradesh (UP), Bihar (BI) and Punjab (PUNJ), belonging to the Indo-European linguistic group, and from South India (SI), that have their linguistic roots in Dravidian language. Our analysis revealed the presence of known and novel mutations in both hypervariable regions in the studied population groups. Median joining network analyses based on mtDNA showed extensive overlap in mtDNA lineages despite the extensive cultural and linguistic diversity. MDS plot analysis based on Fst distances suggested increased maternal genetic proximity for the studied population groups compared with other world populations. Mismatch distribution curves, respective neighbour joining trees and other statistical analyses showed that there were significant expansions. The study revealed an ancient common ancestry for the studied population groups, most probably through common founder female lineage(s), and also indicated that human migrations occurred (maybe across and within the Indian subcontinent) even after the initial phase of female migration to India.

  2. Separate enrichment analysis of pathways for up- and downregulated genes.

    PubMed

    Hong, Guini; Zhang, Wenjing; Li, Hongdong; Shen, Xiaopei; Guo, Zheng

    2014-03-06

    Two strategies are often adopted for enrichment analysis of pathways: the analysis of all differentially expressed (DE) genes together or the analysis of up- and downregulated genes separately. However, few studies have examined the rationales of these enrichment analysis strategies. Using both microarray and RNA-seq data, we show that gene pairs with functional links in pathways tended to have positively correlated expression levels, which could result in an imbalance between the up- and downregulated genes in particular pathways. We then show that the imbalance could greatly reduce the statistical power for finding disease-associated pathways through the analysis of all-DE genes. Further, using gene expression profiles from five types of tumours, we illustrate that the separate analysis of up- and downregulated genes could identify more pathways that are really pertinent to phenotypic difference. In conclusion, analysing up- and downregulated genes separately is more powerful than analysing all of the DE genes together.
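    Pathway enrichment of a gene list is typically scored with a one-sided hypergeometric test. The sketch below uses made-up counts to illustrate the argument above: the same overlap drawn from a shorter, up-only list is more surprising than when the up- and downregulated lists are pooled.

```python
from scipy.stats import hypergeom

def pathway_enrichment_p(n_genome, n_pathway, n_selected, n_overlap):
    """One-sided hypergeometric test: probability of drawing >= n_overlap pathway
    genes when n_selected genes are picked at random from n_genome genes."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_selected)

# Hypothetical counts (not from the paper): 8 pathway hits among the DE genes.
p_all = pathway_enrichment_p(20000, 100, 400, 8)   # all 400 DE genes tested together
p_up  = pathway_enrichment_p(20000, 100, 200, 8)   # same 8 hits, all in the 200 up genes
print(p_all, p_up)
```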

  3. A quantitative analysis of factors influencing the professional longevity of high school science teachers in Florida

    NASA Astrophysics Data System (ADS)

    Ridgley, James Alexander, Jr.

    This dissertation is an exploratory quantitative analysis of various independent variables to determine their effect on the professional longevity (years of service) of high school science teachers in the state of Florida for the academic years 2011-2012 to 2013-2014. Data are collected from the Florida Department of Education, National Center for Education Statistics, and the National Assessment of Educational Progress databases. The following research hypotheses are examined: H1 - There are statistically significant differences in Level 1 (teacher variables) that influence the professional longevity of a high school science teacher in Florida. H2 - There are statistically significant differences in Level 2 (school variables) that influence the professional longevity of a high school science teacher in Florida. H3 - There are statistically significant differences in Level 3 (district variables) that influence the professional longevity of a high school science teacher in Florida. H4 - When tested in a hierarchical multiple regression, there are statistically significant differences in Level 1, Level 2, or Level 3 that influence the professional longevity of a high school science teacher in Florida. The professional longevity of a Floridian high school science teacher is the dependent variable. The independent variables are: (Level 1) a teacher's sex, age, ethnicity, earned degree, salary, number of schools taught in, migration count, and various years of service in different areas of education; (Level 2) a school's geographic location, residential population density, average class size, charter status, and SES; and (Level 3) a school district's average SES and average spending per pupil. Exploratory multiple linear regressions (MLRs) and a hierarchical multiple regression (HMR) are used to test the research hypotheses.
The final results of the HMR analysis show a teacher's age, salary, earned degree (unknown, associate, and doctorate), and ethnicity (Hispanic and Native Hawaiian/Pacific Islander); a school's charter status; and a school district's average SES are all significant predictors of a Florida high school science teacher's professional longevity. Although statistically significant in the initial exploratory MLR analyses, a teacher's ethnicity (Asian and Black), a school's geographic location (city and rural), and a school's SES are not statistically significant in the final HMR model.
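    Hierarchical multiple regression enters predictor blocks in a fixed order and reads each block's contribution as the change in R². A minimal sketch on synthetic data; the variable names and coefficients are invented, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical predictors: Level 1 (teacher age), Level 2 (school SES),
# Level 3 (district spending per pupil), plus noise.
age, ses, spend = rng.normal(size=(3, n))
years = 0.6 * age + 0.3 * ses + 0.1 * spend + rng.normal(0, 1, n)

def r_squared(y, *predictors):
    """R^2 of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hierarchical entry: each block's contribution is the change in R^2.
r1 = r_squared(years, age)
r2 = r_squared(years, age, ses)
r3 = r_squared(years, age, ses, spend)
print(r1, r2 - r1, r3 - r2)
```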

  4. A multicenter study of viable PCR using propidium monoazide to detect Legionella in water samples.

    PubMed

    Scaturro, Maria; Fontana, Stefano; Dell'eva, Italo; Helfer, Fabrizia; Marchio, Michele; Stefanetti, Maria Vittoria; Cavallaro, Mario; Miglietta, Marilena; Montagna, Maria Teresa; De Giglio, Osvalda; Cuna, Teresa; Chetti, Leonarda; Sabattini, Maria Antonietta Bucci; Carlotti, Michela; Viggiani, Mariagabriella; Stenico, Alberta; Romanin, Elisa; Bonanni, Emma; Ottaviano, Claudio; Franzin, Laura; Avanzini, Claudio; Demarie, Valerio; Corbella, Marta; Cambieri, Patrizia; Marone, Piero; Rota, Maria Cristina; Bella, Antonino; Ricci, Maria Luisa

    2016-07-01

    Legionella quantification in environmental samples is overestimated by qPCR. Combination with a viability dye such as propidium monoazide (PMA) could make qPCR (then named vPCR) very reliable. In this multicentre study, 717 artificial water samples, spiked with fixed concentrations of Legionella and interfering bacterial flora, were analysed by qPCR, vPCR and culture, and the data were compared by statistical analysis. A heat treatment at 55 °C for 10 minutes was also performed to obtain viable and non-viable bacteria. When data of vPCR were compared with those of culture and qPCR, statistical analysis showed significant differences (P < 0.001). However, although the heat treatment caused an abatement of CFU/mL ≤1 to 1 log10 unit, the comparison between untreated and heat-treated samples analysed by vPCR highlighted non-significant differences (P > 0.05). Overall this study showed good experimental reproducibility of vPCR but also highlighted the limits of PMA in discriminating dead from live bacteria, making vPCR not completely reliable. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Use of the Analysis of the Volatile Faecal Metabolome in Screening for Colorectal Cancer

    PubMed Central

    2015-01-01

    Diagnosis of colorectal cancer requires an invasive and expensive colonoscopy, which is usually carried out after a positive screening test. Unfortunately, existing screening tests lack specificity and sensitivity, hence many unnecessary colonoscopies are performed. Here we report on a potential new screening test for colorectal cancer based on the analysis of volatile organic compounds (VOCs) in the headspace of faecal samples. Faecal samples were obtained from subjects who had a positive faecal occult blood test (FOBT). Subjects subsequently had colonoscopies performed to classify them into low risk (non-cancer) and high risk (colorectal cancer) groups. Volatile organic compounds were analysed by selected ion flow tube mass spectrometry (SIFT-MS) and the data were then analysed using both univariate and multivariate statistical methods. Ions most likely from hydrogen sulphide, dimethyl sulphide and dimethyl disulphide are statistically significantly higher in samples from high risk than from low risk subjects. Results using multivariate methods show that the test gives a correct classification of 75%, with 78% specificity and 72% sensitivity on FOBT-positive samples, offering a potentially effective alternative to FOBT. PMID:26086914
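    The specificity and sensitivity figures above come from a standard confusion-matrix calculation, sketched here with hypothetical labels (not the study's subjects).

```python
def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = high risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)        # fraction of true cancers flagged
    specificity = tn / (tn + fp)        # fraction of non-cancers cleared
    accuracy = (tp + tn) / len(y_true)  # overall correct classification
    return sensitivity, specificity, accuracy

# Hypothetical outcomes for 10 FOBT-positive subjects.
truth      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
prediction = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec, acc = screening_metrics(truth, prediction)
print(sens, spec, acc)
```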

  6. Trends and variability of cloud fraction cover in the Arctic, 1982-2009

    NASA Astrophysics Data System (ADS)

    Boccolari, Mauro; Parmiggiani, Flavio

    2018-05-01

    Climatology, trends and variability of cloud fraction cover (CFC) over the Arctic (north of 70°N) were analysed for the 1982-2009 period. Data, available from the Climate Monitoring Satellite Application Facility (CM SAF), are derived from satellite measurements by AVHRR. Climatological means confirm permanently high CFC values over the Atlantic sector throughout the year and, during summer, over the eastern Arctic Ocean. Lower values are found in the rest of the analysed area, especially over Greenland and the Canadian Archipelago, in nearly all months. These results are confirmed by the CFC trends and variability. Statistically significant trends were found in all months over the Greenland Sea, particularly during the winter season (negative, below -5% per decade), and over the Beaufort Sea in spring (positive, above +5% per decade). CFC variability, investigated by Empirical Orthogonal Functions, shows a substantial lack of variability in the Northern Atlantic Ocean. Statistically significant correlations are found between the CFC principal components and both the Pacific Decadal Oscillation index and the Pacific North America pattern.
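    Empirical Orthogonal Functions of the kind used above are conventionally computed from an SVD of the time-by-space anomaly matrix. A minimal sketch on a synthetic field with one dominant pattern (the grid, pattern and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n_time, n_grid = 120, 40                       # e.g. monthly CFC anomalies on a grid

# Synthetic anomaly field: one dominant spatial pattern plus noise.
pattern = np.sin(np.linspace(0, np.pi, n_grid))
pc_true = rng.normal(size=n_time)
field = np.outer(pc_true, pattern) + 0.3 * rng.normal(size=(n_time, n_grid))

anom = field - field.mean(axis=0)              # remove the time mean per grid point
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance fraction per EOF
eof1, pc1 = Vt[0], U[:, 0] * s[0]              # leading pattern and principal component
print(explained[0])                            # dominant mode captures most variance
```

Correlating `pc1` with a climate index (e.g. the PDO) is then the step the abstract describes.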

  7. [Sonographic ovarian vascularization and volume in women with polycystic ovary syndrome treated with clomiphene citrate and metformin].

    PubMed

    de la Fuente-Valero, Jesús; Zapardiel-Gutiérrez, Ignacio; Orensanz-Fernández, Inmaculada; Alvarez-Alvarez, Pilar; Engels-Calvo, Virginia; Bajo-Arenas, José Manuel

    2010-01-01

    To measure ovarian vascularization and volume with three-dimensional sonography in patients diagnosed with polycystic ovary syndrome under ovulation-stimulation treatment, and to analyse the differences between patients treated with clomiphene citrate alone versus clomiphene citrate plus metformin. Thirty patients were studied. Twenty ovulation cycles were obtained with clomiphene citrate and 17 with clomiphene citrate plus metformin (added in cases of obesity or hyperglycaemia/hyperinsulinaemia). Ovarian volumes and vascular indexes were studied with 3D sonography and the results were analysed by treatment. There were no statistical differences in ovarian volume by treatment along the cycles, although larger volumes were found in ovulatory cycles compared with non-ovulatory ones (20.36 versus 13.89 ml, p = 0.026). Nor were statistical differences found for the vascular indexes, either by treatment or by the occurrence of ovulation in the cycle. Ovarian volume and vascular indexes measured with three-dimensional sonography in patients diagnosed with polycystic ovary syndrome do not differ between patients treated with clomiphene citrate alone and those treated with clomiphene citrate plus metformin.

  8. Are the kids alright? Review books and the internet as the most common study resources for the general surgery clerkship.

    PubMed

    Taylor, Janice A; Shaw, Christiana M; Tan, Sanda A; Falcone, John L

    2018-01-01

    To define the resources deemed most important by medical students on their general surgery clerkship, we evaluated their material utilization. A prospective study was conducted amongst third-year medical students using a 20-item survey. Descriptive statistics were performed on the demographics. Kruskal-Wallis and Mann-Whitney analyses were performed on the Likert responses (α = 0.05). The survey response rate was 69.2%. Use of review books and the Internet was significantly higher compared with all other resources (p < 0.05). Wikipedia was the most used Internet source (39.1%), and 56% of students never used textbooks. Analyses relating surgery subject exam (NBME) results or intended specialty to the resources used showed no statistical relationship (all p > 0.05). The resources used by students reflect access to high-yield material and increased Internet use. The Internet and review books were used more than the recommended textbook; NBME results were not affected. Understanding study habits and resource use will help guide curricular development and students' self-regulated learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class. Thereafter, the average anatomical plate suitable for that new femur was selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed, and 20 new femurs were assigned to their proper classes, after which suitable condylar buttress plates were determined and selected.
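    Under equal priors and a shared spherical covariance, the Bayes discriminant step above reduces to nearest-class-mean assignment. The two parameters and the class means below are invented for illustration, not the study's eight-parameter model.

```python
import numpy as np

# Hypothetical class means for two femur parameters (e.g. length in mm, neck angle in °)
# standing in for the three classes obtained from the cluster analysis.
class_means = {
    "small":  np.array([400.0, 125.0]),
    "medium": np.array([440.0, 128.0]),
    "large":  np.array([480.0, 131.0]),
}

def assign_class(femur, means=class_means):
    """Assign a new femur to the nearest class mean, a simplification of Bayes
    discriminant analysis under equal priors and a shared spherical covariance."""
    return min(means, key=lambda c: np.linalg.norm(femur - means[c]))

new_femur = np.array([445.0, 127.0])
print(assign_class(new_femur))   # the matching plate size would then be selected
```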

  10. VCSEL-based fiber optic link for avionics: implementation and performance analyses

    NASA Astrophysics Data System (ADS)

    Shi, Jieqin; Zhang, Chunxi; Duan, Jingyuan; Wen, Huaitao

    2006-11-01

    A Gb/s fiber optic link with built-in test capability (BIT), based on vertical-cavity surface-emitting laser (VCSEL) sources, for next-generation military avionics buses is presented in this paper. To accurately predict link performance, statistical methods and Bit Error Rate (BER) measurements have been examined. The results show that the 1 Gb/s fiber optic link meets the BER requirement and that the link margin can reach up to 13 dB. The analysis shows that the suggested photonic network may provide a high-performance, low-cost interconnection alternative for future military avionics.
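    For a Gaussian-noise link, BER relates to the Q factor by BER = 0.5·erfc(Q/√2), the usual starting point for BER and link-margin budgets of this kind. A minimal sketch (the Q values are illustrative, not the paper's measurements):

```python
import math

def ber_from_q(q):
    """Gaussian-noise BER estimate for a binary optical link: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q ≈ 7 corresponds to BER ≈ 1e-12, a common requirement for avionics links;
# lower Q (less margin) degrades BER rapidly.
for q in (6.0, 7.0):
    print(q, ber_from_q(q))
```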

  11. Analysis of the color alteration and radiopacity promoted by bismuth oxide in calcium silicate cement.

    PubMed

    Marciano, Marina Angélica; Estrela, Carlos; Mondelli, Rafael Francisco Lia; Ordinola-Zapata, Ronald; Duarte, Marco Antonio Hungaro

    2013-01-01

    The aim of the study was to determine if the increase in radiopacity provided by bismuth oxide is related to the color alteration of calcium silicate-based cement. Calcium silicate cement (CSC) was mixed with 0%, 15%, 20%, 30% and 50% of bismuth oxide (BO), determined by weight. Mineral trioxide aggregate (MTA) was the control group. The radiopacity test was performed according to ISO 6876/2001. The color was evaluated using the CIE system. The assessments were performed after 24 hours, 7 and 30 days of setting time, using a spectrophotometer to obtain the ΔE, Δa, Δb and ΔL values. The statistical analyses were performed using the Kruskal-Wallis/Dunn and ANOVA/Tukey tests (p<0.05). The cements in which bismuth oxide was added showed radiopacity corresponding to the ISO recommendations (>3 mm equivalent of Al). The MTA group was statistically similar to the CSC/30% BO group (p>0.05). In regard to color, the increase of bismuth oxide resulted in a decrease in the ΔE value of the calcium silicate cement. The CSC group presented statistically higher ΔE values than the CSC/50% BO group (p<0.05). The comparison between 24 hours and 7 days showed higher ΔE for the MTA group, with statistical differences for the CSC/15% BO and CSC/50% BO groups (p<0.05). After 30 days, CSC showed statistically higher ΔE values than CSC/30% BO and CSC/50% BO (p<0.05). In conclusion, the increase in radiopacity provided by bismuth oxide has no relation to the color alteration of calcium silicate-based cements.
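    The ΔE values above are the CIE76 colour difference: the Euclidean distance between two points in CIELAB space, ΔE = √(ΔL² + Δa² + Δb²). A minimal sketch with invented spectrophotometer readings:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two CIELAB points (L*, a*, b*)."""
    return math.dist(lab1, lab2)

# Hypothetical L*a*b* readings of a cement specimen at 24 h and at 30 days.
day0  = (78.0, 1.2, 15.0)
day30 = (72.5, 1.0, 12.4)
dE = delta_e(day0, day30)
print(dE)   # ΔE ≈ 3.3 is often quoted as a clinically perceptible threshold
```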

  12. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram, S.; Murthy, Pappu

    2013-01-01

    This report describes the statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain-weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions under which the experimental data were generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio, using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.
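    Statistical characterization of a strength property usually means checking a candidate distribution and reading off a low percentile. The sketch below fits a normal distribution to invented coupon data; note that real basis allowables use one-sided tolerance factors with confidence levels, not the plain percentile point estimate shown here.

```python
import numpy as np
from scipy import stats

# Hypothetical ultimate-strength measurements (MPa) for one composite coupon set.
strength = np.array([812, 798, 825, 805, 790, 818, 801, 809, 795, 821], float)

mu, sigma = strength.mean(), strength.std(ddof=1)
stat, p = stats.shapiro(strength)            # crude check of the normality assumption
p10 = stats.norm.ppf(0.10, mu, sigma)        # 10th-percentile point estimate
print(mu, sigma, p, p10)
```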

  13. Analyses of global sea surface temperature 1856-1991

    NASA Astrophysics Data System (ADS)

    Kaplan, Alexey; Cane, Mark A.; Kushnir, Yochanan; Clement, Amy C.; Blumenthal, M. Benno; Rajagopalan, Balaji

    1998-08-01

Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model, again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) [Parker et al., 1994] of the Global Ocean Surface Temperature Atlas (GOSTA) [Bottomley et al., 1990]. These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. [1996]). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data are sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data. Comparison with other indications of the El Niño-Southern Oscillation cycle shows that the analyses provide usable information on interannual variability as far back as the 1860s.
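
    The OI step underlying analyses like this follows the standard update a = b + K(y − Hb), with gain K = BHᵀ(HBHᵀ + R)⁻¹. Below is a minimal NumPy sketch on a toy four-point grid; the covariances, observation operator, and values are made-up assumptions, not the MOHSST5 configuration used in the paper.

    ```python
    import numpy as np

    # Toy optimal-interpolation (OI) update on a 4-point "SST anomaly" grid.
    n = 4
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    B = 0.25 * np.exp(-dist / 2.0)          # spatially correlated background errors
    H = np.array([[1.0, 0, 0, 0],
                  [0, 0, 1.0, 0]])          # observations at grid points 0 and 2
    R = 0.05 * np.eye(2)                    # independent observation errors

    background = np.zeros(n)                # prior anomaly estimate
    obs = np.array([0.8, 0.5])              # observed anomalies

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    analysis = background + K @ (obs - H @ background)
    A = B - K @ H @ B                       # analysis-error covariance

    print(np.round(analysis, 3))
    ```

    The diagonal of A is smaller than that of B everywhere, including at unobserved grid points, which is the sense in which the method "fills in" sparse data.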

  14. Sign of the Zodiac as a predictor of survival for recipients of an allogeneic stem cell transplant for chronic myeloid leukaemia (CML): an artificial association.

    PubMed

    Szydlo, R M; Gabriel, I; Olavarria, E; Apperley, J

    2010-10-01

Astrological or Zodiac (star) sign has been shown to be a statistically significant factor in the outcome of a variety of diseases, conditions, and phenomena. To investigate its relevance in the context of a stem cell transplant (SCT), we examined its influence in chronic myeloid leukaemia, a disease with well-established prognostic factors. Data were collected on 626 patients who received a first myeloablative allogeneic SCT between 1981 and 2006. Star sign was determined for each patient. Univariate analyses comparing all 12 individual star signs showed considerable variation in 5-year survival probabilities, from 63% for Arians to 45% for Aquarians, but without statistical significance (P=.65). However, it was possible to pool together star signs likely to provide dichotomous results. Thus, grouping together Aries, Taurus, Gemini, Leo, Scorpio, and Capricorn (group A; n=317) versus others (group B; n=309) resulted in a highly significant difference (58% vs 48%; P=.007). When adjusted for known prognostic factors in a multivariate analysis, group B was associated with an increased risk of mortality when compared with group A (relative risk [RR], 1.37; P=.005). In this study, we show that, providing adequate care is taken, a significant relationship between patient star sign and survival post SCT for CML can be observed. This is, however, a completely erroneous result, and is based on the pooling together of observations to artificially create a statistically significant result. Statistical analyses should thus be carried out on a priori hypotheses and not designed post hoc to find a meaningful or significant result.
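
    The pooling artifact the abstract warns about is easy to reproduce by simulation. The sketch below assumes all twelve signs share the same true survival probability, then pools the six best-performing signs against the six worst after seeing the data, exactly the kind of post hoc grouping criticized above; sample sizes echo the paper but the data are entirely simulated.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(42)
    n_patients, n_signs, true_survival = 626, 12, 0.5
    n_sims, spurious = 200, 0
    for _ in range(n_sims):
        sign = rng.integers(0, n_signs, n_patients)     # random star sign
        alive = rng.random(n_patients) < true_survival  # outcome independent of sign
        # Post hoc: rank signs by observed survival, pool best six vs worst six.
        rates = np.array([alive[sign == s].mean() for s in range(n_signs)])
        best = np.argsort(rates)[n_signs // 2:]
        group_a = np.isin(sign, best)
        table = [[alive[group_a].sum(), (~alive[group_a]).sum()],
                 [alive[~group_a].sum(), (~alive[~group_a]).sum()]]
        _, p, _, _ = chi2_contingency(table)
        spurious += p < 0.05
    print(f"'significant' at p<0.05 in {spurious / n_sims:.0%} of null simulations")
    ```

    Under the null, a valid test should reject about 5% of the time; the data-driven pooling rejects far more often, which is the "artificial association" of the title.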

  15. The added value of ordinal analysis in clinical trials: an example in traumatic brain injury.

    PubMed

    Roozenbeek, Bob; Lingsma, Hester F; Perel, Pablo; Edwards, Phil; Roberts, Ian; Murray, Gordon D; Maas, Andrew Ir; Steyerberg, Ewout W

    2011-01-01

    In clinical trials, ordinal outcome measures are often dichotomized into two categories. In traumatic brain injury (TBI) the 5-point Glasgow outcome scale (GOS) is collapsed into unfavourable versus favourable outcome. Simulation studies have shown that exploiting the ordinal nature of the GOS increases chances of detecting treatment effects. The objective of this study is to quantify the benefits of ordinal analysis in the real-life situation of a large TBI trial. We used data from the CRASH trial that investigated the efficacy of corticosteroids in TBI patients (n = 9,554). We applied two techniques for ordinal analysis: proportional odds analysis and the sliding dichotomy approach, where the GOS is dichotomized at different cut-offs according to baseline prognostic risk. These approaches were compared to dichotomous analysis. The information density in each analysis was indicated by a Wald statistic. All analyses were adjusted for baseline characteristics. Dichotomous analysis of the six-month GOS showed a non-significant treatment effect (OR = 1.09, 95% CI 0.98 to 1.21, P = 0.096). Ordinal analysis with proportional odds regression or sliding dichotomy showed highly statistically significant treatment effects (OR 1.15, 95% CI 1.06 to 1.25, P = 0.0007 and 1.19, 95% CI 1.08 to 1.30, P = 0.0002), with 2.05-fold and 2.56-fold higher information density compared to the dichotomous approach respectively. Analysis of the CRASH trial data confirmed that ordinal analysis of outcome substantially increases statistical power. We expect these results to hold for other fields of critical care medicine that use ordinal outcome measures and recommend that future trials adopt ordinal analyses. This will permit detection of smaller treatment effects.
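
    The power gain from keeping the ordinal scale can be illustrated with a small simulation. The sketch below compares a rank-based test on all five categories against a chi-square test on a dichotomized outcome, with a proportional-odds treatment effect; the category probabilities, odds ratio of 1.5, and sample sizes are invented for illustration and are not the CRASH data, and the Mann-Whitney test stands in for the proportional-odds regression used in the paper.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu, chi2_contingency

    rng = np.random.default_rng(1)
    # Five-level ordinal outcome (categories 0-4 stand in for the GOS levels).
    base_cum = np.array([0.2, 0.4, 0.6, 0.8])            # control cumulative probs
    logit = lambda p: np.log(p / (1 - p))
    inv_logit = lambda x: 1 / (1 + np.exp(-x))
    trt_cum = inv_logit(logit(base_cum) + np.log(1.5))   # proportional-odds shift

    def draw(cum, n):
        # Sample categories 0-4 from the cumulative probabilities.
        return np.searchsorted(np.append(cum, 1.0), rng.random(n), side='right')

    n, sims, alpha = 500, 200, 0.05
    hits_ord = hits_dic = 0
    for _ in range(sims):
        c, t = draw(base_cum, n), draw(trt_cum, n)
        # Ordinal: rank-based test uses all five categories.
        p_ord = mannwhitneyu(t, c, alternative='two-sided').pvalue
        # Dichotomized: collapse to >=3 vs <3 before testing.
        table = [[(t >= 3).sum(), (t < 3).sum()],
                 [(c >= 3).sum(), (c < 3).sum()]]
        p_dic = chi2_contingency(table)[1]
        hits_ord += p_ord < alpha
        hits_dic += p_dic < alpha
    print(f"power ordinal {hits_ord/sims:.2f} vs dichotomized {hits_dic/sims:.2f}")
    ```

    The ordinal test detects the same simulated effect more often, mirroring the higher information density reported for the CRASH reanalysis.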

  16. Lithium and neuroleptics in combination: is there enhancement of neurotoxicity leading to permanent sequelae?

    PubMed

    Goldman, S A

    1996-10-01

Neurotoxicity in relation to concomitant administration of lithium and neuroleptic drugs, particularly haloperidol, has been an ongoing issue. This study examined whether use of lithium with neuroleptic drugs enhances neurotoxicity leading to permanent sequelae. The Spontaneous Reporting System database of the United States Food and Drug Administration and extant literature were reviewed for spectrum cases of lithium/neuroleptic neurotoxicity. Groups taking lithium alone (Li), lithium/haloperidol (LiHal) and lithium/nonhaloperidol neuroleptics (LiNeuro), each paired for recovery and sequelae, were established for 237 cases. Statistical analyses included pairwise comparisons of lithium levels using the Wilcoxon Rank Sum procedure and logistic regression to analyze the relationship between independent variables and development of sequelae. The Li and LiNeuro groups showed significant statistical differences in median lithium levels between recovery and sequelae pairs, whereas the LiHal pair did not differ significantly. Lithium level was associated with sequelae development overall and within the Li and LiNeuro groups; no such association was evident in the LiHal group. On multivariable logistic regression analysis, lithium level and taking lithium/haloperidol were significant factors in the development of sequelae, with multiple possibly confounding factors (e.g., age, sex) not statistically significant. Multivariable logistic regression analyses with neuroleptic dose as five discrete dose ranges or actual dose did not show an association between development of sequelae and dose. Database limitations notwithstanding, the lack of apparent impact of serum lithium level on the development of sequelae in patients treated with haloperidol contrasts notably with results in the Li and LiNeuro groups. These findings may suggest a possible effect of pharmacodynamic factors in lithium/neuroleptic combination therapy.

  17. Large-scale replication study reveals a limit on probabilistic prediction in language comprehension.

    PubMed

    Nieuwland, Mante S; Politzer-Ahles, Stephen; Heyselaar, Evelien; Segaert, Katrien; Darley, Emily; Kazanina, Nina; Von Grebmer Zu Wolfsthurn, Sarah; Bartolozzi, Federica; Kogan, Vita; Ito, Aine; Mézière, Diane; Barr, Dale J; Rousselet, Guillaume A; Ferguson, Heather J; Busch-Moreno, Simon; Fu, Xiao; Tuomainen, Jyrki; Kulakova, Eugenia; Husband, E Matthew; Donaldson, David I; Kohút, Zdenko; Rueschemeyer, Shirley-Ann; Huettig, Falk

    2018-04-03

Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment ('cloze'). In our direct replication study spanning 9 laboratories (N = 334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words. © 2018, Nieuwland et al.

  18. Non-invasive brain stimulation to investigate language production in healthy speakers: A meta-analysis.

    PubMed

    Klaus, Jana; Schutter, Dennis J L G

    2018-06-01

Non-invasive brain stimulation (NIBS) has become a common method to study the interrelations between the brain and language functioning. This meta-analysis examined the efficacy of transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) in the study of language production in healthy volunteers. Forty-five effect sizes from 30 studies which investigated the effects of NIBS on picture naming or verbal fluency in healthy participants were meta-analysed. Further sub-analyses investigated potential influences of stimulation type, control, target site, task, online vs. offline application, and current density of the target electrode. Random effects modelling showed a small, but reliable effect of NIBS on language production. Subsequent analyses indicated larger weighted mean effect sizes for TMS as compared to tDCS studies. No statistical differences for the other sub-analyses were observed. We conclude that NIBS is a useful method for neuroscientific studies on language production in healthy volunteers. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Large-scale replication study reveals a limit on probabilistic prediction in language comprehension

    PubMed Central

    Politzer-Ahles, Stephen; Heyselaar, Evelien; Segaert, Katrien; Darley, Emily; Kazanina, Nina; Von Grebmer Zu Wolfsthurn, Sarah; Bartolozzi, Federica; Kogan, Vita; Ito, Aine; Mézière, Diane; Barr, Dale J; Rousselet, Guillaume A; Ferguson, Heather J; Busch-Moreno, Simon; Fu, Xiao; Tuomainen, Jyrki; Kulakova, Eugenia; Husband, E Matthew; Donaldson, David I; Kohút, Zdenko; Rueschemeyer, Shirley-Ann; Huettig, Falk

    2018-01-01

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words. PMID:29631695

  20. Total mercury in infant food, occurrence and exposure assessment in Portugal.

    PubMed

    Martins, Carla; Vasco, Elsa; Paixão, Eleonora; Alvito, Paula

    2013-01-01

Commercial infant food labelled as from organic and conventional origin (n = 87) was analysed for total mercury content using a direct mercury analyser (DMA). Median contents of mercury were 0.50, 0.50 and 0.40 μg kg⁻¹ for processed cereal-based food, infant formulae and baby foods, respectively, with a maximum value of 19.56 μg kg⁻¹ in a baby food containing fish. Processed cereal-based food samples showed statistically significant differences in mercury content between organic and conventional products. Consumption of the commercial infant food analysed did not pose a risk to infants when the provisionally tolerable weekly intake (PTWI) for food other than fish and shellfish was considered. On the contrary, a risk to some infants could not be excluded when using the PTWI for fish and shellfish. This is the first study reporting contents of total mercury in commercial infant food from both farming systems and the first on exposure assessment of children to mercury in Portugal.

  1. Uncommon knowledge of a common phenomenon: intuitions and statistical thinking about gender birth ratio

    NASA Astrophysics Data System (ADS)

    Peled, Ofra N.; Peled, Irit; Peled, Jonathan U.

    2013-01-01

    The phenomenon of birth of a baby is a common and familiar one, and yet college students participating in a general biology class did not possess the expected common knowledge of the equal probability of gender births. We found that these students held strikingly skewed conceptions regarding gender birth ratio, estimating the number of female births to be more than twice the number of male births. Possible sources of these beliefs were analysed, showing flaws in statistical thinking such as viewing small unplanned samples as representing the whole population and making inferences from an inappropriate population. Some educational implications are discussed and a short teaching example (using data assembly) demonstrates an instructional direction that might facilitate conceptual change.

  2. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
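
    The statement's recommendation to choose between fixed-effect and random-effects pooling on clinical and methodological grounds presupposes the two estimators themselves. As a reference point, here is a minimal sketch of inverse-variance fixed-effect pooling and DerSimonian-Laird random-effects pooling; the five studies and their variances are hypothetical, and this is a generic textbook implementation, not software endorsed by the statement.

    ```python
    import numpy as np

    def pool(effects, variances):
        """Inverse-variance fixed-effect and DerSimonian-Laird random-effects
        pooling of study-level effects (e.g. log odds ratios) with their
        within-study variances. Generic sketch for illustration."""
        effects = np.asarray(effects, float)
        variances = np.asarray(variances, float)
        w = 1.0 / variances                               # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q heterogeneity
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                     # between-study variance (DL)
        w_star = 1.0 / (variances + tau2)                 # random-effects weights
        random_eff = np.sum(w_star * effects) / np.sum(w_star)
        return fixed, random_eff, tau2, q

    # Five hypothetical trials: log odds ratios and their variances.
    fixed, random_eff, tau2, q = pool([-0.3, -0.1, -0.25, 0.05, -0.2],
                                      [0.02, 0.05, 0.03, 0.04, 0.02])
    print(round(fixed, 3), round(random_eff, 3), round(tau2, 4))
    ```

    Note that Q appears here only as a descriptive heterogeneity measure; per the statement, the model choice should not hinge on a significance test of Q.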

  3. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298

  4. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
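
    The quantities this primer centers on, AUC from raw scores and interval diagnostic likelihood ratios, can be computed in a few lines. The sketch below uses fabricated checklist scores (not the CBCL dataset), the rank-sum identity for AUC, and score bands loosely echoing the <8 and >30 cut-offs above; all numbers are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Hypothetical checklist raw scores: cases score higher on average.
    cases = rng.normal(22, 8, 150).clip(0)     # children with a mood disorder
    controls = rng.normal(12, 8, 439).clip(0)  # children without

    # AUC via the rank-sum identity: P(random case scores above random control).
    scores = np.concatenate([cases, controls])
    labels = np.concatenate([np.ones(len(cases)), np.zeros(len(controls))])
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    auc = (ranks[labels == 1].sum() - len(cases) * (len(cases) + 1) / 2) \
          / (len(cases) * len(controls))

    # Interval diagnostic likelihood ratios for low/mid/high score bands:
    # P(score in band | disorder) / P(score in band | no disorder).
    bins = [0, 8, 28, np.inf]
    dlr = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        p_case = ((cases >= lo) & (cases < hi)).mean()
        p_ctrl = ((controls >= lo) & (controls < hi)).mean()
        dlr.append(p_case / p_ctrl)
    print(round(auc, 2), [round(x, 2) for x in dlr])
    ```

    A band with DLR well below 1 argues against the diagnosis and one well above 1 argues for it, which is how such statistics feed bedside decision making.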

  5. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.

  6. Cross-sectional associations between air pollution and chronic bronchitis: an ESCAPE meta-analysis across five cohorts.

    PubMed

    Cai, Yutong; Schikowski, Tamara; Adam, Martin; Buschka, Anna; Carsin, Anne-Elie; Jacquemin, Benedicte; Marcon, Alessandro; Sanchez, Margaux; Vierkötter, Andrea; Al-Kanaani, Zaina; Beelen, Rob; Birk, Matthias; Brunekreef, Bert; Cirach, Marta; Clavel-Chapelon, Françoise; Declercq, Christophe; de Hoogh, Kees; de Nazelle, Audrey; Ducret-Stich, Regina E; Valeria Ferretti, Virginia; Forsberg, Bertil; Gerbase, Margaret W; Hardy, Rebecca; Heinrich, Joachim; Hoek, Gerard; Jarvis, Debbie; Keidel, Dirk; Kuh, Diana; Nieuwenhuijsen, Mark J; Ragettli, Martina S; Ranzi, Andrea; Rochat, Thierry; Schindler, Christian; Sugiri, Dorothea; Temam, Sofia; Tsai, Ming-Yi; Varraso, Raphaëlle; Kauffmann, Francine; Krämer, Ursula; Sunyer, Jordi; Künzli, Nino; Probst-Hensch, Nicole; Hansell, Anna L

    2014-11-01

This study aimed to assess associations of outdoor air pollution on prevalence of chronic bronchitis symptoms in adults in five cohort studies (Asthma-E3N, ECRHS, NSHD, SALIA, SAPALDIA) participating in the European Study of Cohorts for Air Pollution Effects (ESCAPE) project. Annual average particulate matter (PM(10), PM(2.5), PM(absorbance), PM(coarse)), NO(2), nitrogen oxides (NO(x)) and road traffic measures modelled from ESCAPE measurement campaigns 2008-2011 were assigned to home address at most recent assessments (1998-2011). Symptoms examined were chronic bronchitis (cough and phlegm for ≥3 months of the year for ≥2 years), chronic cough (with/without phlegm) and chronic phlegm (with/without cough). Cohort-specific cross-sectional multivariable logistic regression analyses were conducted using common confounder sets (age, sex, smoking, interview season, education), followed by meta-analysis. 15 279 and 10 537 participants respectively were included in the main NO(2) and PM analyses at assessments in 1998-2011. Overall, there were no statistically significant associations with any air pollutant or traffic exposure. Sensitivity analyses restricted to asthmatics only or females only, or using back-extrapolated NO(2) and PM10 for assessments in 1985-2002 (ECRHS, NSHD, SALIA, SAPALDIA), did not alter conclusions. In never-smokers, all associations were positive, but reached statistical significance only for chronic phlegm with PM(coarse) OR 1.31 (1.05 to 1.64) per 5 µg/m(3) increase and PM(10) with similar effect size. Sensitivity analyses of older cohorts showed increased risk of chronic cough with PM(2.5abs) (black carbon) exposures. Results do not show consistent associations between chronic bronchitis symptoms and current traffic-related air pollution in adult European populations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies

    PubMed Central

    Liu, Zhonghua; Lin, Xihong

    2017-01-01

We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391

  8. Multiple phenotype association tests using summary statistics in genome-wide association studies.

    PubMed

    Liu, Zhonghua; Lin, Xihong

    2018-03-01

    We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.
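
    A simple member of this class of methods is the joint chi-square test of the per-phenotype z-scores after accounting for their correlation. The sketch below implements that generic statistic, zᵀR⁻¹z ~ χ²(K) under the null; the z-scores, the 3×3 correlation matrix, and the function name are invented for illustration, and this is not the specific mixed-model statistic the paper proposes.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def joint_phenotype_test(z, R):
        """Joint chi-square test of one variant's association with K phenotypes,
        given per-phenotype GWAS z-scores and the between-phenotype correlation
        matrix R estimated from summary statistics of null SNPs."""
        z = np.asarray(z, float)
        stat = z @ np.linalg.solve(R, z)   # z' R^{-1} z ~ chi2_K under the null
        p = chi2.sf(stat, df=len(z))
        return stat, p

    # Hypothetical z-scores for one SNP across three correlated lipid traits.
    R = np.array([[1.0, 0.4, 0.2],
                  [0.4, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
    stat, p = joint_phenotype_test([2.5, -1.8, 0.6], R)
    print(round(stat, 2), round(p, 4))
    ```

    Because only z-scores and an estimated correlation matrix are needed, no individual-level genotype data are accessed, which is the practical appeal the abstract emphasizes.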

  9. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
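
    The source-oriented step described above, deriving factors from a species matrix by principal component analysis before regression, can be sketched on synthetic data. The two latent "sources", the species loadings, and the noise level below are invented for illustration and do not reproduce the Philadelphia dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic daily concentrations of six species driven by two latent
    # "sources" (traffic-like and crustal-like); all values are made up.
    days = 365
    traffic = rng.gamma(2.0, 1.0, days)
    crustal = rng.gamma(2.0, 1.0, days)
    loadings = np.array([[1.0, 0.1],   # EC
                         [0.9, 0.2],   # NO2
                         [0.1, 1.0],   # Si
                         [0.2, 0.9],   # Ca
                         [0.6, 0.5],   # PM2.5 mass
                         [0.5, 0.6]])  # PM10 mass
    X = np.column_stack([traffic, crustal]) @ loadings.T
    X += rng.normal(0, 0.2, X.shape)   # measurement noise

    # Principal component analysis on the standardized species matrix.
    Z = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    scores = Z @ Vt.T                  # daily factor scores, usable as regressors
    print(np.round(explained[:2], 2))
    ```

    The first two components absorb most of the variance, and their daily scores are the kind of source factors that would then enter the time-series mortality regression in place of single pollutants.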

  10. Randomized trial of parent training to prevent adolescent problem behaviors during the high school transition.

    PubMed

    Mason, W Alex; Fleming, Charles B; Gross, Thomas J; Thompson, Ronald W; Parra, Gilbert R; Haggerty, Kevin P; Snyder, James J

    2016-12-01

    This randomized controlled trial tested a widely used general parent training program, Common Sense Parenting (CSP), with low-income 8th graders and their families to support a positive transition to high school. The program was tested in its original 6-session format and in a modified format (CSP-Plus), which added 2 sessions that included adolescents. Over 2 annual cohorts, 321 families were enrolled and randomly assigned to either the CSP, CSP-Plus, or minimal-contact control condition. Pretest, posttest, 1-year follow-up, and 2-year follow-up survey data on parenting as well as youth school bonding, social skills, and problem behaviors were collected from parents and youth (94% retention). Extending prior examinations of posttest outcomes, intent-to-treat regression analyses tested for intervention effects at the 2 follow-up assessments, and growth curve analyses examined experimental condition differences in yearly change across time. Separate exploratory tests of moderation by youth gender, youth conduct problems, and family economic hardship also were conducted. Out of 52 regression models predicting 1- and 2-year follow-up outcomes, only 2 out of 104 possible intervention effects were statistically significant. No statistically significant intervention effects were found in the growth curve analyses. Tests of moderation also showed few statistically significant effects. Because CSP already is in widespread use, findings have direct implications for practice. Specifically, findings suggest that the program may not be efficacious with parents of adolescents in a selective prevention context and may reveal the limits of brief, general parent training for achieving outcomes with parents of adolescents. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.

    PubMed

    Schmitt, M; Grub, J; Heib, F

    2015-06-01

Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we developed three approaches, by sigmoid fitting, by independent and by dependent statistical analyses, which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data which are independent of "user skills" and subjectivity of the operator, which is also urgently needed to evaluate dynamic measurements of contact angles. We will show in this contribution that the slightly modified procedures are also applicable to find specific angles for experiments on horizontal surfaces. As an example, droplets on a flat freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and the receding of the drop obtained by high-precision drop shape analysis are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small covered distance and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawing of the liquid, are identifiable, which confirms the flatness and the chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
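
    The sigmoid-fitting approach mentioned above can be sketched generically: fit a logistic curve to contact angle versus time (or drop-front position) and read a specific angle from the plateau. The data, parameter values, and function names below are invented for illustration and do not reproduce the authors' procedure in detail.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, lower, upper, x0, k):
        # Logistic transition between two contact-angle plateaus (degrees).
        return lower + (upper - lower) / (1.0 + np.exp(-k * (x - x0)))

    rng = np.random.default_rng(5)
    x = np.linspace(0, 10, 120)                 # e.g. seconds of liquid dosing
    true_angle = sigmoid(x, 35.0, 62.0, 4.0, 1.5)
    angle = true_angle + rng.normal(0, 0.5, x.size)   # measurement noise

    params, _ = curve_fit(sigmoid, x, angle, p0=[30, 70, 5, 1])
    lower, upper, x0, k = params
    print(f"upper plateau ~ {upper:.1f} deg")
    ```

    The fitted plateau serves as an operator-independent specific angle, which is the point of replacing by-eye angle selection with a fit.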

  12. Cancer Statistics Animator

    Cancer.gov

This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex, and provides access to incidence, mortality, and survival statistics. Select the type of statistic, variables, and format, and then extract the statistics in a delimited format for further analyses.

  13. Searching Choices: Quantifying Decision-Making Processes Using Search Engine Data.

    PubMed

    Moat, Helen Susannah; Olivola, Christopher Y; Chater, Nick; Preis, Tobias

    2016-07-01

    When making a decision, humans consider two types of information: information they have acquired through their prior experience of the world, and further information they gather to support the decision in question. Here, we present evidence that data from search engines such as Google can help us model both sources of information. We show that statistics from search engines on the frequency of content on the Internet can help us estimate the statistical structure of prior experience; and, specifically, we outline how such statistics can inform psychological theories concerning the valuation of human lives, or choices involving delayed outcomes. Turning to information gathering, we show that search query data might help measure human information gathering, and it may predict subsequent decisions. Such data enable us to compare information gathered across nations, where analyses suggest, for example, a greater focus on the future in countries with a higher per capita GDP. We conclude that search engine data constitute a valuable new resource for cognitive scientists, offering a fascinating new tool for understanding the human decision-making process. Copyright © 2016 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  14. Association analysis of multiple traits by an approach of combining P values.

    PubMed

    Chen, Lili; Wang, Yong; Zhou, Yajing

    2018-03-01

Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most are suitable mainly for detecting common variants associated with multiple traits. Because of the low minor allele frequencies of rare variants, however, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. We then take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to differing directions of effect among causal variants.
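
    A minimal sketch of the reverse-regression-plus-weighted-P-value idea, with permutation to assess the combined statistic; the weighting scheme, data, and all function names here are illustrative assumptions, not the published ADA implementation:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def variant_pvalue(g, traits):
        """Reverse regression: genotype ~ all traits jointly, overall F-test."""
        n, k = traits.shape
        X = np.column_stack([np.ones(n), traits])
        beta, *_ = np.linalg.lstsq(X, g, rcond=None)
        resid = g - X @ beta
        ss_res = resid @ resid
        ss_tot = ((g - g.mean()) ** 2).sum()
        if ss_tot == 0.0:                      # monomorphic variant
            return 1.0
        f = ((ss_tot - ss_res) / k) / (ss_res / (n - k - 1))
        return float(stats.f.sf(f, k, n - k - 1))

    def region_statistic(G, traits, weights):
        """Weighted Fisher-type combination of per-variant P values."""
        pvals = np.array([variant_pvalue(G[:, j], traits) for j in range(G.shape[1])])
        pvals = np.clip(pvals, 1e-300, 1.0)
        return -(weights * np.log(pvals)).sum()

    def region_test(G, traits, weights, n_perm=200):
        """Permutation P value for the region-level combined statistic."""
        obs = region_statistic(G, traits, weights)
        perms = [region_statistic(G[rng.permutation(len(G))], traits, weights)
                 for _ in range(n_perm)]
        return (1 + sum(s >= obs for s in perms)) / (n_perm + 1)

    # toy data: 100 individuals, 5 rare variants, 2 traits
    n, m = 100, 5
    G = (rng.random((n, m)) < 0.05).astype(float)      # rare allele carriers
    traits = rng.standard_normal((n, 2))
    w = 1.0 / (G.mean(axis=0) + 1e-3)                  # up-weight rarer variants
    p = region_test(G, traits, w)
    assert 0.0 < p <= 1.0
    ```

    Permutation sidesteps the awkward null distribution of a weighted combination, at the cost of compute.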

  15. Water-quality characteristics and trends for selected sites at and near the Idaho National Laboratory, Idaho, 1949-2009

    USGS Publications Warehouse

    Bartholomay, Roy C.; Davis, Linda C.; Fisher, Jason C.; Tucker, Betty J.; Raben, Flint A.

    2012-01-01

The U.S. Geological Survey, in cooperation with the U.S. Department of Energy, analyzed water-quality data collected from 67 aquifer wells and 7 surface-water sites at the Idaho National Laboratory (INL) from 1949 through 2009. The data analyzed included major cations, anions, nutrients, trace elements, and total organic carbon. The analyses were performed to examine water-quality trends that might inform future management decisions about the number of wells to sample at the INL and the type of constituents to monitor. Water-quality trends were determined using (1) the nonparametric Kendall's tau correlation coefficient, p-value, Theil-Sen slope estimator, and summary statistics for uncensored data; and (2) the Kaplan-Meier method for calculating summary statistics, Kendall's tau correlation coefficient, p-value, and Akritas-Theil-Sen slope estimator for robust linear regression for censored data. Statistical analyses for chloride concentrations indicate that groundwater influenced by Big Lost River seepage has decreasing chloride trends or, in some cases, has variable chloride concentration changes that correlate with above-average and below-average periods of recharge. Analyses of trends for chloride in water samples from four sites located along the Big Lost River indicate a decreasing trend or no trend for chloride, and chloride concentrations generally are much lower at these four sites than those in the aquifer. Above-average and below-average periods of recharge also affect concentration trends for sodium, sulfate, nitrate, and a few trace elements in several wells. Analyses of trends for constituents in several wells whose water is mostly regionally derived groundwater generally indicate increasing trends for chloride, sodium, sulfate, and nitrate concentrations. These increases are attributed to agricultural or other anthropogenic influences on the aquifer upgradient of the INL.
Statistical trends of chemical constituents from several wells near the Naval Reactors Facility may be influenced by wastewater disposal at the facility or by anthropogenic influence from the Little Lost River basin. Groundwater samples from three wells downgradient of the Power Burst Facility area show increasing trends for chloride, nitrate, sodium, and sulfate concentrations. The increases could be caused by wastewater disposal in the Power Burst Facility area. Some groundwater samples in the southwestern part of the INL and southwest of the INL show concentration trends for chloride and sodium that may be influenced by wastewater disposal. Some of the groundwater samples have decreasing trends that could be attributed to the decreasing concentrations in the wastewater from the late 1970s to 2009. The young fraction of groundwater in many of the wells is more than 20 years old, so samples collected in the early 1990s are more representative of groundwater discharged in the 1960s and 1970s, when concentrations in wastewater were much higher. Groundwater sampled in 2009 would be representative of the lower concentrations of chloride and sodium in wastewater discharged in the late 1980s. Analyses of trends for sodium in several groundwater samples from the central and southern part of the eastern Snake River aquifer show increasing trends. In most cases, however, the sodium concentrations are less than background concentrations measured in the aquifer. Many of the wells are open to larger mixed sections of the aquifer, and the increasing trends may indicate that the long history of wastewater disposal in the central part of the INL is increasing sodium concentrations in the groundwater.
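
    The uncensored-data trend machinery described above (Kendall's tau for monotonic trend plus a Theil-Sen robust slope) can be sketched with scipy; the chloride series below is invented for illustration:

    ```python
    import numpy as np
    from scipy import stats

    # synthetic chloride record with an upward trend (values are illustrative)
    rng = np.random.default_rng(1)
    years = np.arange(1990, 2010, dtype=float)
    chloride = 10.0 + 0.4 * (years - 1990) + rng.normal(0.0, 0.5, years.size)

    tau, p_value = stats.kendalltau(years, chloride)             # monotonic trend test
    slope, intercept, lo, hi = stats.theilslopes(chloride, years)  # robust slope + CI

    assert p_value < 0.05 and slope > 0.0   # increasing trend detected
    ```

    For censored records (values below a detection limit), the abstract's Kaplan-Meier and Akritas-Theil-Sen variants replace these calls.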

  16. Study/experimental/research design: much more than statistics.

    PubMed

    Knight, Kenneth L

    2010-01-01

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand. The objectives of this article are to clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design for article comprehension, and to encourage authors to describe study designs correctly. The role of study design is traced from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and multiple (and different) analyses of a single data set, data collection is very different from statistical design. Thus, both a study design and a statistical design are necessary. With both in place, scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

  17. Statistically derived contributions of diverse human influences to twentieth-century temperature changes

    NASA Astrophysics Data System (ADS)

    Estrada, Francisco; Perron, Pierre; Martínez-López, Benjamín

    2013-12-01

    The warming of the climate system is unequivocal as evidenced by an increase in global temperatures by 0.8°C over the past century. However, the attribution of the observed warming to human activities remains less clear, particularly because of the apparent slow-down in warming since the late 1990s. Here we analyse radiative forcing and temperature time series with state-of-the-art statistical methods to address this question without climate model simulations. We show that long-term trends in total radiative forcing and temperatures have largely been determined by atmospheric greenhouse gas concentrations, and modulated by other radiative factors. We identify a pronounced increase in the growth rates of both temperatures and radiative forcing around 1960, which marks the onset of sustained global warming. Our analyses also reveal a contribution of human interventions to two periods when global warming slowed down. Our statistical analysis suggests that the reduction in the emissions of ozone-depleting substances under the Montreal Protocol, as well as a reduction in methane emissions, contributed to the lower rate of warming since the 1990s. Furthermore, we identify a contribution from the two world wars and the Great Depression to the documented cooling in the mid-twentieth century, through lower carbon dioxide emissions. We conclude that reductions in greenhouse gas emissions are effective in slowing the rate of warming in the short term.
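
    A naive least-squares grid search over candidate break years illustrates the kind of growth-rate break the authors identify around 1960; this toy sketch is far simpler than the structural-break econometrics actually used, and all values are synthetic:

    ```python
    import numpy as np

    # synthetic temperature-like series whose growth rate increases at 1960
    rng = np.random.default_rng(5)
    years = np.arange(1900, 2000, dtype=float)
    rate = np.where(years < 1960, 0.002, 0.020)        # per-year increments
    temp = np.cumsum(rate) + rng.normal(0.0, 0.02, years.size)

    def sse_two_segments(k):
        """Residual sum of squares of a two-segment linear fit, break at index k."""
        total = 0.0
        for seg in (slice(None, k), slice(k, None)):
            coef = np.polyfit(years[seg], temp[seg], 1)
            total += ((temp[seg] - np.polyval(coef, years[seg])) ** 2).sum()
        return total

    best = min(range(10, years.size - 10), key=sse_two_segments)
    assert abs(years[best] - 1960.0) <= 5.0   # break located near 1960
    ```

    Formal break tests additionally supply significance levels and confidence intervals for the break date, which this grid search does not.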

  18. The classification of secondary colorectal liver cancer in human biopsy samples using angular dispersive x-ray diffraction and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Theodorakou, Chrysoula; Farquharson, Michael J.

    2009-08-01

The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data from 56 samples were analysed using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitude, the area, and the amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analysis. The statistical analysis showed significant differences in all the peak-fitting parameters and ratios between the normal and the diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases, resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different and that x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.

  19. Using Network Analysis to Characterize Biogeographic Data in a Community Archive

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Bristol, S.

    2017-12-01

Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. By gaining understanding through this investigation and other scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool for revealing patterns of non-trivial topological features in data that exhibit neither complete regularity nor complete randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. The use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.
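
    A minimal sketch of such a network view using the networkx library; the dataset names and shared-attribute links are invented for illustration:

    ```python
    import networkx as nx

    # toy network: archived datasets linked when they share taxa or attributes
    G = nx.Graph([("dataset_A", "dataset_B"), ("dataset_B", "dataset_C"),
                  ("dataset_A", "dataset_C"), ("dataset_C", "dataset_D")])

    degree = dict(G.degree())                # local connectedness of each record
    between = nx.betweenness_centrality(G)   # brokerage between parts of the archive
    clustering = nx.average_clustering(G)    # density of local ties

    assert degree["dataset_C"] == 3          # hub record
    assert between["dataset_D"] == 0.0       # leaf lies on no shortest paths
    assert clustering > 0.0                  # the A-B-C triangle contributes
    ```

    On a real catalog the edge list would be derived from metadata overlap, and these same measures identify hub datasets and isolated contributions.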

  20. The determinants of bond angle variability in protein/peptide backbones: A comprehensive statistical/quantum mechanics analysis.

    PubMed

    Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana

    2015-11-01

    The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures and quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as function of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme here employed and demonstrates that the valence geometry of protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, is also dependent on the local conformation. Most of the trends observed may be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen-bond like interactions. Moreover, we can provide a reliable estimate of the energies involved in the interplay between geometry and conformations. © 2015 Wiley Periodicals, Inc.

  1. Management Information Systems Design Implications: The Effect of Cognitive Style and Information Presentation on Problem Solving.

    DTIC Science & Technology

    1987-12-01

my thesis advisor, Dr Dennis E Campbell. Without his expert advice and extreme patience with an INTP like myself, this research would not have been...research was to identify a relationship between psychological type and mode of presentation of information. The type theory developed by Carl Jung and...preference rankings for seven different modes of presentation of data. The statistical analyses showed no relationship between personality type and

  2. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  3. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information

    PubMed Central

    van Heuven, Walter J. B.; Pitchford, Nicola J.; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as combination of several metrics to investigate linguistic properties of the Greek language are highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/. PMID:28231303

  4. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image quality, and polyp detection were assessed. Objective analysis demonstrated significant noise reduction with the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image quality was superior for LD MBIR on all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses were LD (dose-length product, 257.7) and STD (dose-length product, 483.6). LD MBIR CTC objectively showed improved image noise with the parameters used in our study, while subjective image quality was maintained. Polyp detection showed no significant difference but, because of small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  5. Appraisal of data for ground-water quality in Nebraska

    USGS Publications Warehouse

    Engberg, R.A.

    1984-01-01

This report summarizes existing data for groundwater quality in Nebraska and indicates their adequacy as a data base. Analyses have been made of water from nearly 10,000 wells by 8 agencies. Those analyses that meet reliability criteria have been aggregated by geologic source of water into four principal aquifer groupings: Holocene-Pleistocene aquifers, Tertiary aquifers, Mesozoic aquifers, and Paleozoic aquifers. For each aquifer grouping, data for specific conductance and 24 constituents in the water are summarized statistically. Diagrams are also presented showing differences in statistical parameters, or in chemical composition, of water from the different aquifer groupings. Additionally, for each grouping except the Paleozoic aquifers, maps show ranges in concentration of dissolved solids, calcium, alkalinity, and sulfate. Areas where data are insufficient to delimit ranges in concentration also are shown on the maps. Point-source contamination has been identified at 41 locations and nonpoint-source contamination in 3 areas, namely, the central Platte Valley, Holt County, and Boyd County. Potential for nonpoint-source contamination exists in 10 major areas, which together comprise more than one-third of the State. Existing data are mostly from specific projects having limited areas and objectives. Consequently, a lack of data exists for other areas and for certain geologic units, particularly the Mesozoic and Paleozoic aquifers. Specific data needs for each of the four principal aquifer groupings are indicated in a matrix table.

  6. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    PubMed

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as combination of several metrics to investigate linguistic properties of the Greek language are highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.

  7. Effect of Probiotic Curd on Salivary pH and Streptococcus mutans: A Double Blind Parallel Randomized Controlled Trial.

    PubMed

    Srivastava, Shivangi; Saha, Sabyasachi; Kumari, Minti; Mohd, Shafaat

    2016-02-01

Dairy products like curd seem to be the most natural way to ingest probiotics, which can reduce Streptococcus mutans levels and also increase salivary pH, thereby reducing dental caries risk. The aim was to estimate the effect of probiotic curd on salivary pH and Streptococcus mutans count over a period of 7 days. This double blind parallel randomized clinical trial was conducted at the institution with 60 caries-free volunteers aged 20-25 years who were randomly allocated into two groups. The Test Group consisted of 30 subjects who consumed 100 ml of probiotic curd daily for seven days, while an equal-numbered Control Group was given 100 ml of regular curd for seven days. Saliva samples were assessed at baseline and after ½ hour, 1 hour, and 7 days of the intervention period using a pH meter and Mitis Salivarius Bacitracin agar to estimate salivary pH and S. mutans count. Data were statistically analysed using paired and unpaired t-tests. The study revealed a reduction in salivary pH after ½ hour and 1 hour in both groups. However, after 7 days, normal curd showed a statistically significant (p < 0.05) reduction in salivary pH, while probiotic curd showed a statistically significant (p < 0.05) increase in salivary pH. Similarly, with regard to S. mutans colony counts, probiotic curd showed a statistically significant reduction (p < 0.05) compared to normal curd. Short-term consumption of probiotic curd produced marked salivary pH elevation and reduction of salivary S. mutans counts, and thus can be exploited for the prevention of enamel demineralization as a long-term remedy, keeping in mind its cost effectiveness.
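
    The paired (within-group change) and unpaired (between-group) t-tests used above can be sketched with scipy; the pH values below are simulated for illustration, not the trial's data:

    ```python
    import numpy as np
    from scipy import stats

    # simulated salivary pH (illustrative numbers only)
    rng = np.random.default_rng(3)
    baseline = rng.normal(7.0, 0.2, 30)                # test group, day 0
    day7 = baseline + rng.normal(0.3, 0.1, 30)         # rise after probiotic curd
    control_day7 = rng.normal(6.8, 0.2, 30)            # regular curd group, day 7

    t_paired, p_paired = stats.ttest_rel(day7, baseline)      # within-group change
    t_indep, p_indep = stats.ttest_ind(day7, control_day7)    # between groups

    assert p_paired < 0.05 and p_indep < 0.05
    ```

    The paired test uses each subject as their own control, which is why repeated saliva samples from the same volunteers must not be fed to the unpaired test.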

  8. Dissecting the genetics of complex traits using summary association statistics.

    PubMed

    Pasaniuc, Bogdan; Price, Alkes L

    2017-02-01

    During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.

  9. Statistical innovations in diagnostic device evaluation.

    PubMed

    Yu, Tinghui; Li, Qin; Gray, Gerry; Yue, Lilly Q

    2016-01-01

    Due to rapid technological development, innovations in diagnostic devices are proceeding at an extremely fast pace. Accordingly, the needs for adopting innovative statistical methods have emerged in the evaluation of diagnostic devices. Statisticians in the Center for Devices and Radiological Health at the Food and Drug Administration have provided leadership in implementing statistical innovations. The innovations discussed in this article include: the adoption of bootstrap and Jackknife methods, the implementation of appropriate multiple reader multiple case study design, the application of robustness analyses for missing data, and the development of study designs and data analyses for companion diagnostics.

  10. Distribution of water quality parameters in Dhemaji district, Assam (India).

    PubMed

    Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P

    2010-07-01

The primary objective of this study is to present a statistically significant water quality database for Dhemaji district, Assam (India), with special reference to pH, fluoride, nitrate, arsenic, iron, sodium, and potassium. Twenty-five water samples collected from different locations in five development blocks of Dhemaji district were studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrices) were employed to find the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit a non-uniform distribution with a long asymmetric tail on either the right or left side of the median. The width of the third quartile was consistently found to be greater than that of the second quartile for each parameter. Differences among mean, mode, and median, together with significant skewness and kurtosis values, indicate that the distribution of the various water quality parameters in the study area departs widely from normal. Thus, the intrinsic water quality is not encouraging, owing to the unsymmetrical distribution of various water quality parameters in the study area.
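
    The quartile-asymmetry and skewness checks described above can be sketched on a synthetic right-skewed sample; the lognormal distribution and all values are assumptions for illustration:

    ```python
    import numpy as np
    from scipy import stats

    # right-skewed synthetic concentrations, mimicking the reported asymmetry
    rng = np.random.default_rng(4)
    fluoride = rng.lognormal(mean=0.0, sigma=0.8, size=200)

    q1, q2, q3 = np.percentile(fluoride, [25, 50, 75])
    skew = stats.skew(fluoride)

    # a long right tail widens the third quartile relative to the second
    assert (q3 - q2) > (q2 - q1)
    assert skew > 0.0
    ```

    The same comparison of quartile widths, run per parameter, is what flags the departure from normality reported in the abstract.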

  11. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group

    PubMed Central

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T.; Pereira, Carol; Rosenkranz, Susan L.; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu (Jeanne); Wang, Rui; Lok, Judith

    2017-01-01

    Abstract The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. PMID:28350899

  12. Statistical analyses and computational prediction of helical kinks in membrane proteins

    NASA Astrophysics Data System (ADS)

    Huang, Y.-H.; Chen, C.-M.

    2012-10-01

    We have carried out statistical analyses and computer simulations of helical kinks for TM helices in the PDBTM database. About 59% of 1,562 TM helices showed a significant kink, and 38% of these kinks are associated with prolines within a range of ±4 residues. Our analyses show that helical kinks are more populated in the central region of helices, particularly in the range of 1-3 residues away from the helix center. Among the 1,053 helical kinks analyzed, 88% are bends (a change in helix axis without loss of helical character) and 12% are disruptions (a change in helix axis with loss of helical character). Proline residues tend to cause larger kink angles in helical bends, while this effect is not observed in helical disruptions. A further analysis of these kinked helices suggests that a kinked helix usually has 1-2 broken backbone hydrogen bonds, with the corresponding N-O distance in the range of 4.2-8.7 Å; the distribution of this distance is sharply peaked at 4.9 Å and decays exponentially with increasing distance. The main aims of this study are to understand the formation of helical kinks and to predict their structural features. We therefore performed molecular dynamics (MD) simulations under four simulation scenarios to investigate kink formation in 37 kinked TM helices and 5 unkinked TM helices. The representative models of these kinked helices are predicted by a clustering algorithm, SPICKER, from numerous decoy structures possessing the above generic features of kinked helices. Our results show an accuracy of 95% in predicting the kink position of kinked TM helices and an error of less than 10° in the angle prediction for 71.4% of kinked helices. For unkinked helices, based on various structure-similarity tests, our predicted models are highly consistent with their crystal structures. These results provide strong support for the validity of our method in predicting the structure of TM helices.

  13. Sensitivity of bud burst in key tree species in the UK to recent climate variability and change

    NASA Astrophysics Data System (ADS)

    Abernethy, Rachel; Cook, Sally; Hemming, Deborah; McCarthy, Mark

    2017-04-01

    Analysing the relationship between the changing climate of the UK and the spatial and temporal distribution of spring bud burst plays an important role in understanding ecosystem functionality and predicting future phenological trends. The location and timing of bud burst for eleven tree species, alongside climatic factors such as temperature, precipitation and hours of sunshine (photoperiod), were used to investigate: i. which species' bud burst timing is most affected by a changing climate, ii. which climatic factor has the greatest influence on the timing of bud burst, and iii. whether the location of bud burst is influenced by climate variability. Winter heatwave duration was also analysed as part of an investigation into the relationship between temperature trends of a specific winter period and the following spring events. Geographic Information Systems (GIS) and statistical analysis tools were used to visualise spatial patterns and to analyse the phenological and climate data through regression and analysis of variance (ANOVA) tests. Where areas showed a strong positive or negative relationship between phenology and climate, satellite imagery was used to calculate a Normalised Difference Vegetation Index (NDVI) and a Leaf Area Index (LAI) to further investigate the relationships found. It was expected that in the north of the UK, where bud burst tends to occur later in the year than in the south, bud burst would begin to occur earlier owing to increasing temperatures and increased hours of sunshine. However, initial results show that for some species, bud burst timing tends to remain unchanged or become later in the year. Further analysis will investigate the statistical significance of the relationships between the changing location of bud burst and each climatic factor.

  14. Cryptic or pseudocryptic: can morphological methods inform copepod taxonomy? An analysis of publications and a case study of the Eurytemora affinis species complex

    PubMed Central

    Lajus, Dmitry; Sukhikh, Natalia; Alekseev, Victor

    2015-01-01

    Interest in cryptic species has increased significantly with current progress in genetic methods. The large number of cryptic species suggests that the resolution of traditional morphological techniques may be insufficient for taxonomic research. However, some species now considered cryptic may, in fact, be designated pseudocryptic after close morphological examination. Thus the “cryptic or pseudocryptic” dilemma speaks to the resolution of morphological analysis and its utility for identifying species. We address this dilemma first by systematically reviewing data published from 1980 to 2013 on cryptic species of Copepoda, and then by performing an in-depth morphological study of the former Eurytemora affinis complex of cryptic species. Analysis of the published data showed that, in 5 of 24 revisions eligible for systematic review, cryptic species assignment was based solely on the genetic variation of forms, without detailed morphological analysis to confirm the assignment. Therefore, some newly described cryptic species might be designated pseudocryptic under more detailed morphological analysis, as happened with the Eurytemora affinis complex. Recent genetic analyses of the complex found high levels of heterogeneity without morphological differences, and the complex was therefore argued to be cryptic. However, subsequent detailed morphological analyses made it possible to describe a number of valid species. Our study of this species complex, using in-depth statistical analyses not usually applied when describing new species, confirmed considerable differences between the formerly cryptic species. In particular, fluctuating asymmetry (FA), the random variation of left and right structures, differed significantly between forms and provided independent information about their status. Our work showed that multivariate statistical approaches, such as principal component analysis, can be powerful techniques for the morphological discrimination of cryptic taxa. Despite increasing cryptic species designations, morphological techniques have great potential in copepod taxonomy. PMID:26120427

  15. Association between kindergarten and first-grade food insecurity and weight status in U.S. children.

    PubMed

    Lee, Arthur M; Scharf, Rebecca J; DeBoer, Mark D

    The aim of this study was to determine whether food insecurity is an independent risk factor for obesity in U.S. children. We analyzed data from a nationally representative sample of children participating in the Early Childhood Longitudinal Study-Kindergarten Cohort 2011. Statistical analyses were performed to evaluate longitudinal associations between food security and body mass index (BMI) z-score. All regression models included race/ethnicity, household income, and parental education. Survey and anthropometric data were collected from teachers and parents of 8,167 U.S. children entering kindergarten in fall 2010, with regular follow-up through third grade. Complete food security, socioeconomic, and BMI z-score data were required for inclusion in the statistical analyses. All analyses were weighted to be nationally representative. Children with household food insecurity had a higher obesity prevalence from kindergarten through grade 3; for example, at kindergarten, 16.4% (95% confidence interval [CI], 13.7-19) with food insecurity versus 12.4% (95% CI, 11.3-13.6) when food secure. Adjusted means analysis showed that first-grade food insecurity was significantly associated with increased BMI z-score in first through third grades; for example, at first grade, 0.6 (95% CI, 0.5-0.7) with food insecurity versus 0.4 (95% CI, 0.4-0.5) when food secure. Logistic regression showed that first-grade food insecurity was associated with increased risk for obesity in that grade (odds ratio 1.4; 95% CI, 1.1-2). Obesity is more prevalent among food-insecure children. First-grade food insecurity is an independent risk factor for longitudinal increases in BMI z-score. The association between food insecurity and weight status differs between kindergarten and first grade. Copyright © 2018 Elsevier Inc. All rights reserved.
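The unadjusted odds ratio implied by two prevalences like those reported at kindergarten can be recovered directly from the odds in each group. A sketch (note that the OR of 1.4 in the abstract is an adjusted, first-grade estimate, so this back-of-envelope value need not match it exactly):

```python
# Kindergarten obesity prevalences from the abstract
p_insecure, p_secure = 0.164, 0.124

# Odds ratio = odds(obese | food insecure) / odds(obese | food secure)
odds_ratio = (p_insecure / (1 - p_insecure)) / (p_secure / (1 - p_secure))
print(f"unadjusted OR = {odds_ratio:.2f}")
```

This kind of quick conversion between prevalence and odds is useful when comparing reported ORs against raw percentages in a paper.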

  16. Sensitivity to volcanic field boundary

    NASA Astrophysics Data System (ADS)

    Runge, Melody; Bebbington, Mark; Cronin, Shane; Lindsay, Jan; Rashad Moufti, Mohammed

    2016-04-01

    Volcanic hazard analyses are desirable where there is potential for future volcanic activity to affect a proximal population. This is frequently the case for volcanic fields (regions of distributed volcanism), where low eruption rates, fertile soil, and attractive landscapes draw populations to live close by. Forecasting future activity in volcanic fields almost invariably uses spatial or spatio-temporal point processes, with model selection and development based on exploratory analyses of previous eruption data. For identifiability reasons, spatio-temporal processes (and, in practice, spatial processes as well) require the definition of a spatial region to which volcanism is confined. However, because the sub-surface processes driving volcanic eruptions are complex and predominantly unknown, defining such a region solely from geological information is currently impossible. Thus, the current approach is to fit a shape to the known previous eruption sites. The class of boundary shape is an unavoidably subjective decision taken by the forecaster that is often overlooked during subsequent analysis of results. This study shows the substantial effect that this choice may have on even the simplest exploratory methods for hazard forecasting, illustrated using four commonly used exploratory statistical methods and two very different regions: the Auckland Volcanic Field, New Zealand, and Harrat Rahat, Kingdom of Saudi Arabia. For Harrat Rahat, the sensitivity of results to boundary definition is substantial. For the Auckland Volcanic Field, the range of options resulted in similar shapes; nevertheless, some of the statistical tests still showed substantial variation in results.
    This work highlights that, when carrying out any hazard analysis on volcanic fields, it is vital to specify how the volcanic field boundary has been defined, to assess the sensitivity of results to the boundary choice, and to carry these assumptions and related uncertainties through to estimates of future activity and hazard analyses.

  17. 3-D microstructure of olivine in complex geological materials reconstructed by correlative X-ray μ-CT and EBSD analyses.

    PubMed

    Kahl, W-A; Dilissen, N; Hidas, K; Garrido, C J; López-Sánchez-Vizcaíno, V; Román-Alpiste, M J

    2017-11-01

    We reconstruct the 3-D microstructure of centimetre-sized olivine crystals in rocks from the Almirez ultramafic massif (SE Spain) using combined X-ray micro computed tomography (μ-CT) and electron backscatter diffraction (EBSD). The semidestructive sample treatment involves geographically oriented drill pressing of rocks and preparation of oriented thin sections for EBSD from the μ-CT scanned cores. The μ-CT results show that the mean intercept length (MIL) analyses provide reliable information on the shape preferred orientation (SPO) of texturally different olivine groups. We show that statistical interpretation of crystal preferred orientation (CPO) and SPO of olivine becomes feasible because the highest densities of the distribution of main olivine crystal axes from EBSD are aligned with the three axes of the 3-D ellipsoid calculated from the MIL analyses from μ-CT. From EBSD data we distinguish multiple CPO groups and by locating the thin sections within the μ-CT volume, we assign SPO to the corresponding olivine crystal aggregates, which confirm the results of statistical comparison. We demonstrate that the limitations of both methods (i.e. no crystal orientation data in μ-CT and no spatial information in EBSD) can be overcome, and the 3-D orientation of the crystallographic axes of olivines from different orientation groups can be successfully correlated with the crystal shapes of representative olivine grains. Through this approach one can establish the link among geological structures, macrostructure, fabric and 3-D SPO-CPO relationship at the hand specimen scale even in complex, coarse-grained geomaterials. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  18. Influence of eye biometrics and corneal micro-structure on noncontact tonometry.

    PubMed

    Jesus, Danilo A; Majewska, Małgorzata; Krzyżanowska-Berkowska, Patrycja; Iskander, D Robert

    2017-01-01

    Tonometry is widely used as the main screening tool supporting glaucoma diagnosis. Still, its accuracy could be improved if full knowledge about the variation of the corneal biomechanical properties was available. In this study, Optical Coherence Tomography (OCT) speckle statistics are used to infer the organisation of the corneal micro-structure and hence, to analyse its influence on intraocular pressure (IOP) measurements. Fifty-six subjects were recruited for this prospective study. Macro and micro-structural corneal parameters as well as subject age were considered. Macro-structural analysis included the parameters that are associated with the ocular anatomy, such as central corneal thickness (CCT), corneal radius, axial length, anterior chamber depth and white-to-white corneal diameter. Micro-structural parameters which included OCT speckle statistics were related to the internal organisation of the corneal tissue and its physiological changes during lifetime. The corneal speckle obtained from OCT was modelled with the Generalised Gamma (GG) distribution that is characterised with a scale parameter and two shape parameters. In macro-structure analysis, only CCT showed a statistically significant correlation with IOP (R2 = 0.25, p<0.001). The scale parameter and the ratio of the shape parameters of GG distribution showed statistically significant correlation with IOP (R2 = 0.19, p<0.001 and R2 = 0.17, p<0.001, respectively). For the studied group, a weak, although significant correlation was found between age and IOP (R2 = 0.053, p = 0.04). Forward stepwise regression showed that CCT and the scale parameter of the Generalised Gamma distribution can be combined in a regression model (R2 = 0.39, p<0.001) to study the role of the corneal structure on IOP. We show, for the first time, that corneal micro-structure influences the IOP measurements obtained from noncontact tonometry. 
OCT speckle statistics can be employed to learn about the corneal micro-structure and hence, to further calibrate the IOP measurements.
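The Generalised Gamma speckle model used in the study can be sketched with `scipy.stats.gengamma`, which parameterises the distribution with two shape parameters (`a`, `c`) plus location and scale. The amplitudes and "true" parameters below are synthetic placeholders, not the study's OCT data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical OCT speckle amplitudes drawn from a Generalised Gamma;
# the parameters are illustrative only, not taken from the paper.
true_a, true_c, true_scale = 2.0, 1.5, 0.3
speckle = stats.gengamma.rvs(true_a, true_c, scale=true_scale,
                             size=2000, random_state=rng)

# Fit the Generalised Gamma with the location pinned at zero
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(speckle, floc=0)

# The paper correlates the scale parameter and the ratio of the two shape
# parameters with IOP; such statistics would be computed per eye.
shape_ratio = a_hat / c_hat
print(f"a={a_hat:.2f}, c={c_hat:.2f}, "
      f"scale={scale_hat:.2f}, a/c={shape_ratio:.2f}")
```

In practice the fit would be run on the speckle amplitudes extracted from each corneal OCT scan, and the fitted parameters then entered into the regression against IOP.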

  19. Influence of eye biometrics and corneal micro-structure on noncontact tonometry

    PubMed Central

    Majewska, Małgorzata; Krzyżanowska-Berkowska, Patrycja; Iskander, D. Robert

    2017-01-01

    Purpose Tonometry is widely used as the main screening tool supporting glaucoma diagnosis. Still, its accuracy could be improved if full knowledge about the variation of the corneal biomechanical properties was available. In this study, Optical Coherence Tomography (OCT) speckle statistics are used to infer the organisation of the corneal micro-structure and hence, to analyse its influence on intraocular pressure (IOP) measurements. Methods Fifty-six subjects were recruited for this prospective study. Macro and micro-structural corneal parameters as well as subject age were considered. Macro-structural analysis included the parameters that are associated with the ocular anatomy, such as central corneal thickness (CCT), corneal radius, axial length, anterior chamber depth and white-to-white corneal diameter. Micro-structural parameters which included OCT speckle statistics were related to the internal organisation of the corneal tissue and its physiological changes during lifetime. The corneal speckle obtained from OCT was modelled with the Generalised Gamma (GG) distribution that is characterised with a scale parameter and two shape parameters. Results In macro-structure analysis, only CCT showed a statistically significant correlation with IOP (R2 = 0.25, p<0.001). The scale parameter and the ratio of the shape parameters of GG distribution showed statistically significant correlation with IOP (R2 = 0.19, p<0.001 and R2 = 0.17, p<0.001, respectively). For the studied group, a weak, although significant correlation was found between age and IOP (R2 = 0.053, p = 0.04). Forward stepwise regression showed that CCT and the scale parameter of the Generalised Gamma distribution can be combined in a regression model (R2 = 0.39, p<0.001) to study the role of the corneal structure on IOP. Conclusions We show, for the first time, that corneal micro-structure influences the IOP measurements obtained from noncontact tonometry. 
OCT speckle statistics can be employed to learn about the corneal micro-structure and hence, to further calibrate the IOP measurements. PMID:28472178

  20. Associations between DSM-5 section III personality traits and the Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scales in a psychiatric patient sample.

    PubMed

    Anderson, Jaime L; Sellbom, Martin; Ayearst, Lindsay; Quilty, Lena C; Chmielewski, Michael; Bagby, R Michael

    2015-09-01

    Our aim in the current study was to evaluate the convergence between Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) Section III dimensional personality traits, as operationalized via the Personality Inventory for DSM-5 (PID-5), and Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scale scores in a psychiatric patient sample. We used a sample of 346 (171 men, 175 women) patients who were recruited through a university-affiliated psychiatric facility in Toronto, Canada. We estimated zero-order correlations between the PID-5 and MMPI-2-RF substantive scale scores, as well as a series of exploratory structural equation modeling (ESEM) analyses to examine how these scales converged in multivariate latent space. Results generally showed empirical convergence between the scales of these two measures that were thematically meaningful and in accordance with conceptual expectations. Correlation analyses showed significant associations between conceptually expected scales, and the highest associations tended to be between scales that were theoretically related. ESEM analyses generated evidence for distinct internalizing, externalizing, and psychoticism factors across all analyses. These findings indicate convergence between these two measures and help further elucidate the associations between dysfunctional personality traits and general psychopathology. (c) 2015 APA, all rights reserved.

  1. ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)

    EPA Science Inventory

    The availability of geographically indexed health and population data, together with advances in computing, geographical information systems and statistical methodology, has opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...

  2. Food consumption and the actual statistics of cardiovascular diseases: an epidemiological comparison of 42 European countries

    PubMed Central

    Grasgruber, Pavel; Sebera, Martin; Hrazdira, Eduard; Hrebickova, Sylva; Cacek, Jan

    2016-01-01

    Background The aim of this ecological study was to identify the main nutritional factors related to the prevalence of cardiovascular diseases (CVDs) in Europe, based on a comparison of international statistics. Design The mean consumption of 62 food items from the FAOSTAT database (1993–2008) was compared with the actual statistics of five CVD indicators in 42 European countries. Several other exogenous factors (health expenditure, smoking, body mass index) and the historical stability of results were also examined. Results We found exceptionally strong relationships between some of the examined factors, the highest being a correlation between raised cholesterol in men and the combined consumption of animal fat and animal protein (r=0.92, p<0.001). The most significant dietary correlate of low CVD risk was high total fat and animal protein consumption. Additional statistical analyses further highlighted citrus fruits, high-fat dairy (cheese) and tree nuts. Among other non-dietary factors, health expenditure showed by far the highest correlation coefficients. The major correlate of high CVD risk was the proportion of energy from carbohydrates and alcohol, or from potato and cereal carbohydrates. Similar patterns were observed between food consumption and CVD statistics from the period 1980–2000, which shows that these relationships are stable over time. However, we found striking discrepancies in men's CVD statistics from 1980 and 1990, which can probably explain the origin of the ‘saturated fat hypothesis’ that influenced public health policies in the following decades. Conclusion Our results do not support the association between CVDs and saturated fat, which is still contained in official dietary guidelines. Instead, they agree with data accumulated from recent studies that link CVD risk with the high glycaemic index/load of carbohydrate-based diets. 
In the absence of any scientific evidence connecting saturated fat with CVDs, these findings show that current dietary recommendations regarding CVDs should be seriously reconsidered. PMID:27680091

  3. Game-Related Statistics Discriminating Between Starters and Nonstarter Players in the Women’s National Basketball Association League (WNBA)

    PubMed Central

    Gòmez, Miguel-Ángel; Lorenzo, Alberto; Ortega, Enrique; Sampaio, Jaime; Ibàñez, Sergio-José

    2009-01-01

    The aim of the present study was to identify the game-related statistics that discriminate between starters and nonstarter players in women’s basketball in relation to winning or losing games and best or worst teams. The sample comprised all 216 regular season games from the 2005 Women’s National Basketball Association League (WNBA). The game-related statistics included were 2- and 3-point field-goals (both successful and unsuccessful), free-throws (both successful and unsuccessful), defensive and offensive rebounds, assists, blocks, fouls, steals, turnovers and minutes played. Results from multivariate analysis showed that when the best teams won, the discriminant game-related statistics were successful 2-point field-goals (SC = 0.47), successful free-throws (SC = 0.44), fouls (SC = -0.41), assists (SC = 0.37), and defensive rebounds (SC = 0.37). When the worst teams won, the discriminant game-related statistics were successful 2-point field-goals (SC = 0.37), successful free-throws (SC = 0.45), assists (SC = 0.58), and steals (SC = 0.35). The results showed that successful 2-point field-goals, successful free-throws and assists were the most powerful variables discriminating between starters and nonstarters. These characteristics point to the importance of starter players’ shooting and passing ability during competitions. Key points: (1) players’ game-related statistical profiles varied according to team status, game outcome and team quality in women’s basketball; (2) the results highlight differences in player performance in women’s basketball compared with men’s basketball; (3) the results underline the importance of the contributions of starters and nonstarters to team performance in different game contexts; (4) successful 2-point field-goals, successful free-throws and assists discriminated between starters and nonstarters in all the analyses. PMID:24149538
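The structure coefficients (SC) reported above are correlations between each game statistic and the scores on the discriminant function. A minimal sketch using hypothetical per-game statistics and scikit-learn's linear discriminant analysis (not the study's data or software):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# Hypothetical per-game statistics for 40 starters and 40 nonstarters;
# columns: successful 2-pt FG, successful FT, assists (illustrative only).
starters = rng.normal([5.0, 3.0, 3.5], 1.0, size=(40, 3))
nonstarters = rng.normal([3.0, 1.5, 1.5], 1.0, size=(40, 3))
X = np.vstack([starters, nonstarters])
y = np.array([1] * 40 + [0] * 40)  # 1 = starter, 0 = nonstarter

# Discriminant function scores for each player-game
scores = LinearDiscriminantAnalysis().fit(X, y).transform(X).ravel()

# Structure coefficients: correlation of each variable with the scores
sc = [np.corrcoef(X[:, j], scores)[0, 1] for j in range(3)]
print("SC:", np.round(sc, 2))
```

Variables with large |SC| are the ones that carry the discrimination, which is how the abstract's claim about 2-point field-goals, free-throws and assists would be read off.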

  4. [Appraisal of occupational stress in different gender, age, work duration, educational level and marital status groups].

    PubMed

    Yang, Xin-Wei; Wang, Zhi-Ming; Jin, Tai-Yi

    2006-05-01

    This study was conducted to assess occupational stress in groups differing in gender, age, work duration, educational level and marital status. A test of occupational stress across these groups was carried out with the revised Occupational Stress Inventory (OSI-R) for 4,278 participants. The results by gender show heavier occupational role stress and stronger interpersonal and physical strain in males than in females, and the differences are statistically significant (P < 0.01). The score for recreation is higher in males, whereas the score for self-care is higher in females, and the differences are statistically significant (P < 0.01). Differences in the scores for occupational role and personal resources among age groups are significant (P < 0.01), as are differences in vocational and interpersonal strain scores among age groups (P < 0.05). The analyses by educational level suggest that differences in occupational stress and strain scores among educational levels are statistically significant (P < 0.05), whereas coping resources show no statistically significant differences among the groups (P > 0.05). Different measures should be taken to reduce occupational stress so as to improve the work ability of the different groups.

  5. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
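The time-trend procedure described above (cumulative meta-analysis followed by a weighted linear regression of the summary estimate on publication rank) can be sketched with illustrative numbers. The sensitivities and weights below are made up, not drawn from the 48 meta-analyses:

```python
import numpy as np

# Hypothetical per-study sensitivity estimates, ordered by publication
# date, with inverse-variance weights (illustrative values only)
sens    = np.array([0.95, 0.90, 0.88, 0.85, 0.86, 0.84, 0.83, 0.82])
weights = np.array([10.0, 14.0, 9.0, 20.0, 12.0, 18.0, 15.0, 11.0])

# Cumulative (fixed-effect style) summary after each successive study
cum_est = np.array([
    np.average(sens[:k], weights=weights[:k])
    for k in range(1, len(sens) + 1)
])

# Time trend: weighted linear regression of the summary estimate
# against publication rank, as in the abstract
rank = np.arange(1, len(sens) + 1)
slope, intercept = np.polyfit(rank, cum_est, 1, w=weights.cumsum())
print(f"slope per publication rank: {slope:.4f}")
```

A negative slope, as produced here, is the pattern the authors flag: early summary estimates that are optimistic relative to later ones.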

  6. Analysis of recent climatic changes in the Arabian Peninsula region

    NASA Astrophysics Data System (ADS)

    Nasrallah, H. A.; Balling, R. C.

    1996-12-01

    Interest in the potential climatic consequences of the continued buildup of anthropogenic greenhouse gases has led many scientists to conduct extensive climate change studies at the global, hemispheric, and regional scales. In this investigation, analyses are conducted on long-term historical climate records from the Arabian Peninsula region. Over the last 100 years, temperatures in the region increased linearly by 0.63 °C. However, virtually all of this warming occurred from 1911 to 1935, and over the most recent 50 years, the Arabian Peninsula region has cooled slightly. In addition, the satellite-based measurements of lower-tropospheric temperatures for the region do not show any statistically significant warming over the period 1979 to 1991. While many other areas of the world are showing a decrease in the diurnal temperature range, the Arabian Peninsula region reveals no evidence of a long-term change in this parameter. Precipitation records for the region show a slight, statistically insignificant decrease over the past 40 years. The results from this study should complement the mass of information that has resulted from similar regional climate studies conducted in the United States, Europe, and Australia.

  7. Forecasting incidence of dengue in Rajasthan, using time series analyses.

    PubMed

    Bhatnagar, Sunil; Lal, Vivek; Gupta, Shiv D; Gupta, Om P

    2012-01-01

    To develop a prediction model for dengue fever/dengue haemorrhagic fever (DF/DHF) using time series data over the past decade in Rajasthan and to forecast monthly DF/DHF incidence for 2011. A seasonal autoregressive integrated moving average (SARIMA) model was used for statistical modeling. During January 2001 to December 2010, the reported DF/DHF cases showed a cyclical pattern with seasonal variation. The SARIMA(0,0,1)(0,1,1)12 model had the lowest normalized Bayesian information criterion (BIC) of 9.426 and a mean absolute percentage error (MAPE) of 263.361, and appeared to be the best model. The proportion of variance explained by the model was 54.3%. Adequacy of the model was established through the Ljung-Box test (Q statistic 4.910 and P-value 0.996), which showed no significant correlation between residuals at different lag times. The forecast for the year 2011 showed a seasonal peak in the month of October with an estimated 546 cases. Application of the SARIMA model may be useful for forecasting cases and impending outbreaks of DF/DHF and other infectious diseases that exhibit a seasonal pattern.

  8. Geospatial Characterization of Fluvial Wood Arrangement in a Semi-confined Alluvial River

    NASA Astrophysics Data System (ADS)

    Martin, D. J.; Harden, C. P.; Pavlowsky, R. T.

    2014-12-01

    Large woody debris (LWD) has become universally recognized as an integral component of fluvial systems, and as a result, has become increasingly common as a river restoration tool. However, "natural" processes of wood recruitment and the subsequent arrangement of LWD within the river network are poorly understood. This research used a suite of spatial statistics to investigate longitudinal arrangement patterns of LWD in a low-gradient, Midwestern river. First, a large-scale GPS inventory of LWD, performed on the Big River in the eastern Missouri Ozarks, resulted in over 4,000 logged positions of LWD along seven river segments that covered nearly 100 km of the 237 km river system. A global Moran's I analysis indicates that LWD density is spatially autocorrelated and displays a clustering tendency within all seven river segments (P-value range = 0.000 to 0.054). A local Moran's I analysis identified specific locations along the segments where clustering occurs and revealed that, on average, clusters of LWD density (high or low) spanned 400 m. Spectral analyses revealed that, in some segments, LWD density is spatially periodic. Two segments displayed strong periodicity, while the remaining segments displayed varying degrees of noisiness. Periodicity showed a positive association with gravel bar spacing and meander wavelength, although there were insufficient data to statistically confirm the relationship. A wavelet analysis was then performed to investigate periodicity relative to location along the segment. The wavelet analysis identified significant (α = 0.05) periodicity at discrete locations along each of the segments. Those reaches yielding strong periodicity showed stronger relationships between LWD density and the geomorphic/riparian independent variables tested. Analyses consistently identified valley width and sinuosity as being associated with LWD density. 
    The results of these analyses contribute a new perspective on the longitudinal distribution of LWD in a river system, which should help identify physical and/or riparian control mechanisms of LWD arrangement and support the development of models of LWD arrangement. Additionally, the spatial statistical tools presented here have proven valuable for identifying longitudinal patterns in river system components.
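Global Moran's I, the clustering statistic used in the abstract, can be computed directly from a value vector and a spatial weight matrix. A minimal sketch with hypothetical LWD densities and a simple along-channel adjacency, not the Big River data:

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I for values y under spatial weight matrix w."""
    z = y - y.mean()
    return len(y) / w.sum() * (z @ w @ z) / (z @ z)

# Hypothetical LWD densities (pieces per 100 m) along 8 consecutive
# reaches; high-density reaches cluster at the ends (illustrative only)
density = np.array([9.0, 8.5, 7.8, 2.1, 1.9, 2.4, 8.9, 9.2])

# Rook-style adjacency: each reach neighbours the next one downstream
n = len(density)
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

print(f"Moran's I = {morans_i(density, w):.3f}")  # positive => clustering
```

A positive I, as here, indicates spatial autocorrelation (clustering of similar densities); significance in practice is assessed against a permutation or normal-approximation null, as in the study's P-values.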

  9. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

Noble, Daniel W A; Lagisz, Malgorzata; O'Dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
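One of the pragmatic sensitivity analyses the authors advocate can be sketched in a few lines: pool effect sizes under a fixed-effect model, then recompute the pooled estimate leaving each study out in turn, to see how sensitive the result is to any single (possibly nonindependent) study. The effect sizes and variances below are hypothetical:

```python
def fixed_effect(effects, variances):
    """Inverse-variance-weighted fixed-effect pooled estimate and its variance."""
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

def leave_one_out(effects, variances):
    """Pooled estimate with each study removed in turn (a basic sensitivity analysis)."""
    return [fixed_effect(effects[:i] + effects[i + 1:],
                         variances[:i] + variances[i + 1:])[0]
            for i in range(len(effects))]

# Hypothetical study-level effect sizes and sampling variances.
effects = [0.30, 0.25, 0.60]
variances = [0.04, 0.05, 0.02]
pooled, _ = fixed_effect(effects, variances)
print(round(pooled, 3), [round(e, 3) for e in leave_one_out(effects, variances)])
```

If dropping one study shifts the pooled estimate substantially (here, removing the precise third study does), that study's design and independence deserve scrutiny before drawing conclusions.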

  10. Primary implant stability in a bone model simulating clinical situations for the posterior maxilla: an in vitro study

    PubMed Central

    2016-01-01

Purpose The aim of this study was to determine the influence of anatomical conditions on primary stability in models simulating the posterior maxilla. Methods Polyurethane blocks were designed to simulate monocortical (M) and bicortical (B) conditions. Each condition had four subgroups measuring 3 mm (M3, B3), 5 mm (M5, B5), 8 mm (M8, B8), and 12 mm (M12, B12) in residual bone height (RBH). After implant placement, the implant stability quotient (ISQ), Periotest value (PTV), insertion torque (IT), and reverse torque (RT) were measured. Two-factor ANOVA (two cortical conditions × four RBHs) and additional analyses for simple main effects were performed. Results A significant interaction between cortical condition and RBH was demonstrated for all methods measuring stability with two-factor ANOVA. In the analyses for simple main effects, ISQ and PTV were statistically higher in the bicortical groups than in the corresponding monocortical groups. In the monocortical group, ISQ and PTV showed a statistically significant rise with increasing RBH. Measurements of IT and RT showed a similar tendency, measuring highest in the M3 group, followed by the M8, the M5, and the M12 groups. In the bicortical group, all variables showed a similar tendency, with different degrees of rise and decline. The B8 group showed the highest values, followed by the B12, the B5, and the B3 groups. The highest correlation coefficient was demonstrated between ISQ and PTV. Conclusions Primary stability was enhanced by the presence of bicortex and increased RBH, which may be better demonstrated by ISQ and PTV than by IT and RT. PMID:27588215

  11. Cross-reactivity of insulin analogues with three insulin assays.

    PubMed

    Dayaldasani, A; Rodríguez Espinosa, M; Ocón Sánchez, P; Pérez Valero, V

    2015-05-01

Immunometric assays have recently shown higher specificity in the detection of human insulin than radioimmunoassays, with almost no cross-reaction with proinsulin or C-peptide. The introduction of new insulin analogues on the market, however, has raised the need to define their cross-reactivity in these assays, and several studies have been published in this regard with differing results. The analogues studied were insulins lispro, aspart, glargine, detemir, and glulisine. Insulin concentrations were measured on the Immulite(®) 2000 and Advia Centaur(®) XP (Siemens Healthcare Diagnostics), and the Elecsys(®) Modular Analytics E170 (Roche). All samples were processed 15 times in the same analytical run following a random sequence. Those samples which showed statistically and clinically significant changes in insulin concentration were reprocessed using increasing concentrations of analogue; this was done twice, using two different serum pools, one with a low concentration of insulin and one with a high concentration. In the Elecsys(®) E170 analyser, glargine showed statistically (comparison of mean concentrations, p < 0.05) and clinically significant changes in measured insulin (percentage difference 986.2% > reference change value 59.8%), and the interference increased with increasing concentrations of the analogue; the differences were not significant for the other analogues. In the Advia Centaur(®) and Immulite(®) 2000, only the results for glulisine were not significant (percentage difference 44.7% < reference change value 103.5%). Increasing concentrations of aspart, glargine, and lispro showed increased interference in the Immulite(®) 2000. In the Elecsys(®) E170 assay, relevant cross-reactivity was detected only with insulin glargine, whereas in the other analysers all analogues except glulisine showed significant interference. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  12. Communication about patient pain in primary care: development of the Physician-Patient Communication about Pain scale (PCAP).

    PubMed

    Haskard-Zolnierek, Kelly B

    2012-01-01

    This paper describes the development of the 47-item Physician-Patient Communication about Pain (PCAP) scale for use with audiotaped medical visit interactions. Patient pain was assessed with the Medical Outcomes Study SF-36 Bodily Pain Scale. Four raters assessed 181 audiotaped patient interactions with 68 physicians. Descriptive statistics of PCAP items were computed. Principal components analyses with 20 scale items were used to reduce the scale to composite variables for analyses. Validity was assessed through (1) comparing PCAP composite scores for patients with high versus low pain and (2) correlating PCAP composites with a separate communication rating scale. Principal components analyses yielded four physician and five patient communication composites (mean alpha=.77). Some evidence for concurrent validity was provided (5 of 18 correlations with communication validation rating scale were significant). Paired-sample t tests showed significant differences for 4 patient PCAP composites, showing the PCAP scale discriminates between high and low pain patients' communication. The PCAP scale shows partial evidence of reliability and two forms of validity. More research with this scale (developing more reliable and valid composites) is needed to extend these preliminary findings before this scale is applicable for use in practice. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.
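As a concrete instance of the inferential methods this article discusses, a logistic regression with a single binary exposure reduces to the familiar odds ratio, which can be computed directly from a 2×2 table together with a Wald confidence interval. The counts below are invented for illustration:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio with a Wald 95% CI from a 2x2 table.

    a, b -- cases and controls among the exposed
    c, d -- cases and controls among the unexposed
    """
    or_ = (a * d) / (b * c)
    # standard error of log(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented counts: 20/80 exposed cases/controls, 10/90 unexposed.
print(odds_ratio(20, 80, 10, 90))  # point estimate (20*90)/(80*10) = 2.25
```

An interval that spans 1.0, as here, illustrates why a point estimate alone is insufficient for inference.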

  14. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  15. Abnormal brain structure as a potential biomarker for venous erectile dysfunction: evidence from multimodal MRI and machine learning.

    PubMed

    Li, Lingli; Fan, Wenliang; Li, Jun; Li, Quanlin; Wang, Jin; Fan, Yang; Ye, Tianhe; Guo, Jialun; Li, Sen; Zhang, Youpeng; Cheng, Yongbiao; Tang, Yong; Zeng, Hanqing; Yang, Lian; Zhu, Zhaohui

    2018-03-29

    To investigate the cerebral structural changes related to venous erectile dysfunction (VED) and the relationship of these changes to clinical symptoms and disorder duration and distinguish patients with VED from healthy controls using a machine learning classification. 45 VED patients and 50 healthy controls were included. Voxel-based morphometry (VBM), tract-based spatial statistics (TBSS) and correlation analyses of VED patients and clinical variables were performed. The machine learning classification method was adopted to confirm its effectiveness in distinguishing VED patients from healthy controls. Compared to healthy control subjects, VED patients showed significantly decreased cortical volumes in the left postcentral gyrus and precentral gyrus, while only the right middle temporal gyrus showed a significant increase in cortical volume. Increased axial diffusivity (AD), radial diffusivity (RD) and mean diffusivity (MD) values were observed in widespread brain regions. Certain regions of these alterations related to VED patients showed significant correlations with clinical symptoms and disorder durations. Machine learning analyses discriminated patients from controls with overall accuracy 96.7%, sensitivity 93.3% and specificity 99.0%. Cortical volume and white matter (WM) microstructural changes were observed in VED patients, and showed significant correlations with clinical symptoms and dysfunction durations. Various DTI-derived indices of some brain regions could be regarded as reliable discriminating features between VED patients and healthy control subjects, as shown by machine learning analyses. • Multimodal magnetic resonance imaging helps clinicians to assess patients with VED. • VED patients show cerebral structural alterations related to their clinical symptoms. • Machine learning analyses discriminated VED patients from controls with an excellent performance. 
• Machine learning classification provided a preliminary demonstration of DTI's clinical use.

  16. Statistical studies of selected trace elements with reference to geology and genesis of the Carlin gold deposit, Nevada

    USGS Publications Warehouse

    Harris, Michael; Radtke, Arthur S.

    1976-01-01

Linear regression and discriminant analysis techniques were applied to gold, mercury, arsenic, antimony, barium, copper, molybdenum, lead, zinc, boron, tellurium, selenium, and tungsten analyses from drill holes into unoxidized gold ore at the Carlin gold mine near Carlin, Nev. The statistical treatments were used to judge proposed hypotheses on the origin and geochemical paragenesis of this disseminated gold deposit.

  17. The Effects of Using a Wiki on Student Engagement and Learning of Report Writing Skills in a University Statistics Course

    ERIC Educational Resources Information Center

    Neumann, David L.; Hood, Michelle

    2009-01-01

    A wiki was used as part of a blended learning approach to promote collaborative learning among students in a first year university statistics class. One group of students analysed a data set and communicated the results by jointly writing a practice report using a wiki. A second group analysed the same data but communicated the results in a…

  18. Extreme between-study homogeneity in meta-analyses could offer useful insights.

    PubMed

    Ioannidis, John P A; Trikalinos, Thomas A; Zintzaras, Elias

    2006-10-01

    Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We examined whether it is also important to probe whether there is extreme between-study homogeneity. We used heterogeneity tests with left-sided statistical significance for inference and developed a Monte Carlo simulation test for testing extreme homogeneity in risk ratios across studies, using the empiric distribution of the summary risk ratio and heterogeneity statistic. A left-sided P=0.01 threshold was set for claiming extreme homogeneity to minimize type I error. Among 11,803 meta-analyses with binary contrasts from the Cochrane Library, 143 (1.21%) had left-sided P-value <0.01 for the asymptotic Q statistic and 1,004 (8.50%) had left-sided P-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies in the meta-analyses. We identified examples where extreme between-study homogeneity (left-sided P-value <0.01) could result from various possibilities beyond chance. These included inappropriate statistical inference (asymptotic vs. Monte Carlo), use of a specific effect metric, correlated data or stratification using strong predictors of outcome, and biases and potential fraud. Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.
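The left-sided homogeneity test described above can be sketched as follows: compute Cochran's Q on the log risk ratios and evaluate the left tail P(χ²(k−1) ≤ Q), flagging extreme homogeneity when that left-sided p-value falls below 0.01. The study data here are hypothetical, and the asymptotic chi-square tail is used rather than the paper's Monte Carlo test:

```python
import math

def chi2_cdf(x, df):
    """Lower chi-square CDF via the regularized incomplete gamma series."""
    a, z = df / 2.0, x / 2.0
    term = math.exp(-z + a * math.log(z) - math.lgamma(a + 1))
    total, k = term, 0
    while term > 1e-12 * total:
        k += 1
        term *= z / (a + k)
        total += term
    return total

def cochran_q(log_rrs, variances):
    """Cochran's Q heterogeneity statistic across k studies."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, log_rrs)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, log_rrs))

# Five hypothetical studies with suspiciously similar log risk ratios.
log_rrs = [0.10, 0.11, 0.10, 0.09, 0.10]
variances = [0.04] * 5
q = cochran_q(log_rrs, variances)
p_left = chi2_cdf(q, len(log_rrs) - 1)  # left tail: small p = too homogeneous
print(round(q, 3), p_left < 0.01)
```

Results this homogeneous are improbable under genuine sampling variation, which is exactly the signal the authors suggest probing for correlated data, metric artefacts, or fraud.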

  19. Personal use of hair dyes and the risk of bladder cancer: results of a meta-analysis.

    PubMed Central

    Huncharek, Michael; Kupelnick, Bruce

    2005-01-01

OBJECTIVE: This study examined the methodology of observational studies that explored an association between personal use of hair dye products and the risk of bladder cancer. METHODS: Data were pooled from epidemiological studies using a general variance-based meta-analytic method that employed confidence intervals. The outcome of interest was a summary relative risk (RR) reflecting the risk of bladder cancer development associated with use of hair dye products vs. non-use. Sensitivity analyses were performed to explain any observed statistical heterogeneity and to explore the influence of specific study characteristics on the summary estimate of effect. RESULTS: Initially combining homogeneous data from six case-control studies and one cohort study yielded a non-significant RR of 1.01 (0.92, 1.11), suggesting no association between hair dye use and bladder cancer development. Sensitivity analyses examining the influence of hair dye type, color, and study design on this suspected association showed that uncontrolled confounding and design limitations contributed to a spurious non-significant summary RR. The sensitivity analyses yielded statistically significant RRs ranging from 1.22 (1.11, 1.51) to 1.50 (1.30, 1.98), indicating that personal use of hair dye products increases bladder cancer risk by 22% to 50% vs. non-use. CONCLUSION: The available epidemiological data suggest an association between personal use of hair dye products and increased risk of bladder cancer. PMID:15736329
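The general variance-based pooling used here can be sketched by recovering each study's standard error from its reported 95% confidence interval on the log scale and combining studies with inverse-variance weights. The RRs and intervals below are invented, not this meta-analysis's data:

```python
import math

def pooled_rr(rrs, cis):
    """Inverse-variance pooled relative risk from per-study RRs and 95% CIs."""
    logs = [math.log(r) for r in rrs]
    # SE on the log scale, recovered from CI width: (ln hi - ln lo) / (2 * 1.96)
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    w = [1.0 / s ** 2 for s in ses]
    est = sum(wi * l for wi, l in zip(w, logs)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return math.exp(est), (math.exp(est - 1.96 * se), math.exp(est + 1.96 * se))

# Two invented studies: RR 1.2 (0.9-1.6) and RR 1.4 (1.0-2.0).
rr, ci = pooled_rr([1.2, 1.4], [(0.9, 1.6), (1.0, 2.0)])
print(round(rr, 2), tuple(round(x, 2) for x in ci))
```

The pooled estimate lands between the study-level RRs, weighted toward the more precise (narrower-interval) study.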

  20. Amino acid pair- and triplet-wise groupings in the interior of α-helical segments in proteins.

    PubMed

    de Sousa, Miguel M; Munteanu, Cristian R; Pazos, Alejandro; Fonseca, Nuno A; Camacho, Rui; Magalhães, A L

    2011-02-21

A statistical approach has been applied to analyse primary structure patterns at inner positions of α-helices in proteins. A systematic survey was carried out in a recent sample of non-redundant proteins selected from the Protein Data Bank, which were used to analyse α-helix structures for amino acid pairing patterns. Only residues more than three positions apart from both termini of the α-helix were considered as inner. Amino acid pairings i, i+k (k = 1, 2, 3, 4, 5) were analysed and the corresponding 20×20 matrices of relative global propensities were constructed. An analysis of (i, i+4, i+8) and (i, i+3, i+4) triplet patterns was also performed. These analyses yielded information on a series of amino acid patterns (pairings and triplets) showing either high or low preference for α-helical motifs and suggested a novel approach to protein alphabet reduction. In addition, it has been shown that the individual amino acid propensities are not enough to define the statistical distribution of these patterns. Global pair propensities also depend on the type of pattern, its composition and orientation in the protein sequence. The data presented should prove useful to obtain and refine predictive rules which can further the development and fine-tuning of protein structure prediction algorithms and tools. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment.

    PubMed

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P; Patterson, Nick; Price, Alkes L

    2014-10-15

    Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1-5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case-control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of [Formula: see text] association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. 
We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary materials are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
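A minimal sketch of the conditional-Gaussian idea behind summary-statistic imputation: the z-score at an untyped SNP is predicted as r_to · (R_oo + λI)⁻¹ · z_o, where R_oo is the LD correlation matrix among typed SNPs, r_to the LD between the target and typed SNPs, and the ridge term λ stands in for the paper's adjustment for the reference panel's finite sample size. The LD values, z-scores, and λ below are hypothetical, and the published method additionally standardizes by the predicted variance:

```python
def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def impute_z(z_obs, r_oo, r_to, lam=0.1):
    """Best linear prediction of an untyped SNP's z-score from typed neighbours."""
    n = len(z_obs)
    ridge = [[r_oo[i][j] + (lam if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    x = solve(ridge, z_obs)  # (R_oo + lam * I)^-1 z_o
    return sum(r * xi for r, xi in zip(r_to, x))

# Hypothetical LD among two typed SNPs and their observed z-scores.
z = impute_z([4.0, 3.5], [[1.0, 0.8], [0.8, 1.0]], [0.9, 0.7])
print(round(z, 2))
```

Because the target SNP is in strong LD with both typed SNPs, the imputed z-score falls near the observed ones, which is the behaviour that lets summary-level imputation recover most of the effective sample size.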

  2. From sexless to sexy: Why it is time for human genetics to consider and report analyses of sex.

    PubMed

    Powers, Matthew S; Smith, Phillip H; McKee, Sherry A; Ehringer, Marissa A

    2017-01-01

    Science has come a long way with regard to the consideration of sex differences in clinical and preclinical research, but one field remains behind the curve: human statistical genetics. The goal of this commentary is to raise awareness and discussion about how to best consider and evaluate possible sex effects in the context of large-scale human genetic studies. Over the course of this commentary, we reinforce the importance of interpreting genetic results in the context of biological sex, establish evidence that sex differences are not being considered in human statistical genetics, and discuss how best to conduct and report such analyses. Our recommendation is to run stratified analyses by sex no matter the sample size or the result and report the findings. Summary statistics from stratified analyses are helpful for meta-analyses, and patterns of sex-dependent associations may be hidden in a combined dataset. In the age of declining sequencing costs, large consortia efforts, and a number of useful control samples, it is now time for the field of human genetics to appropriately include sex in the design, analysis, and reporting of results.

  3. The Applications of Mindfulness with Students of Secondary School: Results on the Academic Performance, Self-concept and Anxiety

    NASA Astrophysics Data System (ADS)

    Franco, Clemente; Mañas, Israel; Cangas, Adolfo J.; Gallego, José

The aim of the present research is to verify the impact of a mindfulness programme on the levels of academic performance, self-concept and anxiety in a group of Year 1 secondary school students. The statistical analyses carried out on the variables studied showed significant differences in favour of the experimental group with regard to the control group in all the variables analysed. In the experimental group we can observe a significant increase in academic performance as well as an improvement in all the self-concept dimensions, and a significant decrease in anxiety states and traits. The importance and usefulness of mindfulness techniques in the educational system is discussed.

  4. Quantum behaviour of pumped and damped triangular Bose-Hubbard systems

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2017-12-01

    We propose and analyse analogs of optical cavities for atoms using three-well Bose-Hubbard models with pumping and losses. We consider triangular configurations. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a quantitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, preserving good coherence between the wells in the steady-state. We find quadrature squeezing and mode entanglement for some parameter regimes and demonstrate that the trimer with pumping and damping at the same well is the stronger option for producing non-classical states. Due to recent experimental advances, it should be possible to demonstrate the effects we investigate and predict.

  5. Sealing ability of root-end filling materials.

    PubMed

    Amezcua, Octávio; Gonzalez, Álvaro Cruz; Borges, Álvaro Henrique; Bandeca, Matheus Coelho; Estrela, Cyntia Rodrigues de Araújo; Estrela, Carlos

    2015-03-01

The aim of this research was to compare the apical sealing ability of different root-end filling materials (SuperEBA(®), ProRoot MTA(®), thermoplasticized gutta-percha + AH-Plus(®), thermoplasticized RealSeal(®)) by means of microbial indicators. Fifty human single-rooted teeth were employed, which were shaped to size 50, retro-prepared with ultrasonic tips and assigned to four groups retro-filled with each material, or to controls. A platform split into two halves was employed: an upper chamber, into which the microbial suspension containing the biological indicators was introduced (E. faecalis + S. aureus + P. aeruginosa + B. subtilis + C. albicans), and a lower chamber containing brain heart infusion culture medium, in which 3 mm of the apical region of the teeth were kept immersed. Readings were made daily for 60 days, using turbidity of the culture medium as indicative of microbial contamination. Statistical analyses were carried out at the 5% level of significance. The results showed microbial leakage in at least some specimens in all of the groups. RealSeal(®) showed statistically significantly more microbial leakage than ProRoot(®) MTA and SuperEBA(®). No significant differences were observed between ProRoot(®) MTA and SuperEBA(®). The gutta-percha + AH Plus results showed no statistically significant differences when compared with the other groups. All the tested materials showed microbial leakage. Root-end fillings with SuperEBA or MTA showed the lowest microbial leakage, and RealSeal the highest.

  6. Transfusion Indication Threshold Reduction (TITRe2) randomized controlled trial in cardiac surgery: statistical analysis plan.

    PubMed

    Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A

    2015-02-22

    The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. ISRCTN70923932 .

  7. Playing-related disabling musculoskeletal disorders in young and adult classical piano students.

    PubMed

    Bruno, S; Lorusso, A; L'Abbate, N

    2008-07-01

To determine the prevalence of instrument-related musculoskeletal problems in classical piano students and investigate piano-specific risk factors. A specially developed four-part questionnaire was administered to classical piano students of two Apulian conservatories in southern Italy. A cross-sectional design was used. Prevalences of playing-related musculoskeletal disorders (MSDs) were calculated and cases were compared with non-cases. A total of 195 out of the 224 piano students responded (87%). Among the 195 responders, 75 (38.4%) were considered affected according to the pre-established criteria. Disabling MSDs showed similar prevalence rates for the neck (29.3%), thoracic spine (21.3%) and upper limbs (from 20.0 to 30.4%) in the affected group. Univariate analyses showed statistical differences in mean age, number of hours per week spent playing, more than 60 min of continuous playing without breaks, lack of sport practice and acceptance of the "No pain, no gain" criterion between students with music-related pain and unaffected pianists. A statistical correlation was found only between upper limb disorders in pianists and hand size. No correlation with the model of piano played was found in the affected group. The multivariate analyses performed by logistic regression confirmed the independent correlation of the risk factors age, lack of sport practice and acceptance of the "No pain, no gain" criterion. Our study showed MSDs to be a common problem among classical piano students. At variance with several reported studies, older students appeared to be more frequently affected by disabling MSDs, and no difference in the prevalence rate of the disorders was found for females.

  8. Comparative Evaluation of Microleakage Between Nano-Ionomer, Giomer and Resin Modified Glass Ionomer Cement in Class V Cavities- CLSM Study

    PubMed Central

    Hari, Archana; Thumu, Jayaprakash; Velagula, Lakshmi Deepa; Bolla, Nagesh; Varri, Sujana; Kasaraneni, Srikanth; Nalli, Siva Venkata Malathi

    2016-01-01

Introduction Marginal integrity of adhesive restorative materials provides better sealing ability for enamel and dentin and plays an important role in the success of restorations in Class V cavities. A restorative material with good marginal adaptation improves the longevity of restorations. Aim The aim of this study was to evaluate microleakage in Class V cavities restored with Resin Modified Glass Ionomer Cement (RMGIC), Giomer and Nano-Ionomer. Materials and Methods This in-vitro study was performed on 60 human maxillary and mandibular premolars extracted for orthodontic reasons. A standard wedge-shaped defect was prepared on the buccal surfaces of the teeth with the gingival margin placed near the Cemento-Enamel Junction (CEJ). Teeth were divided into three groups of 20 each, restored with RMGIC, Giomer and Nano-Ionomer, and subjected to thermocycling. Teeth were then immersed in 0.5% Rhodamine B dye for 48 hours. They were sectioned longitudinally from the middle of the cavity into mesial and distal parts. The sections were observed under a Confocal Laser Scanning Microscope (CLSM) to evaluate microleakage. Depth of dye penetration was measured in millimeters. Statistical Analysis The data were analysed using the Kruskal-Wallis test. Pairwise comparison was done with the Mann-Whitney U test. A p-value <0.05 was taken as statistically significant. Results Nano-Ionomer showed statistically significantly less microleakage than Giomer (p=0.0050). No statistically significant difference was found between Nano-Ionomer and RMGIC (p=0.3550). There was a statistically significant difference between RMGIC and Giomer (p=0.0450). Conclusion Nano-Ionomer and RMGIC showed significantly less leakage and better adaptation than Giomer, and there was no statistically significant difference between Nano-Ionomer and RMGIC. PMID:27437363
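The Kruskal-Wallis test used in this study ranks all dye-penetration depths jointly and compares mean ranks across the three materials. A bare-bones version, with mid-ranks for ties but no tie correction of H, might look like this; the depth data are invented, not the study's measurements:

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for k independent groups (no tie correction)."""
    pooled = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:  # assign mid-ranks to runs of tied values
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        mid = (i + j + 1) / 2.0  # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += mid
        i = j
    return (12.0 / (n * (n + 1))
            * sum(rs ** 2 / len(grp) for rs, grp in zip(rank_sums, groups))
            - 3 * (n + 1))

# Invented dye-penetration depths (mm) for three restorative materials.
depths = [[0.20, 0.25, 0.30],   # material A: shallow penetration
          [0.80, 0.85, 0.90],   # material B: deep penetration
          [0.50, 0.55, 0.60]]   # material C: intermediate
print(round(kruskal_h(depths), 2))  # -> 7.2; compare against chi-square, df = 2
```

H = 7.2 exceeds the chi-square critical value of 5.99 at df = 2, so pairwise follow-up tests (Mann-Whitney U, as in the study) would then locate which materials differ.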

  9. Mathematics authentic assessment on statistics learning: the case for student mini projects

    NASA Astrophysics Data System (ADS)

    Fauziah, D.; Mardiyana; Saputro, D. R. S.

    2018-03-01

Mathematics authentic assessment is a form of meaningful measurement of student learning outcomes in the spheres of attitude, skill and knowledge in mathematics. The construction of attitude, skill and knowledge is achieved through the fulfilment of tasks that involve an active and creative role for the students. One type of authentic assessment is the student mini project, which runs from planning and data collection through organizing, processing, analysing and presenting the data. The purpose of this research is to study the process of using authentic assessment in statistics learning as conducted by teachers, and to discuss specifically the use of mini projects to improve students' learning in schools in Surakarta. This research is an action research in which the data were collected through the assessment rubrics of student mini projects. The analysis shows that the average rubric score for student mini projects is 82, with 96% classical completeness. This study shows that the application of authentic assessment can improve students' mathematics learning outcomes. Findings showed that teachers and students participate actively during the teaching and learning process, both inside and outside of school. Student mini projects also provide opportunities to interact with other people in a real context while collecting information and giving presentations to the community. Additionally, students are able to achieve more in the process of statistics learning using authentic assessment.

  10. Impact of the buildings areas on the fire incidence.

    PubMed

    Srekl, Jože; Golob, Janvit

    2010-03-01

    A survey of statistical studies shows that the probability of fire is expressed by the equation P(A) = KA^α, where A is the total floor area of the building and K and α are constants for an individual group or risk category. This equation, which is based on statistical data on fires in Great Britain, does not include impact factors such as the number of employees or the activities carried out in the buildings. In order to find possible correlations between the activities carried out in buildings, the characteristics of the buildings and the number of fires, we used a random sample of 134 buildings, including industrial buildings, hotels, restaurants, warehouses and shopping malls. Our study shows that the floor area of a building has a low impact on the incidence of fires. After analysing the sample of buildings using multivariate analysis, we found a correlation between the number of fires, the floor area of the buildings, the daily work operation period and the number of employees in the buildings.
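    Fitting the power law P(A) = K·A^α reduces to ordinary least squares after a log transform, since log P = log K + α·log A. The sketch below recovers the constants from synthetic data; the areas, K and α used here are made up, not the British fire statistics behind the equation.

```python
# Sketch: recovering K and alpha in P(A) = K * A**alpha by least squares
# on log-transformed data. All numbers here are synthetic.
import numpy as np

K_true, alpha_true = 1e-3, 0.55
areas = np.array([200., 500., 1000., 2000., 5000., 10000.])  # floor area, m^2
p_fire = K_true * areas ** alpha_true                        # annual fire probability

# log P = log K + alpha * log A  ->  a straight line in log-log space
alpha_hat, logK_hat = np.polyfit(np.log(areas), np.log(p_fire), 1)
K_hat = np.exp(logK_hat)
print(f"alpha ~ {alpha_hat:.3f}, K ~ {K_hat:.2e}")
```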

  11. The Effects of Child Abuse and Exposure to Domestic Violence on Adolescent Internalizing and Externalizing Behavior Problems.

    PubMed

    Moylan, Carrie A; Herrenkohl, Todd I; Sousa, Cindy; Tajima, Emiko A; Herrenkohl, Roy C; Russo, M Jean

    2010-01-01

    This study examines the effects of child abuse and domestic violence exposure in childhood on adolescent internalizing and externalizing behaviors. Data for this analysis are from the Lehigh Longitudinal Study, a prospective study of 457 youth addressing outcomes of family violence and resilience in individuals and families. Results show that child abuse, domestic violence, and both in combination (i.e., dual exposure) increase a child's risk for internalizing and externalizing outcomes in adolescence. When accounting for risk factors associated with additional stressors in the family and surrounding environment, only those children with dual exposure had an elevated risk of the tested outcomes compared to non-exposed youth. However, while there were some observable differences in the prediction of outcomes for children with dual exposure compared to those with single exposure (i.e., abuse only or exposure to domestic violence only), these differences were not statistically significant. Analyses showed that the effects of exposure for boys and girls are statistically comparable.

  12. Contamination of different portions of raw and boiled specimens of Norway lobster by mercury and selenium.

    PubMed

    Perugini, Monia; Visciano, Pierina; Manera, Maurizio; Abete, Maria Cesarina; Gavinelli, Stefania; Amorena, Michele

    2013-11-01

    The aim of this study was to evaluate mercury and selenium distribution in different portions (exoskeleton, white meat and brown meat) of Norway lobster (Nephrops norvegicus). Some samples were also analysed as whole specimens. The same portions were also examined after boiling, in order to observe whether this cooking practice affects mercury and selenium concentrations. The highest mercury concentrations were detected in white meat, in all cases exceeding the maximum levels established by European legislation. Brown meat showed the highest selenium concentrations. In all boiled samples, mercury levels showed a statistically significant increase compared to the raw portions. On the contrary, selenium concentrations detected in boiled samples of white meat, brown meat and whole specimens showed a statistically significant decrease compared to the corresponding raw samples. These results indicate that boiling modifies mercury and selenium concentrations. The high mercury levels detected represent a possible risk for consumers, and the publication and diffusion of specific advisories concerning seafood consumption is recommended.

  13. Quantitative EEG analysis of the maturational changes associated with childhood absence epilepsy

    NASA Astrophysics Data System (ADS)

    Rosso, O. A.; Hyslop, W.; Gerlach, R.; Smith, R. L. L.; Rostas, J. A. P.; Hunter, M.

    2005-10-01

    This study aimed to examine the background electroencephalography (EEG) in children with childhood absence epilepsy, a condition whose presentation has strong developmental links. EEG hallmarks of absence seizure activity are widely accepted and there is recognition that the bulk of inter-ictal EEG in this group is normal to the naked eye. This multidisciplinary study aimed to use the normalized total wavelet entropy (NTWS) (Signal Processing 83 (2003) 1275) to examine the background EEG of those patients demonstrating absence seizure activity, and compare it with children without absence epilepsy. This calculation can be used to define the degree of order in a system, with higher levels of entropy indicating a more disordered (chaotic) system. Results were subjected to further statistical analyses of significance. Entropy values were calculated for patients versus controls. For all channels combined, patients with absence epilepsy showed (statistically significant) lower entropy values than controls. The size of the difference in entropy values was not uniform, with certain EEG electrodes consistently showing greater differences than others.
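    The normalized total wavelet entropy (NTWS) used in this study can be sketched from first principles: decompose the signal into wavelet scales, compute the relative energy of each scale, and normalize the Shannon entropy of those energies so it lies in [0, 1]. The sketch below uses a plain Haar decomposition in NumPy for self-containment; the paper's exact wavelet and implementation may differ.

```python
# Minimal NTWS sketch with a Haar wavelet decomposition; higher values
# indicate signal energy spread more evenly over scales (more disorder).
import numpy as np

def ntws(x, levels=6):
    x = np.asarray(x, dtype=float)
    energies, approx = [], x
    for _ in range(levels):
        if len(approx) < 2:
            break
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2.0)   # Haar detail coefficients
        approx = (even + odd) / np.sqrt(2.0)   # Haar approximation
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))       # keep the final approximation band
    p = np.array(energies) / np.sum(energies)  # relative wavelet energies
    p = p[p > 0]                               # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p)) / np.log(len(energies))

rng = np.random.default_rng(0)
noise = rng.standard_normal(512)        # energy spread across all scales
tone = np.tile([1.0, -1.0], 256)        # energy in one detail band only
print(ntws(noise), ntws(tone))
```

    The alternating signal concentrates all energy in the finest detail band, so its NTWS is zero, while white noise yields a clearly positive value, matching the interpretation of entropy as disorder in the abstract.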

  14. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets

    PubMed Central

    Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.

    2017-01-01

    High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787

  15. Medical students' attitudes towards science and gross anatomy, and the relationship to personality

    PubMed Central

    Plaisant, Odile; Stephens, Shiby; Apaydin, Nihal; Courtois, Robert; Lignier, Baptiste; Loukas, Marios; Moxham, Bernard

    2014-01-01

    Assessment of the personalities of medical students can enable medical educators to formulate strategies for the best development of academic and clinical competencies. Previous research has shown that medical students do not share a common personality profile, there being gender differences. We have also shown that, for French medical students, students with personality traits associated with strong competitiveness are selected for admission to medical school. In this study, we further show that the medical students have different personality profiles compared with other student groups (psychology and business studies). The main purpose of the present investigation was to assess attitudes to science and gross anatomy, and to relate these to the students' personalities. Questionnaires (including Thurstone and Chave analyses) were employed to measure attitudes, and personality was assessed using the Big Five Inventory (BFI). Data for attitudes were obtained for students at medical schools in Cardiff (UK), Paris, Descartes/Sorbonne (France), St George's University (Grenada) and Ankara (Turkey). Data obtained from personality tests were available for analysis from the Parisian cohort of students. Although the medical students were found to have strongly supportive views concerning the importance of science in medicine, their knowledge of the scientific method/philosophy of science was poor. Following analyses of the BFI in the French students, 'openness' and 'conscientiousness' were linked statistically with a positive attitude towards science. For anatomy, again strongly supportive views concerning the subject's importance in medicine were discerned. Analyses of the BFI in the French students did not show statistical links between personality profiles and attitudes towards gross anatomy, except that male students with 'negative affectivity' showed less appreciation of the importance of anatomy. 
This contrasts with our earlier studies that showed that there is a relationship between the BF dimensions of personality traits and anxiety towards the dissection room experience (at the start of the course, ‘negative emotionality’ was related to an increased level of anxiety). We conclude that medical students agree on the importance to their studies of both science in general and gross anatomy in particular, and that some personality traits relate to their attitudes that could affect clinical competence. PMID:23594196

  16. From moonlight to movement and synchronized randomness: Fourier and wavelet analyses of animal location time series data

    PubMed Central

    Polansky, Leo; Wittemyer, George; Cross, Paul C.; Tambling, Craig J.; Getz, Wayne M.

    2011-01-01

    High-resolution animal location data are increasingly available, requiring analytical approaches and statistical tools that can accommodate the temporal structure and transient dynamics (non-stationarity) inherent in natural systems. Traditional analyses often assume uncorrelated or weakly correlated temporal structure in the velocity (net displacement) time series constructed using sequential location data. We propose that frequency and time–frequency domain methods, embodied by Fourier and wavelet transforms, can serve as useful probes in early investigations of animal movement data, stimulating new ecological insight and questions. We introduce a novel movement model with time-varying parameters to study these methods in an animal movement context. Simulation studies show that the spectral signature given by these methods provides a useful approach for statistically detecting and characterizing temporal dependency in animal movement data. In addition, our simulations provide a connection between the spectral signatures observed in empirical data with null hypotheses about expected animal activity. Our analyses also show that there is not a specific one-to-one relationship between the spectral signatures and behavior type and that departures from the anticipated signatures are also informative. Box plots of net displacement arranged by time of day and conditioned on common spectral properties can help interpret the spectral signatures of empirical data. The first case study is based on the movement trajectory of a lion (Panthera leo) that shows several characteristic daily activity sequences, including an active–rest cycle that is correlated with moonlight brightness. 
A second example based on six pairs of African buffalo (Syncerus caffer) illustrates the use of wavelet coherency to show that their movements synchronize when they are within ∼1 km of each other, even when individual movement was best described as an uncorrelated random walk, providing an important spatial baseline of movement synchrony and suggesting that local behavioral cues play a strong role in driving movement patterns. We conclude with a discussion about the role these methods may have in guiding appropriately flexible probabilistic models connecting movement with biotic and abiotic covariates. PMID:20503882
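    The frequency-domain probe advocated above can be illustrated with a plain Fourier periodogram. The hourly displacement series below is simulated with a 24-hour activity cycle plus noise; it is a stand-in for real tracking data such as the lion trajectory, not the paper's data.

```python
# Detect a daily activity cycle in a simulated hourly displacement series
# by locating the dominant peak of the Fourier power spectrum.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(240)                                   # ten days of hourly fixes
displacement = (2.0 + np.sin(2 * np.pi * hours / 24.0)   # 24-h activity cycle
                + 0.3 * rng.standard_normal(240))        # observation noise

power = np.abs(np.fft.rfft(displacement - displacement.mean())) ** 2
freqs = np.fft.rfftfreq(len(hours), d=1.0)               # cycles per hour

peak = np.argmax(power[1:]) + 1                          # skip the zero frequency
print(f"dominant period ~ {1.0 / freqs[peak]:.1f} h")
```

    Wavelet (time-frequency) methods extend this idea by localizing such peaks in time, which is what allows the non-stationary, transient dynamics described above to be characterized.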

  17. Medical students' attitudes towards science and gross anatomy, and the relationship to personality.

    PubMed

    Plaisant, Odile; Stephens, Shiby; Apaydin, Nihal; Courtois, Robert; Lignier, Baptiste; Loukas, Marios; Moxham, Bernard

    2014-03-01

    Assessment of the personalities of medical students can enable medical educators to formulate strategies for the best development of academic and clinical competencies. Previous research has shown that medical students do not share a common personality profile, there being gender differences. We have also shown that, for French medical students, students with personality traits associated with strong competitiveness are selected for admission to medical school. In this study, we further show that the medical students have different personality profiles compared with other student groups (psychology and business studies). The main purpose of the present investigation was to assess attitudes to science and gross anatomy, and to relate these to the students' personalities. Questionnaires (including Thurstone and Chave analyses) were employed to measure attitudes, and personality was assessed using the Big Five Inventory (BFI). Data for attitudes were obtained for students at medical schools in Cardiff (UK), Paris, Descartes/Sorbonne (France), St George's University (Grenada) and Ankara (Turkey). Data obtained from personality tests were available for analysis from the Parisian cohort of students. Although the medical students were found to have strongly supportive views concerning the importance of science in medicine, their knowledge of the scientific method/philosophy of science was poor. Following analyses of the BFI in the French students, 'openness' and 'conscientiousness' were linked statistically with a positive attitude towards science. For anatomy, again strongly supportive views concerning the subject's importance in medicine were discerned. Analyses of the BFI in the French students did not show statistical links between personality profiles and attitudes towards gross anatomy, except that male students with 'negative affectivity' showed less appreciation of the importance of anatomy. 
This contrasts with our earlier studies that showed that there is a relationship between the BF dimensions of personality traits and anxiety towards the dissection room experience (at the start of the course, 'negative emotionality' was related to an increased level of anxiety). We conclude that medical students agree on the importance to their studies of both science in general and gross anatomy in particular, and that some personality traits relate to their attitudes that could affect clinical competence. © 2013 Anatomical Society.

  18. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    PubMed

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analysing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on the environmental variables associated with the presence and abundance of species in a community. In addition, particular hypotheses about changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate and/or rarefaction statistical tests. The rarefaction method has proved robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
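    Two of the quantities discussed, the Shannon-Wiener index and the expected species count under rarefaction, can be sketched with the standard formulas H = −Σ pᵢ ln pᵢ and E[Sₙ] = Σᵢ [1 − C(N−Nᵢ, n)/C(N, n)]. The community abundances below are hypothetical.

```python
# Shannon-Wiener index and classical rarefaction expectation for a toy
# community; abundances are individuals per species, invented for illustration.
import math

def shannon(abundances):
    n = sum(abundances)
    return -sum((a / n) * math.log(a / n) for a in abundances if a > 0)

def rarefied_richness(abundances, n):
    """Expected number of species in a random subsample of n individuals."""
    total = sum(abundances)
    return sum(1 - math.comb(total - a, n) / math.comb(total, n)
               for a in abundances)

community = [50, 30, 10, 5, 3, 1, 1]
print(round(shannon(community), 3))
print(round(rarefied_richness(community, 20), 2))
```

    Rarefying all samples to a common n, as recommended above, makes richness comparable across samples of unequal size, which a raw index value does not.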

  19. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
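    The general scheme, a chaotic map generating a keystream that is XORed with the image data, can be sketched with a plain logistic map. Note this is a simplified stand-in: the paper's security rests on the delayed fractional-order variant, and the key here (initial condition x0 and parameter mu) is illustrative.

```python
# Toy chaotic-keystream cipher: a plain logistic map (not the paper's
# delayed fractional-order system) produces bytes XORed with the image.
import numpy as np

def keystream(n, x0=0.7, mu=3.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)          # logistic map iteration
        out[i] = int(x * 256) % 256     # quantize the state to one byte
    return out

def xor_cipher(data, x0=0.7, mu=3.99):
    return data ^ keystream(data.size, x0, mu)   # XOR is its own inverse

image = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in "image"
cipher = xor_cipher(image.ravel(), x0=0.7)
plain = xor_cipher(cipher, x0=0.7).reshape(8, 8)
```

    The security analyses listed in the abstract (correlation, entropy, key sensitivity) would then be run on `cipher`; key sensitivity, for instance, means a decryption with x0 perturbed in the last digit should fail to recover the image.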

  20. Autonomy and job satisfaction for a sample of Greek teachers.

    PubMed

    Koustelios, Athanasios D; Karabatzaki, Despina; Kousteliou, Ioanna

    2004-12-01

    Analysis of the relation between Job Satisfaction and Autonomy in a sample of 300 Greek teachers (114 men and 186 women, 28 to 59 years old) from primary and secondary schools showed statistically significant positive correlations between Job Satisfaction and Autonomy. In particular, Autonomy was correlated with the Job Itself (.21), Supervision (.22), and the Organization as a Whole (.27) aspects of Job Satisfaction. The findings are in line with previous studies conducted in different cultural contexts. The percent common variance accounted for is small (at most about 7%, from r = .27).

  1. Water-quality characteristics and trends for selected wells possibly influenced by wastewater disposal at the Idaho National Laboratory, Idaho, 1981-2012

    USGS Publications Warehouse

    Davis, Linda C.; Bartholomay, Roy C.; Fisher, Jason C.; Maimer, Neil V.

    2015-01-01

    Volatile organic compound concentration trends were analyzed for nine aquifer wells. Trend test results indicated an increasing trend for carbon tetrachloride for the Radioactive Waste Management Complex Production Well for the period 1987–2012; however, trend analyses of data collected since 2005 show no statistically significant trend indicating that engineering practices designed to reduce movement of volatile organic compounds to the aquifer may be having a positive effect on the aquifer.

  2. Statistical analysis of Thematic Mapper Simulator data for the geobotanical discrimination of rock types in southwest Oregon

    NASA Technical Reports Server (NTRS)

    Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.

    1984-01-01

    An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.

  3. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  4. A canonical neural mechanism for behavioral variability

    NASA Astrophysics Data System (ADS)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

  5. Dependency of high coastal water level and river discharge at the global scale

    NASA Astrophysics Data System (ADS)

    Ward, P.; Couasnon, A.; Haigh, I. D.; Muis, S.; Veldkamp, T.; Winsemius, H.; Wahl, T.

    2017-12-01

    It is widely recognized that floods cause huge socioeconomic impacts. From 1980 to 2013, global flood losses exceeded $1 trillion, with 220,000 fatalities. These impacts are particularly hard felt in low-lying densely populated deltas and estuaries, whose location at the coast-land interface makes them naturally prone to flooding. When river and coastal floods coincide, their impacts in these deltas and estuaries are often worse than when they occur in isolation. Such floods are examples of so-called 'compound events'. In this contribution, we present the first global scale analysis of the statistical dependency of high coastal water levels (and the storm surge component alone) and river discharge. We show that there is statistical dependency between these components at more than half of the stations examined. We also show time-lags in the highest correlation between peak discharges and coastal water levels. Finally, we assess the probability of the simultaneous occurrence of design discharge and design coastal water levels, assuming both independence and statistical dependence. For those stations where we identified statistical dependency, the probability is between 1 and 5 times greater when the dependence structure is accounted for. This information is essential for understanding the likelihood of compound flood events occurring at locations around the world as well as for accurate flood risk assessments and effective flood risk management. The research was carried out by analysing the statistical dependency between observed coastal water levels (and the storm surge component) from GESLA-2 and river discharge using gauged data from GRDC stations all around the world. The dependence structure was examined using copula functions.
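    The core comparison, how often two extremes co-occur versus what independence would predict, can be sketched empirically without the full copula machinery. The synthetic series below share a common "storm" driver; they are illustrative stand-ins, not GESLA-2 or GRDC data.

```python
# Joint exceedance of high discharge and high surge versus the independence
# assumption, on synthetic data with a shared meteorological driver.
import numpy as np

rng = np.random.default_rng(7)
storm = rng.standard_normal(10000)                  # shared storm driver
discharge = 0.7 * storm + 0.7 * rng.standard_normal(10000)
surge     = 0.7 * storm + 0.7 * rng.standard_normal(10000)

q_d = np.quantile(discharge, 0.9)                   # "design"-style thresholds
q_s = np.quantile(surge, 0.9)

p_joint = np.mean((discharge > q_d) & (surge > q_s))
p_indep = 0.1 * 0.1                                 # if the two were independent
print(f"joint exceedance {p_joint:.3f} vs {p_indep:.3f} under independence")
```

    A fitted copula generalizes this single-threshold check to the whole dependence structure, which is why the study uses copula functions rather than one exceedance probability.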

  6. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  7. [Adoptive parents' satisfaction with the adoption experience and with its impact on family life].

    PubMed

    Sánchez-Sandoval, Yolanda

    2011-11-01

    In this study, we discuss the relevance of adoptive families' satisfaction in the assessment of adoption processes. The effects of adoption on a sample group of 272 adoptive families are analyzed. Most families show high levels of satisfaction as to: their decision to adopt, the features of their adopted children and how adoption has affected them as individuals and as a family. Statistical analyses show that these families can have different satisfaction levels depending on certain features of the adoptees, of the adoptive families or of their educational style. Life satisfaction of the adoptees is also related to how their adoptive parents evaluate the adoption.

  8. Using conventional F-statistics to study unconventional sex-chromosome differentiation.

    PubMed

    Rodrigues, Nicolas; Dufresnes, Christophe

    2017-01-01

    Species with undifferentiated sex chromosomes are emerging as key organisms for understanding the astonishing diversity of sex-determination systems. Whereas new genomic methods are widening opportunities to study these systems, the difficulty of separately characterizing their X and Y homologous chromosomes poses limitations. Here we demonstrate that two simple F-statistics calculated from sex-linked genotypes, namely the genetic distance (Fst) between sexes and the inbreeding coefficient (Fis) in the heterogametic sex, can be used as reliable proxies to compare sex-chromosome differentiation between populations. We correlated these metrics using published microsatellite data from two frog species (Hyla arborea and Rana temporaria), and show that they relate closely to the overall amount of X-Y differentiation in populations. However, the fits for individual loci appear highly variable, suggesting that dense genetic coverage will be needed to infer fine-scale patterns of differentiation along sex chromosomes. These F-statistics, which imply modest sampling requirements, significantly facilitate population analyses of sex chromosomes.
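    The two proxies can be sketched with a minimal Nei-style formulation: Fst from expected heterozygosities within and between the sexes' allele pools, and Fis = 1 − Hobs/Hexp in the heterogametic sex. The genotypes below are toy data at one sex-linked locus, invented so that allele 3 behaves like a Y-linked allele; they are not the published frog data.

```python
# Fst between the sexes and Fis in males from toy sex-linked genotypes
# (pairs of allele labels at a single microsatellite-like locus).
import numpy as np

males   = [(1, 3), (1, 3), (2, 3), (1, 3), (2, 3)]   # allele 3 plays the Y role
females = [(1, 1), (1, 2), (2, 2), (1, 2), (1, 1)]

def allele_freqs(genotypes, alleles):
    pool = [a for g in genotypes for a in g]
    return np.array([pool.count(a) / len(pool) for a in alleles])

alleles = sorted({a for g in males + females for a in g})
p_m, p_f = allele_freqs(males, alleles), allele_freqs(females, alleles)
p_bar = (p_m + p_f) / 2

hs = 1 - 0.5 * (np.sum(p_m**2) + np.sum(p_f**2))  # mean within-sex expected het.
ht = 1 - np.sum(p_bar**2)                         # total expected heterozygosity
fst = (ht - hs) / ht

h_obs = np.mean([g[0] != g[1] for g in males])    # observed het. in males
h_exp = 1 - np.sum(p_m**2)
fis = 1 - h_obs / h_exp                           # negative = heterozygote excess
print(f"Fst = {fst:.3f}, Fis = {fis:.3f}")
```

    The negative male Fis (every male carries one X-like and one Y-like allele) and positive between-sex Fst are exactly the joint signature of X-Y differentiation the abstract describes.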

  9. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
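    The supervised path described above, labelling measurement vectors as secure or attacked and training a classifier, can be sketched with a plain logistic regression fit by gradient descent. The data are synthetic (an additive bias injected on all meters) and the classifier is a generic stand-in, not one of the specific learners evaluated in the paper.

```python
# Classify synthetic smart-grid measurement vectors as secure (0) or
# attacked (1) with logistic regression trained by gradient descent.
import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 10
secure = rng.standard_normal((n, d))
attacked = rng.standard_normal((n, d)) + 1.5        # injected bias on all meters
X = np.vstack([secure, attacked])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted attack probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)             # gradient step on weights
    b -= 0.5 * np.mean(p - y)                       # gradient step on bias

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.3f}")
```

    Detecting the unobservable attacks discussed in the paper is harder than this separable toy case, which is where the geometric structure of the attack vectors becomes important.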

  10. A methodology using in-chair movements as an objective measure of discomfort for the purpose of statistically distinguishing between similar seat surfaces.

    PubMed

    Cascioli, Vincenzo; Liu, Zhuofu; Heusch, Andrew; McCarthy, Peter W

    2016-05-01

    This study presents a method for objectively measuring in-chair movement (ICM) that shows correlation with subjective ratings of comfort and discomfort. Employing a cross-over controlled, single blind design, healthy young subjects (n = 21) sat for 18 min on each of the following surfaces: contoured foam, straight foam and wood. Force sensitive resistors attached to the sitting interface measured the relative movements of the subjects during sitting. The purpose of this study was to determine whether ICM could statistically distinguish between each seat material, including two with subtle design differences. In addition, this study investigated methodological considerations, in particular appropriate threshold selection and sitting duration, when analysing objective movement data. ICM appears to be able to statistically distinguish between similar foam surfaces, as long as appropriate ICM thresholds and sufficient sitting durations are present. A relationship between greater ICM and increased discomfort, and lesser ICM and increased comfort was also found. Copyright © 2016. Published by Elsevier Ltd.

  11. A statistical investigation into the relationship between meteorological parameters and suicide

    NASA Astrophysics Data System (ADS)

    Dixon, Keith W.; Shulman, Mark D.

    1983-06-01

    Many previous studies of relationships between weather and suicides have been inconclusive and contradictory. This study investigated the relationship between suicide frequency and meteorological conditions in people who are psychologically predisposed to commit suicide. Linear regressions of diurnal temperature change, departure of temperature from the climatic norm, mean daytime sky cover, and the number of hours of precipitation for each day were performed on daily suicide totals using standard computer methods. Statistical analyses of suicide data for days with and without frontal passages were also performed. Days with five or more suicides (clusterdays) were isolated, and their weather parameters compared with those of nonclusterdays. Results show that neither suicide totals nor clusterday occurrence can be predicted using these meteorological parameters, since statistically significant relationships were not found. Although the data hinted that frontal passages and large daily temperature changes may occur on days with above average suicide totals, it was concluded that the influence of the weather parameters used, on the suicide rate, is a minor one, if indeed one exists.

  12. Quantifying variation in speciation and extinction rates with clade data.

    PubMed

    Paradis, Emmanuel; Tedesco, Pablo A; Hugueny, Bernard

    2013-12-01

    High-level phylogenies are very common in evolutionary analyses, although they are often treated as incomplete data. Here, we provide statistical tools to analyze what we name "clade data," which are the ages of clades together with their numbers of species. We develop a general approach for the statistical modeling of variation in speciation and extinction rates, including temporal variation, unknown variation, and linear and nonlinear modeling. We show how this approach can be generalized to a wide range of situations, including testing the effects of life-history traits and environmental variables on diversification rates. We report the results of an extensive simulation study to assess the performance of some of the statistical tests presented here, as well as of the estimators of speciation and extinction rates. These latter results suggest that extinction rates can be estimated correctly even in the absence of fossils. An example with data on fish is presented. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
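    The basic idea of "clade data" can be illustrated with the classical point estimator of net diversification: under a pure-birth model the expected richness of a clade of age t is N = exp(rt), so r = ln(N)/t, and a standard correction for a relative extinction fraction ε replaces N with N(1−ε)+ε. This is a textbook estimator in the spirit of Magallón and Sanderson, not the fuller likelihood models the paper develops; the clade values are toy numbers.

```python
# Method-of-moments net diversification rate from clade age and richness.
import math

def net_diversification(n_species, age_my, eps=0.0):
    """r = ln(N*(1-eps)+eps) / t, with eps the relative extinction fraction."""
    return math.log(n_species * (1.0 - eps) + eps) / age_my

clades = {"A": (150, 20.0), "B": (12, 35.0)}    # (richness, age in Myr)
for name, (n, t) in clades.items():
    print(name, round(net_diversification(n, t), 4),
          round(net_diversification(n, t, eps=0.9), 4))
```

    Assuming a high extinction fraction lowers the inferred rate, which is why ε matters when, as the paper argues, extinction is estimated without fossils.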

  13. Reframing Serial Murder Within Empirical Research.

    PubMed

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  14. [Continuity of hospital identifiers in hospital discharge data - Analysis of the nationwide German DRG Statistics from 2005 to 2013].

    PubMed

    Nimptsch, Ulrike; Wengler, Annelene; Mansky, Thomas

    2016-11-01

    In Germany, nationwide hospital discharge data (DRG statistics provided by the research data centers of the Federal Statistical Office and the Statistical Offices of the 'Länder') are increasingly used as a data source for health services research. Within these data, hospitals can be distinguished via their hospital identifier ([Institutionskennzeichen] IK). However, this hospital identifier primarily designates the invoicing unit and is not necessarily equivalent to one hospital location. Aiming to investigate the direction and extent of possible bias in hospital-level analyses, this study examines the continuity of the hospital identifier within a cross-sectional and longitudinal approach and compares the results to official hospital census statistics. Within the DRG statistics from 2005 to 2013, the annual number of hospitals as classified by hospital identifiers was counted for each year of observation. The annual number of hospitals derived from the DRG statistics was compared to the number of hospitals in the official census statistics 'Grunddaten der Krankenhäuser'. Subsequently, the temporal continuity of hospital identifiers in the DRG statistics was analyzed within cohorts of hospitals. Until 2013, the annual number of hospital identifiers in the DRG statistics fell by 175 (from 1,725 to 1,550). This decline affected only providers with small or medium case volume. The number of hospitals identified in the DRG statistics was lower than the number given in the census statistics (e.g., in 2013: 1,550 IKs vs. 1,668 hospitals). The longitudinal analyses revealed that the majority of hospital identifiers persisted over the years of observation, while one fifth of hospital identifiers changed. In cross-sectional studies of German hospital discharge data, separating hospitals via the hospital identifier might lead to underestimating the number of hospitals and, consequently, overestimating the caseload per hospital. Discontinuities of hospital identifiers over time might impair the follow-up of hospital cohorts. These limitations must be taken into account in analyses of German hospital discharge data focusing on the hospital level. Copyright © 2016. Published by Elsevier GmbH.

  15. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.

  16. PROMISE: a tool to identify genomic features with a specific biologically interesting pattern of associations with multiple endpoint variables.

    PubMed

    Pounds, Stan; Cheng, Cheng; Cao, Xueyuan; Crews, Kristine R; Plunkett, William; Gandhi, Varsha; Rubnitz, Jeffrey; Ribeiro, Raul C; Downing, James R; Lamba, Jatinder

    2009-08-15

    In some applications, prior biological knowledge can be used to define a specific pattern of association of multiple endpoint variables with a genomic variable that is biologically most interesting. However, to our knowledge, there is no statistical procedure designed to detect specific patterns of association with multiple endpoint variables. Projection onto the most interesting statistical evidence (PROMISE) is proposed as a general procedure to identify genomic variables that exhibit a specific biologically interesting pattern of association with multiple endpoint variables. Biological knowledge of the endpoint variables is used to define a vector that represents the biologically most interesting values for statistics that characterize the associations of the endpoint variables with a genomic variable. A test statistic is defined as the dot-product of the vector of the observed association statistics and the vector of the most interesting values of the association statistics. By definition, this test statistic is proportional to the length of the projection of the observed vector of correlations onto the vector of most interesting associations. Statistical significance is determined via permutation. In simulation studies and an example application, PROMISE shows greater statistical power to identify genes with the interesting pattern of associations than classical multivariate procedures, individual endpoint analyses or listing genes that have the pattern of interest and are significant in more than one individual endpoint analysis. Documented R routines are freely available from www.stjuderesearch.org/depts/biostats and will soon be available as a Bioconductor package from www.bioconductor.org.
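The core of the PROMISE procedure described above (an association statistic per endpoint, a dot product with the vector of biologically most interesting values, and permutation-based significance) can be sketched in a few lines. The toy data below are entirely hypothetical: Pearson correlation stands in for the association statistic, and the pattern (+1, +1, −1) plays the role of the "most interesting" vector.

```python
import random

def pearson(x, y):
    # Plain Pearson correlation, used here as the per-endpoint association statistic.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def promise_stat(g, endpoints, pattern):
    # Dot product of the observed association statistics with the vector of
    # biologically most interesting values (the projection step of PROMISE).
    return sum(w * pearson(g, e) for w, e in zip(pattern, endpoints))

# Hypothetical toy data: endpoints 1 and 2 rise with the genomic variable,
# endpoint 3 falls -- matching the "interesting" pattern (+1, +1, -1).
random.seed(1)
n = 30
genomic = [random.gauss(0, 1) for _ in range(n)]
endpoints = [
    [g + random.gauss(0, 1) for g in genomic],
    [g + random.gauss(0, 1) for g in genomic],
    [-g + random.gauss(0, 1) for g in genomic],
]
pattern = [1.0, 1.0, -1.0]

observed = promise_stat(genomic, endpoints, pattern)

# Permutation null: shuffle the genomic variable while endpoints stay fixed.
exceed = 0
n_perm = 999
for _ in range(n_perm):
    shuffled = random.sample(genomic, k=n)
    if promise_stat(shuffled, endpoints, pattern) >= observed:
        exceed += 1
p_value = (exceed + 1) / (n_perm + 1)
```

The documented R routines cited in the abstract are the authoritative implementation; this sketch only makes the projection-plus-permutation logic concrete.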

  17. Pooling sexes when assessing ground reaction forces during walking: Statistical Parametric Mapping versus traditional approach.

    PubMed

    Castro, Marcelo P; Pataky, Todd C; Sole, Gisela; Vilas-Boas, Joao Paulo

    2015-07-16

    Ground reaction force (GRF) data from men and women are commonly pooled for analyses. However, it may not be justifiable to pool sexes on the basis of discrete parameters extracted from continuous GRF gait waveforms because this can miss continuous effects. Forty healthy participants (20 men and 20 women) walked at a cadence of 100 steps per minute across two force plates, recording GRFs. Two statistical methods were used to test the null hypothesis of no mean GRF differences between sexes: (i) Statistical Parametric Mapping, using the entire three-component GRF waveform; and (ii) the traditional approach, using the first and second vertical GRF peaks. Statistical Parametric Mapping results suggested large sex differences, which post-hoc analyses suggested were due predominantly to higher anterior-posterior and vertical GRFs in early stance in women compared to men. Statistically significant differences were observed for the first GRF peak, with similar values for the second GRF peak. These contrasting results emphasise that different parts of the waveform carry different signal strengths, and thus that the traditional approach allows arbitrary metrics to be chosen and arbitrary conclusions to be drawn. We suggest that researchers and clinicians consider both the entire gait waveforms and sex-specificity when analysing GRF data. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Perceived Effectiveness among College Students of Selected Statistical Measures in Motivating Exercise Behavior

    ERIC Educational Resources Information Center

    Merrill, Ray M.; Chatterley, Amanda; Shields, Eric C.

    2005-01-01

    This study explored the effectiveness of selected statistical measures at motivating or maintaining regular exercise among college students. The study also considered whether ease in understanding these statistical measures was associated with perceived effectiveness at motivating or maintaining regular exercise. Analyses were based on a…

  19. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  20. The Empirical Nature and Statistical Treatment of Missing Data

    ERIC Educational Resources Information Center

    Tannenbaum, Christyn E.

    2009-01-01

    Introduction. Missing data is a common problem in research and can produce severely misleading analyses, including biased estimates of statistical parameters, and erroneous conclusions. In its 1999 report, the APA Task Force on Statistical Inference encouraged authors to report complications such as missing data and discouraged the use of…

  1. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  2. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... Programs and Data Files.'' This guidance is provided to inform study statisticians of recommendations for documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the...

  3. ParallABEL: an R library for generalized parallelization of genome-wide association studies.

    PubMed

    Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S

    2010-04-29

    Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. Acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation. The input data of this group includes pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. 
For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance, and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.

  4. Diet misreporting can be corrected: confirmation of the association between energy intake and fat-free mass in adolescents.

    PubMed

    Vainik, Uku; Konstabel, Kenn; Lätt, Evelin; Mäestu, Jarek; Purge, Priit; Jürimäe, Jaak

    2016-10-01

    Subjective energy intake (sEI) is often misreported, providing unreliable estimates of energy consumed. Therefore, relating sEI data to health outcomes is difficult. Recently, Börnhorst et al. compared various methods to correct sEI-based energy intake estimates. They criticised approaches that categorise participants as under-reporters, plausible reporters and over-reporters based on the sEI:total energy expenditure (TEE) ratio, and thereafter use these categories as statistical covariates or exclusion criteria. Instead, they recommended using external predictors of sEI misreporting as statistical covariates. We sought to confirm and extend these findings. Using a sample of 190 adolescent boys (mean age=14), we demonstrated that dual-energy X-ray absorptiometry-measured fat-free mass is strongly associated with objective energy intake data (onsite weighted breakfast), but the association with sEI (previous 3-d dietary interview) is weak. Comparing sEI with TEE revealed that sEI was mostly under-reported (74 %). Interestingly, statistically controlling for dietary reporting groups or restricting samples to plausible reporters created a stronger-than-expected association between fat-free mass and sEI. However, the association was an artifact caused by selection bias - that is, data re-sampling and simulations showed that these methods overestimated the effect size because fat-free mass was related to sEI both directly and indirectly via TEE. A more realistic association between sEI and fat-free mass was obtained when the model included common predictors of misreporting (e.g. BMI, restraint). To conclude, restricting sEI data only to plausible reporters can cause selection bias and inflated associations in later analyses. Therefore, we further support statistically correcting sEI data in nutritional analyses. The script for running simulations is provided.

  5. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
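The unequal-tails idea can be illustrated with a much simpler pivot than the generalised Q-profile statistics the paper uses: the elementary chi-square interval for a single normal variance, which shares the right-skewed sampling distribution that makes the trick work. Everything below is a hypothetical sketch (the numbers, the function names, and the Wilson-Hilferty quantile approximation are illustrative assumptions, not the paper's method).

```python
from statistics import NormalDist

def chi2_quantile(p, k):
    # Wilson-Hilferty approximation to the chi-square quantile function.
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * (2 / (9 * k)) ** 0.5) ** 3

def variance_ci(s2, k, tail_lower_bound, tail_upper_bound):
    # CI for a normal variance from the pivot k*s2/sigma^2 ~ chi-square(k).
    # tail_upper_bound is the tail probability allocated to the upper bound.
    lower = k * s2 / chi2_quantile(1 - tail_lower_bound, k)
    upper = k * s2 / chi2_quantile(tail_upper_bound, k)
    return lower, upper

s2, k = 0.04, 10
equal = variance_ci(s2, k, 0.025, 0.025)    # conventional 2.5 % / 2.5 % split
unequal = variance_ci(s2, k, 0.01, 0.04)    # the '1-4 % split'
width_equal = equal[1] - equal[0]
width_unequal = unequal[1] - unequal[0]
```

Because the pivot's distribution is right-skewed, moving tail probability to the upper bound pulls that bound down faster than the lower bound drops, so the unequal split yields a shorter 95 % interval, which is the paper's central observation.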

  6. Effect of Different Anti-Oxidants on Shear Bond Strength of Composite Resins to Bleached Human Enamel

    PubMed Central

    Saladi, Hari Krishna; Bollu, Indira Priyadarshini; Burla, Devipriya; Ballullaya, Srinidhi Vishnu; Devalla, Srihari; Maroli, Sohani; Jayaprakash, Thumu

    2015-01-01

    Introduction The bond strength of the composite to the bleached enamel plays a very important role in the success and longevity of an aesthetic restoration. Aim The aim of this study was to compare and evaluate the effect of Aloe Vera with 10% Sodium Ascorbate on the shear bond strength of composite resin to bleached human enamel. Materials and Methods Fifty freshly extracted human maxillary central incisors were selected and divided into 5 groups. Groups I and V were the unbleached and bleached control groups, respectively. Groups II, III and IV served as experimental groups. The labial surfaces of groups II, III, IV and V were treated with 35% Carbamide Peroxide for 30 min. Group II specimens were subjected to delayed composite bonding. Group III and IV specimens were treated with 10% Sodium Ascorbate and Aloe Vera leaf extract, respectively, following the Carbamide Peroxide bleaching. Specimens were subjected to shear bond strength testing using a universal testing machine, and the results were statistically analysed using the ANOVA test. Tukey's Honestly Significant Difference (HSD) test was used to comparatively analyse statistical differences between the groups. A p-value <0.05 was taken as statistically significant. Results The mean shear bond strength values of Group V were significantly lower than those of Groups I, II, III and IV (p-value <0.05). There was no statistically significant difference between the shear bond strength values of groups I, II, III and IV. Conclusion Treatment of the bleached enamel surface with Aloe Vera and 10% Sodium Ascorbate provided consistently better bond strength. Aloe Vera may be used as an alternative to 10% Sodium Ascorbate. PMID:26674656

  7. Statistical analysis plan for the Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART). A randomized controlled trial

    PubMed Central

    Damiani, Lucas Petri; Berwanger, Otavio; Paisani, Denise; Laranjeira, Ligia Nasi; Suzumura, Erica Aranha; Amato, Marcelo Britto Passos; Carvalho, Carlos Roberto Ribeiro; Cavalcanti, Alexandre Biasi

    2017-01-01

    Background The Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART) is an international multicenter randomized pragmatic controlled trial with allocation concealment involving 120 intensive care units in Brazil, Argentina, Colombia, Italy, Poland, Portugal, Malaysia, Spain, and Uruguay. The primary objective of ART is to determine whether maximum stepwise alveolar recruitment associated with PEEP titration, adjusted according to the static compliance of the respiratory system (ART strategy), is able to increase 28-day survival in patients with acute respiratory distress syndrome compared to conventional treatment (ARDSNet strategy). Objective To describe the data management process and statistical analysis plan. Methods The statistical analysis plan was designed by the trial executive committee and reviewed and approved by the trial steering committee. We provide an overview of the trial design with a special focus on describing the primary (28-day survival) and secondary outcomes. We describe our data management process, data monitoring committee, interim analyses, and sample size calculation. We describe our planned statistical analyses for primary and secondary outcomes as well as pre-specified subgroup analyses. We also provide details for presenting results, including mock tables for baseline characteristics, adherence to the protocol and effect on clinical outcomes. Conclusion According to best trial practice, we report our statistical analysis plan and data management plan prior to locking the database and beginning analyses. We anticipate that this document will prevent analysis bias and enhance the utility of the reported results. Trial registration ClinicalTrials.gov number, NCT01374022. PMID:28977255

  8. Formalizing the definition of meta-analysis in Molecular Ecology.

    PubMed

    ArchMiller, Althea A; Bauer, Eric F; Koch, Rebecca E; Wijayawardena, Bhagya K; Anil, Ammu; Kottwitz, Jack J; Munsterman, Amelia S; Wilson, Alan E

    2015-08-01

    Meta-analysis, the statistical synthesis of pertinent literature to develop evidence-based conclusions, is relatively new to the field of molecular ecology, with the first meta-analysis published in the journal Molecular Ecology in 2003 (Slate & Phua 2003). The goal of this article is to formalize the definition of meta-analysis for the authors, editors, reviewers and readers of Molecular Ecology by completing a review of the meta-analyses previously published in this journal. We also provide a brief overview of the many components required for meta-analysis with a more specific discussion of the issues related to the field of molecular ecology, including the use and statistical considerations of Wright's FST and its related analogues as effect sizes in meta-analysis. We performed a literature review to identify articles published as 'meta-analyses' in Molecular Ecology, which were then evaluated by at least two reviewers. We specifically targeted Molecular Ecology publications because as a flagship journal in this field, meta-analyses published in Molecular Ecology have the potential to set the standard for meta-analyses in other journals. We found that while many of these reviewed articles were strong meta-analyses, others failed to follow standard meta-analytical techniques. One of these unsatisfactory meta-analyses was in fact a secondary analysis. Other studies attempted meta-analyses but lacked the fundamental statistics that are considered necessary for an effective and powerful meta-analysis. By drawing attention to the inconsistency of studies labelled as meta-analyses, we emphasize the importance of understanding the components of traditional meta-analyses to fully embrace the strengths of quantitative data synthesis in the field of molecular ecology. © 2015 John Wiley & Sons Ltd.
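Among the "fundamental statistics" the review considers necessary, the heart of any meta-analysis is an inverse-variance weighted pooled effect size plus a heterogeneity statistic. A minimal sketch, with hypothetical per-study effect sizes (e.g. Fisher-z transformed correlations or F_ST analogues) and variances invented purely for illustration:

```python
# Hypothetical per-study effect sizes and their sampling variances.
effects = [0.30, 0.12, 0.45, 0.26, 0.18]
variances = [0.010, 0.020, 0.015, 0.008, 0.025]

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = (1 / sum(weights)) ** 0.5

# Cochran's Q, the heterogeneity statistic underlying most meta-analyses;
# large Q relative to (number of studies - 1) suggests a random-effects model.
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
```

Studies labelled "meta-analyses" that merely tally significant results, without effect sizes, weights, or a heterogeneity assessment, are the unsatisfactory cases the review flags.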

  9. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
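The kind of calculation G*Power automates for correlation tests can be approximated by hand via the Fisher z transformation. The sketch below is an approximation for the simplest case (two-sided test of a zero correlation), not G*Power's exact routine; the example numbers are the textbook medium-effect scenario.

```python
from math import atanh, sqrt
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    # Approximate power of a two-sided test of H0: rho = 0, using the Fisher
    # z transformation: atanh(r_hat) ~ Normal(atanh(rho), 1/(n - 3)).
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    delta = atanh(r) * sqrt(n - 3)
    return nd.cdf(delta - z_crit) + nd.cdf(-delta - z_crit)

power = correlation_power(0.3, 84)  # roughly 0.8 for this classic scenario
```

Inverting the same formula over n gives the a priori sample-size calculation that is G*Power's most common use.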

  10. Level of contamination with mycobiota and contents of mycotoxins from the group of trichothecenes in grain of wheat, oats, barley, rye and triticale harvested in Poland in 2006-2008.

    PubMed

    Stuper-Szablewska, Kinga; Perkowski, Juliusz

    2017-03-01

    The risk of cereal exposure to microbial contamination is high and possible at any time, starting from the period of plant vegetation, through harvest, up to the processing, storage and transport of the final product. Contents of mycotoxins in grain are inseparably connected with the presence of fungal biomass, which may indicate the occurrence of a fungus and, indirectly, the products of its metabolism. Analyses were conducted on 378 grain samples of wheat, triticale, barley, rye and oats collected from grain silos located at grain purchase stations and at mills in Poland in 2006, 2007 and 2008. The concentrations of ERG and mycotoxins from the group of trichothecenes, as well as CFU numbers, were analysed. The tested cereals were characterised by similarly low concentrations of both the investigated fungal metabolites and the level of microscopic fungi. However, the statistical analyses conducted showed significant variation between the tested cereals. Oat and rye grain contained the highest amounts of ERG, total toxins and CFU. In turn, the lowest values of the investigated parameters were found in grain of wheat and triticale. Chemometric analyses, based on the results of chemical and microbiological tests, showed slight differences in the contents of the analysed metabolites between the years of the study, and did not confirm observations on the significance of the effect of weather conditions on the development of mycobiota and the production of mycotoxins; this, however, pertains only to samples showing no significant infestation. Highly significant correlations between trichothecene contents and ERG concentration (stronger than the correlation between total toxin concentrations and log CFU/g) indicate that the level of this metabolite is inseparably connected with mycotoxin contents in grain.

  11. Potential of IMU Sensors in Performance Analysis of Professional Alpine Skiers

    PubMed Central

    Yu, Gwangjae; Jang, Young Jae; Kim, Jinhyeok; Kim, Jin Hae; Kim, Hye Young; Kim, Kitae; Panday, Siddhartha Bikram

    2016-01-01

    In this paper, we present an analysis to identify a sensor location for an inertial measurement unit (IMU) on the body of a skier and propose the best location to capture turn motions for training. We also validate the manner in which the data from the IMU sensor on the proposed location can characterize ski turns and performance with a series of statistical analyses, including a comparison with data collected from foot pressure sensors. The goal of the study is to logically identify the ideal location on the skier’s body to attach the IMU sensor and the best use of the data collected for the skier. The statistical analyses and the hierarchical clustering method indicate that the pelvis is the best location for attachment of an IMU, and numerical validation shows that the data collected from this location can effectively estimate the performance and characteristics of the skier. Moreover, placement of the sensor at this location does not distract the skier’s motion, and the sensor can be easily attached and detached. The findings of this study can be used for the development of a wearable device for the routine training of professional skiers. PMID:27043579

  12. Micromechanical investigation of sand migration in gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Uchida, S.; Klar, A.; Cohen, E.

    2017-12-01

    Past field gas production tests from hydrate-bearing sediments have indicated that sand migration is an important phenomenon that needs to be considered for successful long-term gas production. The authors previously developed a continuum-based analytical thermo-hydro-mechanical sand migration model that can be applied to predict wellbore responses during gas production. However, the parameters involved in the model still need to be calibrated and studied thoroughly, and it remains a challenge to conduct well-defined laboratory experiments on sand migration, especially in hydrate-bearing sediments. Taking advantage of the capabilities of the micromechanical modelling approach through the discrete element method (DEM), this work presents a first step towards quantifying one of the model parameters, the one that governs stress reduction due to grain detachment. Grains represented by DEM particles are randomly removed from an isotropically loaded DEM specimen, and statistical analyses reveal that linear proportionality exists between the normalized volume of detached solids and the normalized reduced stresses. DEM specimens with different porosities (different packing densities) are also considered, and statistical analyses show that there is a clear transition between loose sand behavior and dense sand behavior, characterized by the relative density.

  13. Spatial and Alignment Analyses for a field of Small Volcanic Vents South of Pavonis Mons Mars

    NASA Technical Reports Server (NTRS)

    Bleacher, J. E.; Glaze, L. S.; Greeley, R.; Hauber, E.; Baloga, S. M.; Sakimoto, S. E. H.; Williams, D. A.; Glotch, T. D.

    2008-01-01

    The Tharsis province of Mars displays a variety of small volcanic vent (tens of km in diameter) morphologies. These features were identified in Mariner and Viking images [1-4], and Mars Orbiter Laser Altimeter (MOLA) data show them to be more abundant than originally observed [5,6]. Recent studies are classifying their diverse morphologies [7-9]. Building on this work, we are mapping the location of small volcanic vents (small-vents) in the Tharsis province using MOLA, Thermal Emission Imaging System, and High Resolution Stereo Camera data [10]. Here we report on a preliminary study of the spatial and alignment relationships between small-vents south of Pavonis Mons, as determined by nearest neighbor and two-point azimuth statistical analyses. Terrestrial monogenetic volcanic fields display four fundamental characteristics: 1) recurrence rates of eruptions, 2) vent abundance, 3) vent distribution, and 4) tectonic relationships [11]. While understanding recurrence rates typically requires field measurements, insight into vent abundance, distribution, and tectonic relationships can be established by mapping of remotely sensed data, and subsequent application of spatial statistical studies [11,12], the goal of which is to link the distribution of vents to causal processes.
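The nearest-neighbor analysis mentioned above is commonly summarized by the Clark-Evans ratio: observed mean nearest-neighbour distance divided by its expectation under complete spatial randomness. A minimal, edge-uncorrected sketch with hypothetical vent coordinates (this is the generic statistic, not the authors' specific implementation, and the two-point azimuth test is omitted):

```python
import math
import random

def clark_evans_ratio(points, area):
    # Observed mean nearest-neighbour distance divided by its expectation
    # 0.5 / sqrt(density) under complete spatial randomness (CSR).
    # R ~ 1: random; R < 1: clustered vents; R > 1: regularly spaced vents.
    n = len(points)
    mean_nn = sum(
        min(math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points) if j != i)
        for i, (xi, yi) in enumerate(points)
    ) / n
    expected = 0.5 / math.sqrt(n / area)
    return mean_nn / expected

# Hypothetical vent field: 100 vents scattered at random over 100 x 100 km,
# so the ratio should land near 1 (edge effects bias it slightly upward).
random.seed(0)
vents = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(100)]
R = clark_evans_ratio(vents, 100.0 * 100.0)
```

For a mapped vent field, a ratio well below 1 would point to clustering around common magma sources, which is the kind of causal link to vent distribution the abstract describes.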

  14. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements …

  15. The effects of run-of-river hydroelectric power schemes on invertebrate community composition in temperate streams and rivers.

    PubMed

    Bilotta, Gary S; Burnside, Niall G; Turley, Matthew D; Gray, Jeremy C; Orr, Harriet G

    2017-01-01

    Run-of-river (ROR) hydroelectric power (HEP) schemes are often presumed to be less ecologically damaging than large-scale storage HEP schemes. However, there is currently limited scientific evidence on their ecological impact. The aim of this article is to investigate the effects of ROR HEP schemes on communities of invertebrates in temperate streams and rivers, using a multi-site Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 22 systematically-selected ROR HEP schemes and 22 systematically-selected paired control sites. Five widely-used family-level invertebrate metrics (richness, evenness, LIFE, E-PSI, WHPT) were analysed using a linear mixed effects model. The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the evenness of the invertebrate community. However, no statistically significant effects were detected on the four other metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future invertebrate community impact studies.
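    The BACI contrast behind such analyses can be illustrated in a stripped-down form. The paper itself fits a linear mixed effects model with site-level random effects; this difference-in-differences sketch on invented evenness scores omits those random effects:

```python
# Minimal BACI (Before-After, Control-Impact) contrast: the difference in
# differences estimates the scheme effect on a metric such as invertebrate
# evenness. All scores below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def baci_effect(impact_before, impact_after, control_before, control_after):
    """Change at impact sites minus change at control sites."""
    return ((mean(impact_after) - mean(impact_before))
            - (mean(control_after) - mean(control_before)))

effect = baci_effect(
    impact_before=[0.71, 0.69, 0.73],   # evenness before scheme construction
    impact_after=[0.64, 0.62, 0.66],
    control_before=[0.70, 0.72, 0.71],
    control_after=[0.70, 0.71, 0.69],
)
print(round(effect, 2))
```

    A negative effect here would correspond to the reduction in evenness the study reports; the mixed model additionally accounts for repeated measures within sites.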

  16. The effects of run-of-river hydroelectric power schemes on invertebrate community composition in temperate streams and rivers

    PubMed Central

    2017-01-01

    Run-of-river (ROR) hydroelectric power (HEP) schemes are often presumed to be less ecologically damaging than large-scale storage HEP schemes. However, there is currently limited scientific evidence on their ecological impact. The aim of this article is to investigate the effects of ROR HEP schemes on communities of invertebrates in temperate streams and rivers, using a multi-site Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 22 systematically-selected ROR HEP schemes and 22 systematically-selected paired control sites. Five widely-used family-level invertebrate metrics (richness, evenness, LIFE, E-PSI, WHPT) were analysed using a linear mixed effects model. The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the evenness of the invertebrate community. However, no statistically significant effects were detected on the four other metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future invertebrate community impact studies. PMID:28158282

  17. Prognostic value of inflammation-based scores in patients with osteosarcoma

    PubMed Central

    Liu, Bangjian; Huang, Yujing; Sun, Yuanjue; Zhang, Jianjun; Yao, Yang; Shen, Zan; Xiang, Dongxi; He, Aina

    2016-01-01

    Systemic inflammation responses have been associated with cancer development and progression. C-reactive protein (CRP), Glasgow prognostic score (GPS), neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), and neutrophil-platelet score (NPS) have been shown to be independent risk factors in various types of malignant tumors. This retrospective analysis of 162 osteosarcoma cases was performed to estimate their predictive value for survival in osteosarcoma. All statistical analyses were performed with SPSS statistical software. Receiver operating characteristic (ROC) analysis was performed to set optimal thresholds; area under the curve (AUC) was used to show the discriminatory abilities of the inflammation-based scores; Kaplan-Meier analysis was performed to plot survival curves; Cox regression models were employed to determine the independent prognostic factors. The optimal cut-off points of NLR, PLR, and LMR were 2.57, 123.5 and 4.73, respectively. GPS and NLR had a markedly larger AUC than CRP, PLR and LMR. High levels of CRP, GPS, NLR, PLR, and low level of LMR were significantly associated with adverse prognosis (P < 0.05). Multivariate Cox regression analyses revealed that GPS, NLR, and occurrence of metastasis were top risk factors associated with death of osteosarcoma patients. PMID:28008988
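    One common way such ROC cut-offs (e.g. the NLR threshold of 2.57) are derived is the Youden index, which maximizes sensitivity + specificity - 1 over candidate thresholds. A sketch with invented scores and outcomes:

```python
# Choosing an "optimal" ROC cut-off with the Youden index. Scores and
# outcome labels are illustrative, not patient data from the study.

def youden_cutoff(scores, labels):
    """Return the threshold maximizing sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    pos = sum(labels)
    neg = len(labels) - pos
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

scores = [1.2, 1.8, 2.1, 2.6, 3.0, 3.4, 4.1, 4.8]   # e.g. NLR values
labels = [0,   0,   0,   0,   1,   1,   1,   1]      # 1 = adverse outcome
print(youden_cutoff(scores, labels))
```

    Dedicated packages also report the AUC and confidence intervals, which this sketch omits.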

  18. SEER Cancer Query Systems (CanQues)

    Cancer.gov

    These applications provide access to cancer statistics including incidence, mortality, survival, prevalence, and probability of developing or dying from cancer. Users can display reports of the statistics or extract them for additional analyses.

  19. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different from statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  20. A comparative study of two statistical approaches for the analysis of real seismicity sequences and synthetic seismicity generated by a stick-slip experimental model

    NASA Astrophysics Data System (ADS)

    Flores-Marquez, Leticia Elsa; Ramirez Rojaz, Alejandro; Telesca, Luciano

    2015-04-01

    Two statistical approaches are analyzed for two different types of data sets: one is the seismicity generated by subduction processes along the south Pacific coast of Mexico between 2005 and 2012, and the other is synthetic seismic data generated by a stick-slip experimental model. The statistical methods used in the present study are the visibility graph, to investigate the time dynamics of the series, and the scaled probability density function in the natural time domain, to investigate the critical order of the system. The purpose of this comparison is to show the similarities between the dynamical behaviors of both types of data sets from the point of view of critical systems. The observed behaviors allow us to conclude that the experimental set-up globally reproduces the behavior observed in the statistical approaches used to analyse the seismicity of the subduction zone. The present study was supported by the Bilateral Project Italy-Mexico "Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences", jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
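    The natural visibility graph mentioned above maps a time series to a network: each sample is a node, and two samples are linked if the straight line between the tops of their bars clears every intermediate sample. A sketch on a toy series (indices stand in for time stamps):

```python
# Natural visibility graph of a time series. Node a "sees" node b if every
# intermediate sample c lies strictly below the line joining (a, y[a]) and
# (b, y[b]). Degree distributions of this graph characterize the dynamics.

def visibility_edges(y):
    edges = []
    n = len(y)
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

series = [3.0, 1.0, 2.0, 0.5, 4.0]   # illustrative magnitudes, not real data
print(visibility_edges(series))
```

    Adjacent samples are always connected; large values such as the last one tend to become hubs, which is what makes the degree distribution informative about the series.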

  1. Facilitating the Transition from Bright to Dim Environments

    DTIC Science & Technology

    2016-03-04

    For the parametric data, a multivariate ANOVA was used in determining the systematic presence of any statistically significant performance differences...performed. All significance levels were p < 0.05, and statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS)...1950. Age changes in rate and level of visual dark adaptation. Journal of Applied Physiology, 2, 407–411. Field, A. 2009. Discovering statistics

  2. Detecting differential DNA methylation from sequencing of bisulfite converted DNA of diverse species.

    PubMed

    Huh, Iksoo; Wu, Xin; Park, Taesung; Yi, Soojin V

    2017-07-21

    DNA methylation is one of the most extensively studied epigenetic modifications of genomic DNA. In recent years, sequencing of bisulfite-converted DNA, particularly via next-generation sequencing technologies, has become a widely popular method to study DNA methylation. This method can be readily applied to a variety of species, dramatically expanding the scope of DNA methylation studies beyond the traditionally studied human and mouse systems. In parallel to the increasing wealth of genomic methylation profiles, many statistical tools have been developed to detect differentially methylated loci (DMLs) or differentially methylated regions (DMRs) between biological conditions. We discuss and summarize several key properties of currently available tools to detect DMLs and DMRs from sequencing of bisulfite-converted DNA. However, the majority of the statistical tools developed for DML/DMR analyses have been validated using only mammalian data sets, and less priority has been placed on the analyses of invertebrate or plant DNA methylation data. We demonstrate that genomic methylation profiles of non-mammalian species are often highly distinct from those of mammalian species using examples of honey bees and humans. We then discuss how such differences in data properties may affect statistical analyses. Based on these differences, we provide three specific recommendations to improve the power and accuracy of DML and DMR analyses of invertebrate data when using currently available statistical tools. These considerations should facilitate systematic and robust analyses of DNA methylation from diverse species, thus advancing our understanding of DNA methylation. © The Author 2017. Published by Oxford University Press.
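    Per-locus DML testing from bisulfite counts can be illustrated with a two-proportion z-test on methylated/total read counts; dedicated tools typically use Fisher's exact or beta-binomial models instead, and the counts here are invented:

```python
# Two-proportion z-test for differential methylation at one CpG site:
# compare methylated/total read counts between two conditions. A large |z|
# suggests a differentially methylated locus (DML). Counts are illustrative.
import math

def dml_z(meth1, total1, meth2, total2):
    p1, p2 = meth1 / total1, meth2 / total2
    p = (meth1 + meth2) / (total1 + total2)        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / total1 + 1 / total2))
    return (p1 - p2) / se

# illustrative counts for one CpG site in two conditions
z = dml_z(meth1=18, total1=30, meth2=5, total2=28)
print(round(z, 2))
```

    For sparsely methylated genomes such as invertebrates', low coverage and near-zero methylation levels make this normal approximation poor, which is one reason the authors recommend care when reusing mammalian-validated tools.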

  3. Imaging Depression in Adults with ASD

    DTIC Science & Technology

    2017-10-01

    collected temporally close enough to imaging data in Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk...Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk attrition between Phase 1 and 2, we chose to hold...supervision is ongoing (since 9/2014). • Co-l Dr. Lerner’s 2nd year Clinical Psychology PhD students have participated in ADOS- 2 Introductory Clinical

  4. Statistical Parametric Mapping to Identify Differences between Consensus-Based Joint Patterns during Gait in Children with Cerebral Palsy.

    PubMed

    Nieuwenhuys, Angela; Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne

    2017-01-01

    Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with 'no or minor gait deviations' (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with 'no or minor gait deviations' differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. 
Based on these findings, suggestions to improve pattern definitions were made.

  5. Statistical Parametric Mapping to Identify Differences between Consensus-Based Joint Patterns during Gait in Children with Cerebral Palsy

    PubMed Central

    Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne

    2017-01-01

    Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with ‘no or minor gait deviations’ (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with ‘no or minor gait deviations’ differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. 
Based on these findings, suggestions to improve pattern definitions were made. PMID:28081229

  6. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    PubMed

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
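    The rank-correlation trend criterion in such an evaluation system can be sketched with a Spearman correlation between survey year and litter abundance. Annual counts below are invented, not OSPAR data:

```python
# Spearman rank correlation between survey year and litter abundance:
# a significant negative coefficient would flag a decreasing temporal trend.
# This simple ranking ignores ties, which the example data avoid.

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

years = [2001, 2002, 2003, 2004, 2005, 2006]
items = [210, 190, 230, 160, 150, 140]   # litter items per survey (invented)
print(round(spearman(years, items), 2))
```

    The paper combines such rank correlations with cluster-derived beach classes; this sketch shows only the trend component.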

  7. Analogy as a strategy for supporting complex problem solving under uncertainty.

    PubMed

    Chan, Joel; Paletz, Susannah B F; Schunn, Christian D

    2012-11-01

    Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.

  8. Demographic change and carbon dioxide emissions.

    PubMed

    O'Neill, Brian C; Liddle, Brant; Jiang, Leiwen; Smith, Kirk R; Pachauri, Shonali; Dalton, Michael; Fuchs, Regina

    2012-07-14

    Relations between demographic change and emissions of the major greenhouse gas carbon dioxide (CO(2)) have been studied from different perspectives, but most projections of future emissions only partly take demographic influences into account. We review two types of evidence for how CO(2) emissions from the use of fossil fuels are affected by demographic factors such as population growth or decline, ageing, urbanisation, and changes in household size. First, empirical analyses of historical trends tend to show that CO(2) emissions from energy use respond almost proportionately to changes in population size and that ageing and urbanisation have less than proportional but statistically significant effects. Second, scenario analyses show that alternative population growth paths could have substantial effects on global emissions of CO(2) several decades from now, and that ageing and urbanisation can have important effects in particular world regions. These results imply that policies that slow population growth would probably also have climate-related benefits. Copyright © 2012 Elsevier Ltd. All rights reserved.
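    The "almost proportionate" response of emissions to population size is usually expressed as an elasticity: the slope of ln(CO2) on ln(population), a bivariate slice of STIRPAT-style models. A sketch on synthetic numbers, not observed country data:

```python
# Population elasticity of emissions as the OLS slope in log-log space.
# A slope near 1 corresponds to an almost proportionate response.
# Both series below are synthetic.
import math

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

pop = [10, 25, 60, 140, 320]       # population, millions (synthetic)
co2 = [52, 128, 310, 700, 1650]    # emissions, Mt CO2 (synthetic)
slope = ols_slope([math.log(p) for p in pop], [math.log(c) for c in co2])
print(round(slope, 2))
```

    Full STIRPAT models add affluence, technology, and demographic-structure terms (ageing, urbanisation, household size) to the same log-linear regression.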

  9. The application of quantitative methods for identifying and exploring the presence of bias in systematic reviews: PDE-5 inhibitors for erectile dysfunction.

    PubMed

    Bekkering, G E; Abou-Setta, A M; Kleijnen, J

    2008-01-01

    A systematic review of PDE-5 inhibitors for erectile dysfunction was performed to evaluate the utility of quantitative methods for identifying and exploring the influence of bias and study quality on pooled outcomes from meta-analyses. We included 123 randomized controlled trials (RCTs). Methodological quality was poorly reported. All three drugs appeared highly effective. Indirect adjusted analyses showed no differences between the three drugs. Funnel plots and statistical tests showed no evidence of small-study effects for sildenafil, whereas there was evidence of such bias for tadalafil and vardenafil. Adjustment for missing studies using trim and fill techniques did not alter the pooled estimates substantially. The exclusion of previous sildenafil nonresponders was associated with larger treatment effects for tadalafil. This investigation was hampered by poor reporting of methodological quality, a low number of studies, heterogeneity and large effect sizes. Despite such limitations, a comprehensive assessment of biases should be routine in systematic reviews.
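    The funnel-plot tests referred to above are often Egger-style regressions: the standardized effect is regressed on precision, and an intercept far from zero suggests small-study effects. A sketch with invented effect sizes and standard errors, not values from this review:

```python
# Egger-style funnel-plot asymmetry check: regress effect/SE on 1/SE.
# A symmetric funnel gives an intercept near zero; inflated small-study
# effects push it away from zero. All inputs below are invented.

def egger_intercept(effects, ses):
    x = [1.0 / s for s in ses]                      # precision
    y = [e / s for e, s in zip(effects, ses)]       # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx

# small studies (large SE) reporting inflated effects -> positive intercept
print(round(egger_intercept([0.8, 0.6, 0.45, 0.4], [0.5, 0.3, 0.15, 0.1]), 2))
```

    In practice the intercept is reported with a significance test, and trim-and-fill is applied separately to estimate how many studies the asymmetry implies are missing.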

  10. An analysis of the AVE-SESAME I period using statistical structure and correlation functions. [Atmospheric Variability Experiment-Severe Environmental Storm and Mesoscale Experiment

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Meyer, P. J.

    1984-01-01

    Structure and correlation functions are used to describe atmospheric variability during the 10-11 April day of AVE-SESAME 1979 that coincided with the Red River Valley tornado outbreak. The special mesoscale rawinsonde data are employed in calculations involving temperature, geopotential height, horizontal wind speed and mixing ratio. Functional analyses are performed in both the lower and upper troposphere for the composite 24 h experiment period and at individual 3 h observation times. Results show that mesoscale features are prominent during the composite period. Fields of mixing ratio and horizontal wind speed exhibit the greatest amounts of small-scale variance, whereas temperature and geopotential height contain the least. Results for the nine individual times show that small-scale variance is greatest during the convective outbreak. The functions also are used to estimate random errors in the rawinsonde data. Finally, sensitivity analyses are presented to quantify confidence limits of the structure functions.
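    The second-order structure function used in such variability studies averages squared differences of a variable over station pairs, binned by separation distance. A sketch with invented station data, not AVE-SESAME soundings:

```python
# Binned (second-order) structure function for a scattered station network:
# D(r) = mean of (f(x_i) - f(x_j))^2 over pairs whose separation falls in
# each distance bin. Growth of D with r reflects spatial variability.
import math

def structure_function(stations, values, bins):
    """bins: list of (r_min, r_max); returns mean squared difference per bin."""
    sums = [0.0] * len(bins)
    counts = [0] * len(bins)
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            r = math.dist(stations[i], stations[j])
            for k, (lo, hi) in enumerate(bins):
                if lo <= r < hi:
                    sums[k] += (values[i] - values[j]) ** 2
                    counts[k] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

stations = [(0, 0), (50, 0), (0, 60), (120, 0), (0, 130)]   # km (invented)
mixing_ratio = [9.0, 8.1, 8.4, 6.5, 6.2]                    # g/kg (invented)
print(structure_function(stations, mixing_ratio, [(0, 100), (100, 200)]))
```

    Extrapolating the fitted structure function to zero separation is also how such studies estimate random observational error, since a noise-free field should have D(0) = 0.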

  11. Statistical Analysis of Individual Participant Data Meta-Analyses: A Comparison of Methods and Recommendations for Practice

    PubMed Central

    Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.

    2012-01-01

    Background Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. 
Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232
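    The "two-stage" approach described above can be sketched directly: stage 1 computes a log relative risk and its variance within each trial, and stage 2 pools them with fixed-effect inverse-variance weights. Trial counts (events/total per arm) below are invented, not the pre-eclampsia data:

```python
# Two-stage IPD-style meta-analysis sketch: per-trial log relative risk,
# then fixed-effect inverse-variance pooling. Counts are invented.
import math

def log_rr(e1, n1, e0, n0):
    """Log relative risk and its approximate variance for one trial."""
    lrr = math.log((e1 / n1) / (e0 / n0))
    var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0
    return lrr, var

def pool(trials):
    stats = [log_rr(*t) for t in trials]
    w = [1 / v for _, v in stats]
    pooled = sum(wi * l for wi, (l, _) in zip(w, stats)) / sum(w)
    return math.exp(pooled)          # pooled relative risk

trials = [(30, 400, 36, 410), (12, 150, 15, 148), (45, 600, 49, 590)]
print(round(pool(trials), 2))
```

    One-stage models instead fit a single (generalized linear mixed) regression to all participant records at once, which is what requires the extra statistical support the abstract discusses.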

  12. Explorative spatial analysis of traffic accident statistics and road mortality among the provinces of Turkey.

    PubMed

    Erdogan, Saffet

    2009-10-01

    The aim of the study is to describe the inter-province differences in traffic accidents and mortality on roads of Turkey. Two different risk indicators were used to evaluate the road safety performance of the provinces in Turkey. These indicators are the ratios between the number of persons killed in road traffic accidents (1) or the number of accidents (2) (numerators) and the exposure to traffic risk (denominator). Population and the number of registered motor vehicles in the provinces were used as denominators individually. Spatial analyses were performed on the mean annual rate of deaths and the number of fatal accidents calculated for the period 2001-2006. Empirical Bayes smoothing was used to remove background noise from the raw death and accident rates because of the sparsely populated provinces and the small numbers of accidents and deaths in some provinces. Global and local spatial autocorrelation analyses were performed to show whether the provinces with high rates of deaths and accidents show clustering or are located closer by chance. The spatial distribution of provinces with high rates of deaths and accidents was nonrandom and detected as clustered (P<0.05) by the spatial autocorrelation analyses. Regions with a high concentration of fatal accidents and deaths were located in the provinces containing the roads connecting the Istanbul, Ankara, and Antalya provinces. Accident and death rates were also modeled with independent variables such as the number of motor vehicles, length of roads, and so forth using geographically weighted regression analysis with forward step-wise elimination. The level of statistical significance was taken as P<0.05. Large differences were found between the rates of deaths and accidents according to denominators in the provinces. The geographically weighted regression analyses produced significantly better predictions for both accident rates and death rates than did ordinary least squares regressions, as indicated by adjusted R(2) values. Geographically weighted regression provided adjusted R(2) values of 0.89-0.99 for death and accident rates, compared with 0.88-0.95, respectively, for ordinary least squares regressions. Geographically weighted regression has the potential to reveal local patterns in the spatial distribution of rates, which would be ignored by the ordinary least squares regression approach. The application of spatial analysis and modeling to accident statistics and death rates at the provincial level in Turkey will help to identify provinces with outstandingly high accident and death rates. This could support more efficient road safety management in Turkey.
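    The empirical Bayes smoothing step can be sketched with the simple global-shrinkage estimator: raw rates from small populations are pulled toward the overall mean in proportion to their unreliability. Counts and populations below are invented, not Turkish province data:

```python
# Global empirical Bayes smoothing of area-level rates (Marshall-style):
# each raw rate is shrunk toward the overall mean rate m, with shrinkage
# weight depending on the area's population. Inputs are invented.

def eb_smooth(events, pops):
    n = len(events)
    rates = [e / p for e, p in zip(events, pops)]
    total_e, total_p = sum(events), sum(pops)
    m = total_e / total_p                              # global mean rate
    mean_pop = total_p / n
    var = sum(p * (r - m) ** 2 for p, r in zip(pops, rates)) / total_p
    var = max(var - m / mean_pop, 0.0)                 # prior (between-area) variance
    out = []
    for r, p in zip(rates, pops):
        w = var / (var + m / p)                        # shrinkage weight
        out.append(m + w * (r - m))
    return out

deaths = [2, 40, 9]
population = [5_000, 800_000, 120_000]   # the small province shrinks most
print([round(r * 1e5, 1) for r in eb_smooth(deaths, population)])
```

    Local variants shrink toward a neighbourhood mean instead of the global one, which suits clustered spatial data better.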

  13. A multi-wave study of organizational justice at work and long-term sickness absence among employees with depressive symptoms.

    PubMed

    Hjarsbech, Pernille U; Christensen, Karl Bang; Bjorner, Jakob B; Madsen, Ida E H; Thorsen, Sannie V; Carneiro, Isabella G; Christensen, Ulla; Rugulies, Reiner

    2014-03-01

    Mental health problems are strong predictors of long-term sickness absence (LTSA). In this study, we investigated whether organizational justice at work - fairness in resolving conflicts and distributing work - prevents risk of LTSA among employees with depressive symptoms. In a longitudinal study with five waves of data collection, we examined a cohort of 1034 employees with depressive symptoms. Depressive symptoms and organizational justice were assessed by self-administered questionnaires and information on LTSA was derived from a national register. Using Poisson regression analyses, we calculated rate ratios (RR) for the prospective association of organizational justice and change in organizational justice with time to onset of LTSA. All analyses were sex stratified. Among men, intermediate levels of organizational justice were statistically significantly associated with a decreased risk of subsequent LTSA after adjustment for covariates [RR 0.49, 95% confidence interval (95% CI) 0.26-0.91]. There was also a decreased risk for men with high levels of organizational justice although these estimates did not reach statistical significance after adjustment (RR 0.47, 95% CI 0.20-1.10). We found no such results for women. In both sexes, neither favorable nor adverse changes in organizational justice were statistically significantly associated with the risk of LTSA. This study shows that organizational justice may have a protective effect on the risk of LTSA among men with depressive symptoms. A protective effect of favorable changes in organizational justice was not found.

  14. Quadriceps Tendon Autograft in Anterior Cruciate Ligament Reconstruction: A Systematic Review.

    PubMed

    Hurley, Eoghan T; Calvo-Gurry, Manuel; Withers, Dan; Farrington, Shane K; Moran, Ray; Moran, Cathal J

    2018-05-01

    To systematically review the current evidence to ascertain whether quadriceps tendon autograft (QT) is a viable option in anterior cruciate ligament reconstruction. A literature review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. Cohort studies comparing QT with bone-patellar tendon-bone autograft (BPTB) or hamstring tendon autograft (HT) were included. Clinical outcomes were compared, with all statistical analyses performed using IBM SPSS Statistics for Windows, version 22.0, with P < .05 being considered statistically significant. We identified 15 clinical trials with 1,910 patients. In all included studies, QT resulted in lower rates of anterior knee pain than BPTB. There was no difference in the rate of graft rupture between QT and BPTB or HT in any of the studies reporting this. One study found that QT resulted in greater knee stability than BPTB, and another study found increased stability compared with HT. One study found that QT resulted in improved functional outcomes compared with BPTB, and another found improved outcomes compared with HT, but one study found worse outcomes compared with BPTB. Current literature suggests QT is a viable option in anterior cruciate ligament reconstruction, with published literature showing comparable knee stability, functional outcomes, donor-site morbidity, and rerupture rates compared with BPTB and HT. Level III, systematic review of Level I, II, and III studies. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  15. Spatial variation in the bacterial and denitrifying bacterial community in a biofilter treating subsurface agricultural drainage.

    PubMed

    Andrus, J Malia; Porter, Matthew D; Rodríguez, Luis F; Kuehlhorn, Timothy; Cooke, Richard A C; Zhang, Yuanhui; Kent, Angela D; Zilles, Julie L

    2014-02-01

    Denitrifying biofilters can remove agricultural nitrates from subsurface drainage, reducing nitrate pollution that contributes to coastal hypoxic zones. The performance and reliability of natural and engineered systems dependent upon microbially mediated processes, such as denitrifying biofilters, can be affected by the spatial structure of their microbial communities. Furthermore, our understanding of the relationship between microbial community composition and function is influenced by the spatial distribution of samples. In this study we characterized the spatial structure of bacterial communities in a denitrifying biofilter in central Illinois. Bacterial communities were assessed using automated ribosomal intergenic spacer analysis for bacteria and terminal restriction fragment length polymorphism of nosZ for denitrifying bacteria. Non-metric multidimensional scaling and analysis of similarity (ANOSIM) analyses indicated that bacteria showed statistically significant spatial structure by depth and transect, while denitrifying bacteria did not exhibit significant spatial structure. For determination of spatial patterns, we developed a package of automated functions for the R statistical environment that allows directional analysis of microbial community composition data using either ANOSIM or Mantel statistics. Applying this package to the biofilter data, the flow path correlation range for the bacterial community was 6.4 m at the shallower, periodically inundated depth and 10.7 m at the deeper, continually submerged depth. These spatial structures suggest a strong influence of hydrology on the microbial community composition in these denitrifying biofilters. Understanding such spatial structure can also guide optimal sample collection strategies for microbial community analyses.
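    The directional analysis described above is implemented in the authors' R package; as a rough illustration of the underlying statistic only, here is a minimal pure-Python Mantel test on toy distance matrices (the site layout and "noise" term are invented for the example):

```python
import math
import random

def mantel(d1, d2, n_perm=999, seed=1):
    """Mantel test: permutation correlation between two symmetric distance matrices."""
    n = len(d1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((p - ma) * (q - mb) for p, q in zip(a, b))
        den = math.sqrt(sum((p - ma) ** 2 for p in a)
                        * sum((q - mb) ** 2 for q in b))
        return num / den

    x = [d1[i][j] for i, j in pairs]
    r_obs = corr(x, [d2[i][j] for i, j in pairs])
    rng = random.Random(seed)
    order, hits = list(range(n)), 0
    for _ in range(n_perm):
        rng.shuffle(order)  # permute site labels of the second matrix
        if corr(x, [d2[order[i]][order[j]] for i, j in pairs]) >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)   # one-sided permutation p-value

# Toy data: six sites along a flow path; d2 mirrors d1 plus a small deterministic
# perturbation (only off-diagonal entries are used by the test)
d1 = [[abs(i - j) for j in range(6)] for i in range(6)]
d2 = [[abs(i - j) + 0.1 * ((i * j) % 3) for j in range(6)] for i in range(6)]
r_obs, p_value = mantel(d1, d2)
```

A directional variant would restrict or weight the pairs along the flow path; the sketch above is the plain isotropic test.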

  16. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures-the Implicit Association Test, affect misattribution procedure, and evaluative priming task-and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60 %) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.

  17. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance, and the percentile method (PM) to estimate the distribution parameters. The findings indicated that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerance. Moreover, in the case of assembled sets, more extensive tolerance for each component with the same target performance can be utilized.
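    A sketch of the generalized lambda distribution's quantile function in the Ramberg-Schmeiser parameterization, which percentile-method fitting typically targets. The parameter values below are illustrative (they approximate a normal shape), not fitted ones from the paper:

```python
def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution
    (Ramberg-Schmeiser form): Q(u) = lam1 + (u^lam3 - (1-u)^lam4) / lam2."""
    return lam1 + (u ** lam3 - (1.0 - u) ** lam4) / lam2

# Hypothetical parameters for a component dimension (mm): location 10,
# and shape values that approximate a unit-variance normal distribution.
lam = (10.0, 0.1975, 0.1349, 0.1349)

# Natural tolerance limits covering ~99.73% of parts (the +/-3-sigma coverage
# usually quoted in tolerancing handbooks):
lower = gld_quantile(0.00135, *lam)
upper = gld_quantile(0.99865, *lam)
```

Because the quantile function is explicit, tolerance limits follow directly from estimated parameters with no distributional table lookups.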

  18. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  19. Quantitative Methods for Analysing Joint Questionnaire Data: Exploring the Role of Joint in Force Design

    DTIC Science & Technology

    2015-08-01

    the nine questions. The Statistical Package for the Social Sciences (SPSS) [11] was used to conduct statistical analysis on the sample. Two types... constructs. SPSS was again used to conduct statistical analysis on the sample. This time factor analysis was conducted. Factor analysis attempts to... Business Research Methods and Statistics using SPSS. P432. 11 IBM SPSS Statistics. (2012) 12 Burns, R.B., Burns, R.A. (2008) 'Business Research

  20. The statistical analysis of global climate change studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, J.W.

    1992-01-01

    The focus of this work is to contribute to the enhancement of the relationship between climatologists and statisticians. The analysis of global change data has been underway for many years by atmospheric scientists. Much of this analysis includes a heavy reliance on statistics and statistical inference. Some specific climatological analyses are presented and the dependence on statistics is documented before the analysis is undertaken. The first problem presented involves the fluctuation-dissipation theorem and its application to global climate models. This problem has a sound theoretical niche in the literature of both climate modeling and physics, but a statistical analysis in which the data is obtained from the model to show graphically the relationship has not been undertaken. It is under this motivation that the author presents this problem. A second problem concerning the standard errors in estimating global temperatures is purely statistical in nature although very little material exists for sampling on such a frame. This problem not only has climatological and statistical ramifications, but political ones as well. It is planned to use these results in a further analysis of global warming using actual data collected on the earth. In order to simplify the analysis of these problems, the development of a computer program, MISHA, is presented. This interactive program contains many of the routines, functions, graphics, and map projections needed by the climatologist in order to effectively enter the arena of data visualization.

  1. [Distribution of soil heavy metal and pollution evaluation on the different sampling scales in farmland on Yellow River irrigation area of Ningxia: a case study in Xingqing County of Yinchuan City].

    PubMed

    Wang, You-Qi; Bai, Yi-Ru; Wang, Jian-Yu

    2014-07-01

    Determining the spatial distribution and analysing the contamination condition of soil heavy metals plays an important role in evaluating the quality of the agricultural ecological environment and protecting food safety and human health. Topsoil samples (0-20 cm) from 223 sites in farmland were collected at two sampling-grid scales (1 m x 1 m, 10 m x 10 m) in the Yellow River irrigation area of Ningxia. The objectives of this study were to investigate the spatial variability of total copper (Cu), total zinc (Zn), total chrome (Cr), total cadmium (Cd) and total lead (Pb) on the two sampling scales by classical and geostatistical analyses. The single pollution index (P(i)) and the Nemerow pollution index (P) were used to evaluate the soil heavy metal pollution. The classical statistical analyses showed that all soil heavy metals demonstrated moderate variability; the coefficient of variation (CV) changed in the following sequence: Cd > Pb > Cr > Zn > Cu. Geostatistical analyses showed that the nugget coefficients of Cd on the 10 m x 10 m scale and Pb on the 1 m x 1 m scale were 100%, with pure nugget variograms, indicating weak spatial dependence dominated by random factors. The nugget coefficients of the other indexes were less than 25%, indicating strong spatial dependence dominated by structural factors. The results combined with P(i) and P indicated that most soil heavy metals showed slight pollution except total copper, and in general there was a trend of heavy metal accumulation in the study area.
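    The two indices used above have simple closed forms: the single pollution index P_i = C_i / S_i per metal, and the Nemerow composite P = sqrt((P_max^2 + P_mean^2) / 2). A sketch with hypothetical concentrations and evaluation standards (not the study's data):

```python
import math

def nemerow_index(concentrations, standards):
    """Single pollution indices P_i = C_i / S_i and the Nemerow composite
    index P = sqrt((P_max^2 + P_mean^2) / 2)."""
    p = [c / s for c, s in zip(concentrations, standards)]
    p_mean = sum(p) / len(p)
    p_max = max(p)
    return p, math.sqrt((p_max ** 2 + p_mean ** 2) / 2.0)

# Hypothetical topsoil concentrations (mg/kg) and evaluation standards:
conc = {"Cu": 28.0, "Zn": 75.0, "Cr": 62.0, "Cd": 0.15, "Pb": 22.0}
std  = {"Cu": 35.0, "Zn": 100.0, "Cr": 90.0, "Cd": 0.10, "Pb": 35.0}
p_i, p = nemerow_index(list(conc.values()), list(std.values()))
```

Because P weights the maximum single index heavily, one elevated metal (here the hypothetical Cd value) can push the composite index above 1 even when the average index is below 1.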

  2. Differing long term trends for two common amphibian species (Bufo bufo and Rana temporaria) in alpine landscapes of Salzburg, Austria

    PubMed Central

    Kyek, Martin; Lindner, Robert

    2017-01-01

    This study focuses on the population trends of two widespread European anuran species: the common toad (Bufo bufo) and the common frog (Rana temporaria). The basis of this study is data gathered over two decades of amphibian fencing alongside roads in the Austrian state of Salzburg. Different statistical approaches were used to analyse the data. Overall average increase or decrease of each species was estimated by calculating a simple average locality index. In addition the statistical software TRIM was used to verify these trends as well as to categorize the data based on the geographic location of each migration site. The results show differing overall trends for the two species: the common toad being stable and the common frog showing a substantial decline over the last two decades. Further analyses based on geographic categorization reveal the strongest decrease in the alpine range of the species. Drainage and agricultural intensification are still ongoing problems within alpine areas, not only in Salzburg. Particularly in respect to micro-climate and the availability of spawning places these changes appear to have a greater impact on the habitats of the common frog than the common toad. Therefore we consider habitat destruction to be the main potential reason behind this dramatic decline. We also conclude that the substantial loss of biomass of a widespread species such as the common frog must have a severe, and often overlooked, ecological impact. PMID:29121054

  3. Impact of cosmetic result on selection of surgical treatment in patients with localized prostate cancer.

    PubMed

    Rojo, María Alejandra Egui; Martinez-Salamanca, Juan Ignacio; Maestro, Mario Alvarez; Galarza, Ignacio Sola; Rodriguez, Joaquin Carballido

    2014-01-01

    To analyze the effect of cosmetic outcome as an isolated variable in patients undergoing surgical treatment based on the incision used in the 3 variants of radical prostatectomy: open (infraumbilical incision and Pfannenstiel incision) and laparoscopic, or robotic (6 ports) surgery. 612 male patients 40 to 70 years of age with a negative history of prostate disease were invited to participate. Each patient was evaluated by questionnaire accompanied by a set of 6 photographs showing the cosmetic appearance of the 3 approaches, with and without undergarments. Participants ranked the approaches according to preference, on the basis of cosmesis. We also recorded demographic variables: age, body mass index, marital status, education level, and physical activity. Of the 577 patients who completed the questionnaires, the 6-port minimally invasive approach was the option preferred by 52% of the participants, followed by the Pfannenstiel incision (46%) and the infraumbilical incision (11%). The univariate and multivariate analyses did not show statistically significant differences when comparing the approach preferred by the patients and the sub-analyses for demographic variables, except for patients who exercised, who preferred the Pfannenstiel incision (58%) over the minimally invasive approach (42%), with statistically significant differences. The minimally invasive approach was the approach of choice for the majority of patients in the treatment of prostate cancer. The Pfannenstiel incision represents an acceptable alternative. More research and investment may be necessary to improve cosmetic outcomes.

  4. Quantum behaviour of open pumped and damped Bose-Hubbard trimers

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2018-01-01

    We propose and analyse analogs of optical cavities for atoms using three-well inline Bose-Hubbard models with pumping and losses. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a qualitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, although most do enter a steady-state regime. We find quadrature squeezing, bipartite and tripartite inseparability and entanglement, and states exhibiting the EPR paradox, depending on the parameter regimes. We also discover situations where the mean-field solutions of our models are noticeably different from the quantum solutions for the mean fields. Due to recent experimental advances, it should be possible to demonstrate the effects we predict and investigate in this article.

  5. Analysis on the Climate Change Characteristics of Dianchi Lake Basin under the Background of Global Warming

    NASA Astrophysics Data System (ADS)

    Zhenyu, Yu; Luo, Yi; Yang, Kun; Qiongfei, Deng

    2017-05-01

    Based on data published by the State Statistical Bureau and weather-station records, the annual mean temperature, wind speed, humidity, sunshine duration and precipitation of the Dianchi Lake basin in 1990-2014 were analysed using linear regression, the Mann-Kendall test, cumulative anomaly, R/S and Morlet wavelet analysis. Combined with data on population and housing construction area, the results show that climatic changes in the Dianchi Lake basin over the past 25 years are correlated with these factors, revealing that population expansion and growth in housing construction area have a significant impact on the periodic tendency of the main climatic factors.

  6. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  7. Statistical Literacy in the Data Science Workplace

    ERIC Educational Resources Information Center

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  8. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t -tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses and less than a third of the articles presented on data complications such as missing data and violations of statistical assumptions. Strengths of and areas needing improvement for reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.

  9. DNA viewed as an out-of-equilibrium structure

    NASA Astrophysics Data System (ADS)

    Provata, A.; Nicolis, C.; Nicolis, G.

    2014-05-01

    The complexity of the primary structure of human DNA is explored using methods from nonequilibrium statistical mechanics, dynamical systems theory, and information theory. A collection of statistical analyses is performed on the DNA data and the results are compared with sequences derived from different stochastic processes. The use of χ2 tests shows that DNA can not be described as a low order Markov chain of order up to r =6. Although detailed balance seems to hold at the level of a binary alphabet, it fails when all four base pairs are considered, suggesting spatial asymmetry and irreversibility. Furthermore, the block entropy does not increase linearly with the block size, reflecting the long-range nature of the correlations in the human genomic sequences. To probe locally the spatial structure of the chain, we study the exit distances from a specific symbol, the distribution of recurrence distances, and the Hurst exponent, all of which show power law tails and long-range characteristics. These results suggest that human DNA can be viewed as a nonequilibrium structure maintained in its state through interactions with a constantly changing environment. Based solely on the exit distance distribution accounting for the nonequilibrium statistics and using the Monte Carlo rejection sampling method, we construct a model DNA sequence. This method allows us to keep both long- and short-range statistical characteristics of the native DNA data. The model sequence presents the same characteristic exponents as the natural DNA but fails to capture spatial correlations and point-to-point details.
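    The block-entropy diagnostic used above can be sketched in a few lines: for an i.i.d. sequence, the Shannon entropy of length-k blocks grows roughly linearly in k, while deterministic or long-range-correlated structure makes it grow sublinearly, as the periodic toy sequence below illustrates:

```python
import math
from collections import Counter

def block_entropy(seq, k):
    """Shannon entropy (bits) of overlapping length-k blocks of a symbol sequence."""
    blocks = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(blocks.values())
    return -sum((n / total) * math.log2(n / total) for n in blocks.values())

# Toy illustration: a periodic "ACGT" repeat has only four distinct 3-blocks,
# so its block entropy saturates at 2 bits instead of growing toward 3*2 = 6
# bits as an i.i.d. four-letter sequence would.
periodic = "ACGT" * 500
h1 = block_entropy(periodic, 1)
h3 = block_entropy(periodic, 3)
```

Real genomic sequences sit between these extremes; the paper's point is that their block entropy is measurably sublinear, indicating long-range correlations.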

  10. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
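    The sampling argument above can be illustrated with a toy Monte Carlo power calculation for a single brain-behavior correlation. This is a minimal pure-Python sketch using the Fisher-z approximation to the critical r; the true correlation and sample sizes are illustrative, not taken from the paper:

```python
import math
import random

def correlation_power(r_true, n, sims=2000, seed=7):
    """Monte Carlo power of a two-sided Pearson correlation test at alpha = .05
    (critical |r| approximated via the Fisher z transform)."""
    rng = random.Random(seed)
    r_crit = math.tanh(1.959964 / math.sqrt(n - 3))
    hits = 0
    for _ in range(sims):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        # y correlated with x at the target population correlation r_true
        ys = [r_true * x + math.sqrt(1 - r_true ** 2) * rng.gauss(0, 1) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                        * sum((y - my) ** 2 for y in ys))
        if abs(num / den) >= r_crit:
            hits += 1
    return hits / sims

# A weak diffuse effect (r = 0.2) at a common fMRI sample size versus n = 100:
p25 = correlation_power(0.2, 25)
p100 = correlation_power(0.2, 100)
```

At n = 25 the simulated power for r = 0.2 lands well below 50%, which is the qualitative point of the weak diffuse scenario.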

  11. DNA viewed as an out-of-equilibrium structure.

    PubMed

    Provata, A; Nicolis, C; Nicolis, G

    2014-05-01

    The complexity of the primary structure of human DNA is explored using methods from nonequilibrium statistical mechanics, dynamical systems theory, and information theory. A collection of statistical analyses is performed on the DNA data and the results are compared with sequences derived from different stochastic processes. The use of χ^{2} tests shows that DNA can not be described as a low order Markov chain of order up to r=6. Although detailed balance seems to hold at the level of a binary alphabet, it fails when all four base pairs are considered, suggesting spatial asymmetry and irreversibility. Furthermore, the block entropy does not increase linearly with the block size, reflecting the long-range nature of the correlations in the human genomic sequences. To probe locally the spatial structure of the chain, we study the exit distances from a specific symbol, the distribution of recurrence distances, and the Hurst exponent, all of which show power law tails and long-range characteristics. These results suggest that human DNA can be viewed as a nonequilibrium structure maintained in its state through interactions with a constantly changing environment. Based solely on the exit distance distribution accounting for the nonequilibrium statistics and using the Monte Carlo rejection sampling method, we construct a model DNA sequence. This method allows us to keep both long- and short-range statistical characteristics of the native DNA data. The model sequence presents the same characteristic exponents as the natural DNA but fails to capture spatial correlations and point-to-point details.

  12. [Relationship between finger dermatoglyphics and body size indicators in adulthood among Chinese twin population from Qingdao and Lishui cities].

    PubMed

    Sun, Luanluan; Yu, Canqing; Lyu, Jun; Cao, Weihua; Pang, Zengchang; Chen, Weijian; Wang, Shaojie; Chen, Rongfu; Gao, Wenjing; Li, Liming

    2014-01-01

    To study the correlation between fingerprints and body size indicators in adulthood. Samples were composed of twins from two sub-registries of the Chinese National Twin Registry (CNTR), including 405 twin pairs in Lishui and 427 twin pairs in Qingdao. All participants were asked to complete the field survey, consisting of a questionnaire, physical examination and blood collection. From the 832 twin pairs, those with complete and clear fingerprints were selected as the target population; fingerprint information, demographic characteristics and related adulthood body size indicators of 100 twin pairs were finally chosen for this research. Descriptive statistics and mixed linear models were used for data analyses. In the mixed linear models adjusted for age and sex, the body fat percentage of those who had arches was higher than that of those who did not (P = 0.002), and those who had radial loops had higher body fat percentage than those who did not (P = 0.041). After adjustment for age, there was no statistically significant correlation between radial loops and systolic pressure, but the correlations of arches (P = 0.031) and radial loops (P = 0.022) with diastolic pressure remained statistically significant. Statistically significant correlations were found between fingerprint types and body size indicators, suggesting that fingerprint types may be a useful tool for exploring the effects of the uterine environment on health status in adulthood.

  13. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. 
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
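    The doubly stochastic Poisson idea above can be sketched in a few lines: if the underlying rate itself fluctuates from trial to trial (here gamma-distributed, an assumption chosen for illustration), spike counts become overdispersed relative to a plain Poisson process, i.e. the Fano factor exceeds 1:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's algorithm for a Poisson draw (adequate for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def dsp_counts(mean_rate, rate_cv, trials, seed=3):
    """Doubly stochastic Poisson: draw a fresh gamma-distributed rate per trial,
    then a Poisson spike count given that rate."""
    rng = random.Random(seed)
    shape = 1.0 / rate_cv ** 2
    scale = mean_rate / shape
    return [poisson_sample(rng, rng.gammavariate(shape, scale)) for _ in range(trials)]

counts = dsp_counts(mean_rate=10.0, rate_cv=0.5, trials=4000)
mean = sum(counts) / len(counts)
fano = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1) / mean
```

For this parameterization the count variance is mean + cv^2 * mean^2, so the Fano factor is well above 1; the paper's contrast with the rectified Gaussian model lies in the joint (pairwise) statistics, which this single-neuron sketch does not capture.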

  14. The SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Gelman, Mel; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2003-01-01

    Our current confidence in 'observed' climatological winds and temperatures in the middle atmosphere (over altitudes approx. 10-80 km) is assessed by detailed intercomparisons of contemporary and historic data sets. These data sets include global meteorological analyses and assimilations, climatologies derived from research satellite measurements, and historical reference atmosphere circulation statistics. We also include comparisons with historical rocketsonde wind and temperature data, and with more recent lidar temperature measurements. The comparisons focus on a few basic circulation statistics, such as temperature, zonal wind, and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. Assimilated data sets provide the most realistic tropical variability, but substantial differences exist among current schemes.

  15. Computer program for prediction of fuel consumption statistical data for an upper stage three-axes stabilized on-off control system

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A FORTRAN coded computer program and method to predict the reaction control fuel consumption statistics for a three axis stabilized rocket vehicle upper stage is described. A Monte Carlo approach is used which is more efficient by using closed form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analyses subroutines including the output histograms can be used for other Monte Carlo analyses problems.

  16. Inter-model variability in hydrological extremes projections for Amazonian sub-basins

    NASA Astrophysics Data System (ADS)

    Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier

    2014-05-01

    Irreducible uncertainties arising from the limitations of our knowledge, the chaotic nature of the climate system, and human decision-making processes drive uncertainties in climate change projections. Such uncertainties affect impact studies, especially those concerned with extreme events, and hamper the decision-making processes aimed at mitigation and adaptation. However, these uncertainties also make it possible to develop exploratory analyses of a system's vulnerability to different scenarios. Using projections from several climate models is one way to address these uncertainties, since multiple runs can be used to explore a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches to the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered in extreme value analyses and can have great implications in complex dynamic systems, especially under climate change. It is therefore necessary to allow for nonstationarity in the statistical distribution parameters. We carried out a study of the dispersion in projections of hydrological extremes, using climate change projections from several climate models to drive the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. MHD-INPE is a large-scale hydrological model that uses a TopModel approach to solve runoff-generation processes at the grid-cell scale. The model was calibrated for 1970-1990 using observed meteorological data, comparing observed and simulated discharges with several performance coefficients. Hydrological model integrations were performed for the historical period (1970-1990) and for the future period (2010-2100). Because climate models simulate the variability of the climate system in statistical terms rather than reproducing the historical behaviour of climate variables, the performance of the model runs over the historical period, when driven with climate model data, was assessed using descriptors of the flow duration curves. The analyses of projected extreme values were carried out allowing for nonstationarity in the GEV distribution parameters, and the results were compared with present-day extreme events. The results show that inter-model variability leads to a broad dispersion in projected extreme values. Such dispersion implies different degrees of socio-economic impact associated with extreme hydrological events. Although no single optimum result exists, this variability allows the analysis of adaptation strategies and their potential vulnerabilities.
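
The nonstationary extreme-value analysis described in this abstract can be illustrated with a short sketch: a GEV distribution whose location parameter drifts linearly in time is fitted by maximum likelihood. The data here are synthetic annual maxima, not the MHD-INPE projections, and the linear trend in the location parameter is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# synthetic annual discharge maxima with a linear trend in the GEV
# location parameter (illustrative stand-in for the projected extremes)
rng = np.random.default_rng(0)
t = np.arange(80)
maxima = genextreme.rvs(c=-0.1, loc=100.0 + 0.5 * t, scale=15.0,
                        random_state=rng)

def neg_log_lik(theta):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, sigma, c = theta
    if sigma <= 0:
        return np.inf
    ll = genextreme.logpdf(maxima, c=c, loc=mu0 + mu1 * t, scale=sigma)
    return -ll.sum() if np.all(np.isfinite(ll)) else np.inf

fit = minimize(neg_log_lik, x0=[100.0, 0.0, 10.0, -0.1],
               method="Nelder-Mead")
mu0, mu1, sigma, c = fit.x
print(f"estimated trend in GEV location: {mu1:.2f} per time step")
```

A likelihood-ratio test against the stationary fit (mu1 fixed at zero) would then indicate whether the trend term is warranted.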

  17. Lifetime use of cannabis from longitudinal assessments, cannabinoid receptor (CNR1) variation, and reduced volume of the right anterior cingulate

    PubMed Central

    Hill, Shirley Y.; Sharma, Vinod; Jones, Bobby L.

    2016-01-01

    Lifetime measures of cannabis use and co-occurring exposures were obtained from a longitudinal cohort followed an average of 13 years at the time they received a structural MRI scan. MRI scans were analyzed for 88 participants (mean age=25.9 years), 34 of whom were regular users of cannabis. Whole brain voxel based morphometry analyses (SPM8) were conducted using 50 voxel clusters at p=0.005. Controlling for age, familial risk, and gender, we found reduced volume in Regular Users compared to Non-Users, in the lingual gyrus, anterior cingulum (right and left), and the rolandic operculum (right). The right anterior cingulum reached family-wise error statistical significance at p=0.001, controlling for personal lifetime use of alcohol and cigarettes and any prenatal exposures. CNR1 haplotypes were formed from four CNR1 SNPs (rs806368, rs1049353, rs2023239, and rs6454674) and tested with level of cannabis exposure to assess their interactive effects on the lingual gyrus, cingulum (right and left) and rolandic operculum, regions showing cannabis exposure effects in the SPM8 analyses. These analyses used mixed model analyses (SPSS) to control for multiple potentially confounding variables. Level of cannabis exposure was associated with decreased volume of the right anterior cingulum and showed interaction effects with haplotype variation. PMID:27500453

  18. Identifying and characterizing hepatitis C virus hotspots in Massachusetts: a spatial epidemiological approach.

    PubMed

    Stopka, Thomas J; Goulart, Michael A; Meyers, David J; Hutcheson, Marga; Barton, Kerri; Onofrey, Shauna; Church, Daniel; Donahue, Ashley; Chui, Kenneth K H

    2017-04-20

    Hepatitis C virus (HCV) infections have increased during the past decade, but little is known about geographic clustering patterns. We used a unique analytical approach, combining geographic information systems (GIS), spatial epidemiology, and statistical modeling, to identify and characterize HCV hotspots: statistically significant clusters of census tracts with elevated HCV counts and rates. We compiled sociodemographic and HCV surveillance data (n = 99,780 cases) for Massachusetts census tracts (n = 1464) from 2002 to 2013. We used a five-step spatial epidemiological approach, calculating incremental spatial autocorrelations and Getis-Ord Gi* statistics to identify clusters. We conducted logistic regression analyses to determine factors associated with the HCV hotspots. We identified nine HCV clusters, with the largest in Boston, New Bedford/Fall River, Worcester, and Springfield (p < 0.05). In multivariable analyses, we found that HCV hotspots were independently and positively associated with the percent of the population that was Hispanic (adjusted odds ratio [AOR]: 1.07; 95% confidence interval [CI]: 1.04, 1.09) and the percent of households receiving food stamps (AOR: 1.83; 95% CI: 1.22, 2.74). HCV hotspots were independently and negatively associated with the percent of the population that were high school graduates or higher (AOR: 0.91; 95% CI: 0.89, 0.93) and the percent of the population in the "other" race/ethnicity category (AOR: 0.88; 95% CI: 0.85, 0.91). We identified locations where HCV clusters were a concern, and where enhanced HCV prevention, treatment, and care can help combat the HCV epidemic in Massachusetts. GIS, spatial epidemiological and statistical analyses provided a rigorous approach to identify hotspot clusters of disease, which can inform public health policy and intervention targeting. Further studies that incorporate spatiotemporal cluster analyses, Bayesian spatial and geostatistical models, spatially weighted regression analyses, and assessment of associations between HCV clustering and the built environment are needed to expand upon our combined spatial epidemiological and statistical methods.
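
The Getis-Ord Gi* statistic used in the hotspot step can be computed directly from a spatial weights matrix. A minimal sketch on hypothetical tract counts (not the Massachusetts surveillance data), using binary contiguity weights that include the focal tract itself, as Gi* requires:

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Gi* z-scores for values x under binary spatial weights w
    (w includes the focal unit itself, as Gi* requires)."""
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    wi = w.sum(axis=1)
    num = w @ x - xbar * wi
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den

# toy example: 5 tracts in a row, elevated counts clustered at the left end
counts = np.array([90.0, 85.0, 10.0, 12.0, 8.0])
w = np.eye(5)
for i in range(4):          # contiguity along the line, plus self
    w[i, i + 1] = w[i + 1, i] = 1.0

z = getis_ord_gi_star(counts, w)
print(np.round(z, 2))       # positive z -> local cluster of high counts
```

Tracts with z-scores above the chosen critical value form the hotspot; the incremental spatial autocorrelation step mentioned in the abstract selects the neighbourhood distance before Gi* is applied.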

  19. Discriminatory power of water polo game-related statistics at the 2008 Olympic Games.

    PubMed

    Escalante, Yolanda; Saavedra, Jose M; Mansilla, Mirella; Tella, Victor

    2011-02-01

    The aims of this study were (1) to compare water polo game-related statistics by context (winning and losing teams) and sex (men and women), and (2) to identify characteristics discriminating the performances for each sex. The game-related statistics of the 64 matches (44 men's and 20 women's) played in the final phase of the Olympic Games held in Beijing in 2008 were analysed. Unpaired t-tests compared winners and losers and men and women, and confidence intervals and effect sizes of the differences were calculated. The results were subjected to a discriminant analysis to identify the differentiating game-related statistics of the winning and losing teams. The results showed the differences between winning and losing men's teams to be in both defence and offence, whereas in women's teams they were only in offence. In men's games, passing (assists), aggressive play (exclusions), centre position effectiveness (centre shots), and goalkeeper defence (goalkeeper-blocked 5-m shots) predominated, whereas in women's games the play was more dynamic (possessions). The variable that most discriminated performance in men was goalkeeper-blocked shots, and in women shooting effectiveness (shots). These results should help coaches when planning training and competition.
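
A discriminant analysis of the kind reported here can be sketched with a two-group Fisher discriminant, in which the size of each standardized coefficient indicates how strongly a game statistic separates winners from losers. The per-match figures below are invented stand-ins, not the Beijing 2008 data:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical per-match stats: [shots, goalkeeper-blocked shots, assists]
winners = rng.normal([12.0, 9.0, 6.0], 1.5, size=(30, 3))
losers  = rng.normal([ 9.0, 6.0, 5.5], 1.5, size=(30, 3))

m1, m0 = winners.mean(0), losers.mean(0)
sw = np.cov(winners, rowvar=False) + np.cov(losers, rowvar=False)  # pooled scatter
w = np.linalg.solve(sw, m1 - m0)          # Fisher discriminant direction

# larger |coefficient| -> the variable discriminates the two groups more
for name, coef in zip(["shots", "gk_blocked", "assists"], w):
    print(f"{name:>10s}: {coef:+.3f}")
```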

  20. Time-to-event methodology improved statistical evaluation in register-based health services research.

    PubMed

    Bluhmki, Tobias; Bramlage, Peter; Volk, Michael; Kaltheuner, Matthias; Danne, Thomas; Rathmann, Wolfgang; Beyersmann, Jan

    2017-02-01

    Complex longitudinal sampling and the observational structure of patient registers in health services research are associated with methodological challenges regarding data management and statistical evaluation. We exemplify common pitfalls and want to stimulate discussion on the design, development, and deployment of future longitudinal patient registers and register-based studies. For illustrative purposes, we use data from the prospective, observational German DIabetes Versorgungs-Evaluation register. One aim was to explore predictors of the initiation of a basal insulin supported therapy in patients with type 2 diabetes initially prescribed glucose-lowering drugs alone. Major challenges are missing mortality information, time-dependent outcomes, delayed study entries, different follow-up times, and competing events. We show that time-to-event methodology is a valuable tool for improved statistical evaluation of register data and should be preferred to simple case-control approaches. Patient registers provide rich data sources for health services research, but analyses involve a trade-off among data availability, clinical plausibility, and statistical feasibility. Cox's proportional hazards model allows for the evaluation of the outcome-specific hazards, but prediction of outcome probabilities is compromised by missing mortality information. Copyright © 2016 Elsevier Inc. All rights reserved.
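
One building block of the time-to-event methodology the authors advocate is the Kaplan-Meier estimator, which, unlike a simple case-control tabulation, accounts for the censoring produced by unequal follow-up times. A minimal sketch on synthetic register-like data (the event times and censoring scheme are assumptions, not the DIVE register):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve; event=1 observed, 0 censored.
    Assumes untied (continuous) event times."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time) - np.arange(len(time))
    surv = np.cumprod(1.0 - event / at_risk)
    return time, surv

rng = np.random.default_rng(7)
t_true = rng.exponential(8.0, 200)      # true time to therapy initiation (years)
t_cens = rng.uniform(0.0, 10.0, 200)    # administrative censoring
time = np.minimum(t_true, t_cens)
event = (t_true <= t_cens).astype(float)

t, s = kaplan_meier(time, event)
naive = 1.0 - event.mean()              # ignores censoring entirely
print(f"KM event-free at 5y: {s[t <= 5.0][-1]:.2f}")
print(f"naive event-free share: {naive:.2f}")
```

The naive share treats every censored patient as event-free forever, which is exactly the kind of pitfall the abstract warns against; extensions with delayed entry and competing events follow the same counting logic.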

  1. Cognitive predictors of balance in Parkinson's disease.

    PubMed

    Fernandes, Ângela; Mendes, Andreia; Rocha, Nuno; Tavares, João Manuel R S

    2016-06-01

    Postural instability is one of the most incapacitating symptoms of Parkinson's disease (PD) and appears to be related to cognitive deficits. This study aims to determine the cognitive factors that can predict deficits in static and dynamic balance in individuals with PD. A sociodemographic questionnaire was used to characterize the 52 individuals with PD enrolled in this work. The Trail Making Test, Rule Shift Cards Test, and Digit Span Test assessed the executive functions. Static balance was assessed using a plantar pressure platform, and dynamic balance was based on the Timed Up and Go Test. The results were statistically analysed using SPSS Statistics software through linear regression analysis. The results show that a statistically significant model based on cognitive outcomes was able to explain the variance of the motor variables. The explanatory value of the model tended to increase with the addition of individual and clinical variables, although the resulting model was not statistically significant. The model explained 25-29% of the variability of the Timed Up and Go Test, while for the anteroposterior displacement it was 23-34%, and for the mediolateral displacement it was 24-39%. From these findings, we conclude that cognitive performance, especially the executive functions, is a predictor of balance deficits in individuals with PD.
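
The regression step, predicting a balance measure from cognitive scores and reporting the share of variance explained, can be sketched with ordinary least squares. The predictor and outcome values below are simulated stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 52
# hypothetical standardized scores standing in for the executive-function tests
trail_making = rng.normal(0, 1, n)
digit_span = rng.normal(0, 1, n)
# synthetic Timed Up and Go times depending on both predictors plus noise
tug = 10 + 1.2 * trail_making - 0.8 * digit_span + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), trail_making, digit_span])
beta, *_ = np.linalg.lstsq(X, tug, rcond=None)
resid = tug - X @ beta
r2 = 1 - resid.var() / tug.var()
print(f"R^2 = {r2:.2f}")   # share of TUG variance explained by cognition
```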

  2. Variation in reaction norms: Statistical considerations and biological interpretation.

    PubMed

    Morrissey, Michael B; Liefting, Maartje

    2016-09-01

    Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  3. Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.

    2012-10-01

    In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
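
Bayesian model averaging, one of the "untapped" methods named above, is often approximated with BIC-based posterior model weights. A minimal sketch; the BIC values below are hypothetical scores for three competing source-identification models, not results from any real spectroscopy algorithm:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (equal prior model probabilities assumed)."""
    d = np.asarray(bics, dtype=float) - np.min(bics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# hypothetical BICs for three candidate models of one measured spectrum
weights = bma_weights([1012.3, 1010.1, 1025.8])
print(np.round(weights, 3))
```

Averaging source-identity probabilities over models with these weights, rather than committing to the single best model, is what propagates model uncertainty into the final decision.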

  4. [Correlation of dental age and anthropometric parametres of the overall growth and development in children].

    PubMed

    Triković-Janjić, Olivera; Apostolović, Mirjana; Janosević, Mirjana; Filipović, Gordana

    2008-02-01

    Anthropometric methods of measuring the whole body and body parts are the most commonly applied methods of analysing the growth and development of children. Anthropometric measures are interconnected, so that with growth and development the change of one parameter causes a change in the others. The aim of the paper was to analyse whether dental development follows overall growth and development and what the ratio of this interdependence is. The research involved a sample of 134 participants, aged between 6 and 8 years. Dental age was determined as the average of the sum of existing permanent teeth in the participants aged 6, 7 and 8. With the aim of analysing physical growth and development, commonly accepted anthropometric indexes were applied: height, weight, circumference of the head, the chest cavity at its widest point, the upper arm, the abdomen, the thigh, and thickness of the epidermis. The dimensions were measured according to the methodology of the International Biological Programme. The influence of the pertinent variables on the analysed variable was determined by the statistical method of multivariable regression. The mean values of all the anthropometric parameters, except for the thickness of the epidermis, were slightly higher in male participants, and the circumference of the chest cavity was significantly larger (p < 0.05). The results of anthropometric measurement showed in general a distinct homogeneity, not only of the sample group but also within gender, in relation to all the dimensions except for the thickness of the epidermis. The average dental age of the participants was 10.36 (10.42 and 10.31 for females and males, respectively). A considerable correlation (R = 0.59) with high statistical significance (p < 0.001) was determined between dental age and the set of anthropometric parameters of general growth and development. There is a considerable positive correlation (R = 0.59) between dental age and the anthropometric parameters of general growth and development, which confirms that dental development follows the overall growth and development of children aged between 6 and 8 years.

  5. Trends of mortality from Alzheimer's disease in the European Union, 1994-2013.

    PubMed

    Niu, H; Alvarez-Alvarez, I; Guillen-Grima, F; Al-Rahamneh, M J; Aguinaga-Ontoso, I

    2017-06-01

    In many countries, Alzheimer's disease (AD) has gradually become a common disease in elderly populations. The aim of this study was to analyse trends of mortality caused by AD in the 28 member countries in the European Union (EU) over the last two decades. We extracted data for AD deaths for the period 1994-2013 in the EU from the Eurostat and World Health Organization database. Age-standardized mortality rates per 100 000 were computed. Joinpoint regression was used to analyse the trends and compute the annual percent change in the EU as a whole and by country. Analyses by gender and by European regions were conducted. Mortality from AD has risen in the EU throughout the study period. Most of the countries showed upward trends, with the sharpest increases in Slovakia, Lithuania and Romania. We recorded statistically significant increases of 4.7% and 6.0% in mortality rates in men and women, respectively, in the whole EU. Several countries showed changing trends during the study period. According to the regional analysis, northern and eastern countries showed the steepest increases, whereas in the latter years mortality has declined in western countries. Our findings provide evidence that AD mortality has increased in the EU, especially in eastern and northern European countries and in the female population. Our results could be a reference for the development of primary prevention policies. © 2017 EAN.
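
Within a single segment, the annual percent change reported by joinpoint regression comes from a log-linear fit of rate on calendar year: APC = 100·(e^slope − 1). A sketch on synthetic mortality rates; the 5% yearly growth built into the data is illustrative only, not the EU estimate:

```python
import numpy as np

# hypothetical age-standardized AD mortality rates per 100 000, 1994-2013,
# generated with a 5% yearly increase plus multiplicative noise
years = np.arange(1994, 2014)
rng = np.random.default_rng(11)
rates = 4.0 * 1.05 ** (years - 1994) * rng.lognormal(0.0, 0.02, years.size)

# annual percent change from a log-linear fit, as in joinpoint regression
slope, _ = np.polyfit(years, np.log(rates), 1)
apc = 100.0 * (np.exp(slope) - 1.0)
print(f"APC = {apc:.1f}% per year")
```

Full joinpoint regression additionally searches for the calendar years at which the slope changes, which is how the "changing trends" in several countries would be detected.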

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Sandra D.; Liu, Jia; Arey, Bruce W.

    The distribution of iron resulting from the autocatalytic interaction of aqueous Fe(II) with the hematite (001) surface was directly mapped in three dimensions (3D) for the first time, using iron isotopic labelling and atom probe tomography (APT). Analyses of the mass spectrum showed that natural abundance ratios in 56Fe-dominant hematite are recovered at depth with good accuracy, whereas at the relict interface with 57Fe(II) solution evidence for hematite growth by oxidative adsorption of Fe(II) was found. 3D reconstructions of the isotope positions along the surface normal direction showed a zone enriched in 57Fe, which was consistent with an average net adsorption of 3.2 – 4.3 57Fe atoms nm–2. Statistical analyses utilizing grid-based frequency distribution analyses show a heterogeneous, non-random distribution of oxidized Fe on the (001) surface, consistent with Volmer-Weber-like island growth. The unique 3D nature of the APT data provides an unprecedented means to quantify the atomic-scale distribution of sorbed 57Fe atoms and the extent of segregation on the hematite surface. This new ability to spatially map growth on single crystal faces at the atomic scale will enable resolution to long-standing unanswered questions about the underlying mechanisms for electron and atom exchange involved in a wide variety of redox-catalyzed processes at this archetypal and broadly relevant interface.

  7. Prognostic Significance of Blood Transfusion in Newly Diagnosed Multiple Myeloma Patients without Autologous Hematopoietic Stem Cell Transplantation

    PubMed Central

    Fan, Liping; Fu, Danhui; Zhang, Jinping; Wang, Qingqing; Ye, Yamei; Xie, Qianling

    2017-01-01

    The aim of this study was to evaluate whether blood transfusions affect overall survival (OS) and progression-free survival (PFS) in newly diagnosed multiple myeloma (MM) patients without hematopoietic stem cell transplantation. A total of 181 patients were enrolled and divided into two groups: 68 patients in the transfused group and 113 patients in the nontransfused group. Statistical analyses showed that there were significant differences in ECOG scoring, Ig isotype, platelet (Plt) counts, hemoglobin (Hb) level, serum creatinine (Scr) level, and β2-microglobulin (β2-MG) level between the two groups. Univariate analyses showed that higher International Staging System staging, Plt counts < 100 × 109/L, Scr level ≥ 177 μmol/L, serum β2-MG ≥ 5.5 μmol/L, serum calcium (Ca) ≥ 2.75 mmol/L, and thalidomide use were associated with both OS and PFS in MM patients. Age ≥ 60 was associated with OS and Ig isotype was associated with PFS in MM patients. Moreover, blood transfusion was associated with PFS but not OS in MM patients. Multivariate analyses showed that blood transfusion was not an independent factor for PFS in MM patients. Our preliminary results suggested that newly diagnosed MM patients may benefit from a liberal blood transfusion strategy, since blood transfusion is not an independent impact factor for survival. PMID:28567420

  8. Robust hierarchical state-space models reveal diel variation in travel rates of migrating leatherback turtles.

    PubMed

    Jonsen, Ian D; Myers, Ransom A; James, Michael C

    2006-09-01

    1. Biological and statistical complexity are features common to most ecological data that hinder our ability to extract meaningful patterns using conventional tools. Recent work on implementing modern statistical methods for analysis of such ecological data has focused primarily on population dynamics but other types of data, such as animal movement pathways obtained from satellite telemetry, can also benefit from the application of modern statistical tools. 2. We develop a robust hierarchical state-space approach for analysis of multiple satellite telemetry pathways obtained via the Argos system. State-space models are time-series methods that allow unobserved states and biological parameters to be estimated from data observed with error. We show that the approach can reveal important patterns in complex, noisy data where conventional methods cannot. 3. Using the largest Atlantic satellite telemetry data set for critically endangered leatherback turtles, we show that the diel pattern in travel rates of these turtles changes over different phases of their migratory cycle. While foraging in northern waters the turtles show similar travel rates during day and night, but on their southward migration to tropical waters travel rates are markedly faster during the day. These patterns are generally consistent with diving data, and may be related to changes in foraging behaviour. Interestingly, individuals that migrate southward to breed generally show higher daytime travel rates than individuals that migrate southward in a non-breeding year. 4. Our approach is extremely flexible and can be applied to many ecological analyses that use complex, sequential data.
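
A state-space model of the kind used here separates the unobserved movement path from Argos observation error. The authors' model is hierarchical and Bayesian; a deliberately simplified one-dimensional Kalman filter on synthetic fixes illustrates the core idea of estimating hidden states from noisy telemetry:

```python
import numpy as np

def kalman_filter_track(obs, q=0.5, r=4.0):
    """Filtered positions for a random-walk state observed with error.
    q: process variance (animal movement), r: observation variance
    (an Argos-like error, assumed known here)."""
    x, p = obs[0], r
    out = [x]
    for z in obs[1:]:
        p = p + q                      # predict one step ahead
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new fix
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true = np.cumsum(rng.normal(0, 0.7, 120))   # true 1-D path (random walk)
obs = true + rng.normal(0, 2.0, 120)        # noisy satellite fixes
est = kalman_filter_track(obs)

rmse = lambda a: float(np.sqrt(((a - true) ** 2).mean()))
print(f"raw-fix RMSE:  {rmse(obs):.2f}")
print(f"filtered RMSE: {rmse(est):.2f}")
```

Travel rates such as the day/night contrasts in the abstract are then computed from the estimated states rather than from the raw, error-laden fixes.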

  9. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the July 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, Tau; Holley, D.

    2010-01-01

    The following report gives the statistical findings of the July 2010 TMSL Bar results. Procedures: Data are pre-existing and were given to the Evaluator by email from the Registrar and Dean. Statistical analyses were run using SPSS 17 to address the following research questions: 1. What are the statistical descriptors of the July 2010 overall TMSL…

  10. Swimming Motility Reduces Deposition to Silica Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Nanxi; Massoudieh, Arash; Liang, Xiaomeng

    The role of swimming motility on bacterial transport and fate in porous media was evaluated. We present microscopic evidence showing that strong swimming motility reduces attachment of Azotobacter vinelandii cells to silica surfaces. Applying global and cluster statistical analyses to microscopic videos taken under non-flow conditions, wild type, flagellated A. vinelandii strain DJ showed strong swimming ability with an average speed of 13.1 μm/s, DJ77 showed impaired swimming averaged at 8.7 μm/s, and both the non-flagellated JZ52 and chemically treated DJ cells were non-motile. Quantitative analyses of trajectories observed at different distances above the collector of a radial stagnation point flow cell (RSPF) revealed that both swimming and non-swimming cells moved with the flow when at a distance of at least 20 μm from the collector surface. Near the surface, DJ cells showed both horizontal and vertical movement diverging them from reaching surfaces, while chemically treated DJ cells moved with the flow to reach surfaces, suggesting that strong swimming reduced attachment. In agreement with the RSPF results, the deposition rates obtained for two-dimensional multiple-collector micromodels were also lowest for DJ, while DJ77 and JZ52 showed similar values. Strong swimming specifically reduced deposition on the upstream surfaces of the micromodel collectors.

  11. A robust and efficient statistical method for genetic association studies using case and control samples from multiple cohorts

    PubMed Central

    2013-01-01

    Background The theoretical basis of genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between any polymorphic marker and a putative disease locus. Most methods widely implemented for such analyses are vulnerable to several key demographic factors, deliver poor statistical power for detecting genuine associations, and suffer a high false positive rate. Here, we present a likelihood-based statistical approach that accounts properly for the non-random nature of case–control samples with regard to the genotypic distribution at the loci in the populations under study, and that confers flexibility to test for genetic association in the presence of different confounding factors such as population structure and non-randomness of samples. Results We implemented this novel method, together with several popular methods in the GWAS literature, to re-analyze recently published Parkinson's disease (PD) case–control samples. The real data analysis and computer simulation show that the new method confers not only significantly improved statistical power for detecting associations but also robustness to the difficulties stemming from non-random sampling and genetic structure when compared to its rivals. In particular, the new method detected 44 significant SNPs within 25 chromosomal regions of size < 1 Mb, but only 6 SNPs in two of these regions were previously detected by trend-test-based methods. It discovered two SNPs located 1.18 Mb and 0.18 Mb from the PD candidates FGF20 and PARK8 without incurring false positive risk. Conclusions We developed a novel likelihood-based method which provides adequate estimation of LD and other population model parameters from case and control samples, eases the integration of samples from multiple genetically divergent populations, and thus confers statistically robust and powerful GWAS analyses. On the basis of simulation studies and analysis of real datasets, we demonstrated significant improvement of the new method over the non-parametric trend test, the most widely used method in the GWAS literature. PMID:23394771
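
The likelihood-based testing described here can be illustrated in its simplest textbook form: a likelihood-ratio test of equal allele frequencies in cases and controls. This is a simplification of the paper's model (which additionally estimates LD and population parameters), and the allele counts below are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def lrt_case_control(case_alt, case_n, ctrl_alt, ctrl_n):
    """Likelihood-ratio test of equal allele frequency in cases vs controls
    (binomial likelihoods, 1 df)."""
    def ll(k, n, p):
        return k * np.log(p) + (n - k) * np.log(1.0 - p)
    p_case = case_alt / case_n
    p_ctrl = ctrl_alt / ctrl_n
    p_pool = (case_alt + ctrl_alt) / (case_n + ctrl_n)
    stat = 2.0 * (ll(case_alt, case_n, p_case) + ll(ctrl_alt, ctrl_n, p_ctrl)
                  - ll(case_alt, case_n, p_pool) - ll(ctrl_alt, ctrl_n, p_pool))
    return stat, chi2.sf(stat, df=1)

# hypothetical counts at one SNP: alt alleles / total alleles genotyped
stat, p = lrt_case_control(320, 1000, 250, 1000)
print(f"LRT = {stat:.2f}, p = {p:.4f}")
```

Repeating this per SNP and correcting for multiple testing gives a baseline against which the paper's structure-aware likelihood can be compared.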

  12. The validation and application of the Chinese version of perceived nursing work environment scale.

    PubMed

    Zhao, Peng; Chen, Fen Ju; Jia, Xiao Hui; Lv, Hui; Cheng, Piao Piao; Zhang, Li Ping

    2013-07-01

    To improve and validate the Chinese version of the Perceived Nursing Work Environment (C-PNWE) scale through examination and application, and to explore nurses' perceptions of their working environment in a hospital. The C-PNWE scale was translated and revised from the PNWE scale. One limitation is that the development of the C-PNWE overlooked the fact that the psychometric properties of the PNWE instrument were established with critical care nurses, and that further application and testing of the PNWE in various patient care settings had been recommended. This is a cross-sectional design. Nurses from different departments of a hospital were sampled by convenience sampling and surveyed with a self-administered questionnaire. Data obtained through the questionnaires were analysed by descriptive statistical analyses and profile analyses using the Statistical Package for the Social Sciences (SPSS) Chinese version 17.0 software. The coincident and level profile analyses indicated that these groups can be merged into one group, and the profile of the measurement results of this merged group would not exhibit flatness. Among the six dimensions of the C-PNWE scale, Staffing and Resource Adequacy received the lowest average score. Among the 41 items, 'Opportunity for staff nurses to participate in policy decisions' received the lowest mean. The C-PNWE scale shows good psychometric properties and can be used to explore nurses' perspectives on the nursing practice environment in China. The situation of nurses' perceived working environment in China needs further study. Shaping nursing practice environments to promote desired outcomes requires valid and reliable measures to assess practice environments prior to, during, and following efforts to implement change. The C-PNWE scale can be a useful measurement tool for administrators seeking to improve the nursing work environment in China. © 2013 John Wiley & Sons Ltd.

  13. Investigation of serum biomarkers in primary gout patients using iTRAQ-based screening.

    PubMed

    Ying, Ying; Chen, Yong; Zhang, Shun; Huang, Haiyan; Zou, Rouxin; Li, Xiaoke; Chu, Zanbo; Huang, Xianqian; Peng, Yong; Gan, Minzhi; Geng, Baoqing; Zhu, Mengya; Ying, Yinyan; Huang, Zuoan

    2018-03-21

    Primary gout is a major disease that affects human health; however, its pathogenesis is not well known. The purpose of this study was to identify biomarkers to explore the underlying mechanisms of primary gout. We used the isobaric tags for relative and absolute quantitation (iTRAQ) technique combined with liquid chromatography-tandem mass spectrometry to screen differentially expressed proteins between gout patients and controls. We also identified proteins potentially involved in gout pathogenesis by analysing biological processes, cellular components, molecular functions, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and protein-protein interactions. We further verified some samples using enzyme-linked immunosorbent assay (ELISA). Statistical analyses were carried out using SPSS v. 20.0 and ROC (receiver operating characteristic) curve analyses were carried out using MedCalc software. Two-sided p-values <0.05 were deemed to be statistically significant for all analyses. We identified 95 differentially expressed proteins (50 up-regulated and 45 down-regulated), and selected nine proteins (α-enolase (ENOA), glyceraldehyde-3-phosphate dehydrogenase (G3P), complement component C9 (CO9), profilin-1 (PROF1), lipopolysaccharide-binding protein (LBP), tubulin beta-4A chain (TBB4A), phosphoglycerate kinase (PGK1), glucose-6-phosphate isomerase (G6PI), and transketolase (TKT)) for verification. This showed that the level of TBB4A was significantly higher in primary gout than in controls (p=0.023). iTRAQ technology was useful in the selection of differentially expressed proteins from proteomes, and provides a strong theoretical basis for the study of biomarkers and mechanisms in primary gout. In addition, TBB4A protein may be associated with primary gout.

  14. Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.

    PubMed

    Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna

    2018-03-01

    A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.
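
Comparing the constant, linear, and linear-quadratic dose-response shapes named in the abstract can be sketched with an information criterion. The radon and response values below are synthetic, with a linear trend deliberately built in purely so the comparison has something to find; they do not reflect the meta-analysis data:

```python
import numpy as np

def aic_poly(x, y, degree):
    """AIC of a polynomial dose-response model with Gaussian errors
    (degree 0 = the constant, zero-health-effect model)."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    n = len(y)
    k = degree + 2                    # polynomial coefficients + error variance
    return n * np.log((resid ** 2).mean()) + 2 * k

rng = np.random.default_rng(9)
radon = rng.uniform(20, 1000, 60)                 # Bq/m^3
# synthetic Relative Health Factors with an artificial linear trend
rhf = 1.0 + 4e-4 * radon + rng.normal(0, 0.1, 60)

for deg, name in [(0, "constant"), (1, "linear"), (2, "linear-quadratic")]:
    print(f"{name:>16s}: AIC = {aic_poly(radon, rhf, deg):.1f}")
```

On the real data the abstract reports, the constant model would be expected to remain competitive below ~1000 Bq/m3, which is exactly what a comparison like this is designed to reveal.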

  15. Assessing the effects of habitat patches ensuring propagule supply and different costs inclusion in marine spatial planning through multivariate analyses.

    PubMed

    Appolloni, L; Sandulli, R; Vetrano, G; Russo, G F

    2018-05-15

    Marine Protected Areas are considered key tools for the conservation of coastal ecosystems. However, many reserves are characterized by several problems, mainly related to inadequate zonings that often fail to protect areas of high biodiversity and propagule supply while precluding, at the same time, zones of economic importance for local interests. The Gulf of Naples is employed here as a study area to assess the effects of including different conservation features and costs in the reserve design process. In particular, eight scenarios are developed using graph theory to identify propagule source patches, with fishing and exploitation activities treated as costs-in-use for the local population. Scenarios elaborated by MARXAN, software commonly used for marine conservation planning, are compared using multivariate analyses (MDS, PERMANOVA and PERMDISP) in order to assess which input data have the greatest effects on the selection of protected areas. MARXAN is heuristic software able to produce a number of different valid solutions, all of them close to the best solution. Its outputs show that the most important areas to be protected, in order to ensure long-term habitat persistence and adequate propagule supply, are mainly located around the Gulf islands. In addition, the statistical analyses showed that different choices of conservation features lead to statistically different scenarios. The presence of propagule supply patches forces MARXAN to select almost the same areas to protect, decreasing the differences among MARXAN results and, thus, among choices for reserve area selection. The multivariate analyses applied here to marine spatial planning proved very helpful, allowing us to identify i) how different scenario input data affect MARXAN and ii) which features have to be taken into account in study areas characterized by particular biological and economic interests. Copyright © 2018 Elsevier Ltd. All rights reserved.
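    Comparisons of this kind rest on a between-scenario distance matrix that ordination and permutation tests (MDS, PERMANOVA, PERMDISP) then operate on. A minimal sketch, with invented binary planning-unit selections and Jaccard distance standing in for whatever dissimilarity measure the authors actually used.

    ```python
    def jaccard_distance(a, b):
        """1 - |intersection| / |union| for two binary selection vectors."""
        union = sum(1 for x, y in zip(a, b) if x or y)
        inter = sum(1 for x, y in zip(a, b) if x and y)
        return 1.0 - (inter / union if union else 1.0)

    # Hypothetical selections over eight planning units for three scenarios
    # (1 = unit selected for protection, 0 = not selected).
    scenarios = {
        "no_costs":      [1, 1, 0, 1, 0, 1, 0, 0],
        "fishing_costs": [1, 1, 0, 0, 0, 1, 1, 0],
        "propagule":     [1, 1, 1, 1, 0, 1, 0, 0],
    }
    names = list(scenarios)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = jaccard_distance(scenarios[a], scenarios[b])
            print(f"{a} vs {b}: {d:.2f}")
    ```

    A PERMANOVA-style test then asks whether distances between scenario groups exceed distances within them more often than expected under random relabelling.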

  16. Measurement issues in research on social support and health.

    PubMed Central

    Dean, K; Holst, E; Kreiner, S; Schoenborn, C; Wilson, R

    1994-01-01

    STUDY OBJECTIVE--The aims were: (1) to identify methodological problems that may explain the inconsistencies and contradictions in the research evidence on social support and health, and (2) to validate a frequently used measure of social support in order to determine whether or not it could be used in multivariate analyses of population data in research on social support and health. DESIGN AND METHODS--Secondary analysis of data collected in a cross sectional survey of a multistage cluster sample of the population of the United States, designed to study relationships in behavioural, social support and health variables. Statistical models based on item response theory and graph theory were used to validate the measure of social support to be used in subsequent analyses. PARTICIPANTS--Data on 1755 men and women aged 20 to 64 years were available for the scale validation. RESULTS--Massive evidence of item bias was found for all items of a group membership subscale. The most serious problems were found in relationship to an item measuring membership in work related groups. Using that item in the social network scale in multivariate analyses would distort findings on the statistical effects of education, employment status, and household income. Evidence of item bias was also found for a sociability subscale. When marital status was included to create what is called an intimate contacts subscale, the confounding grew worse. CONCLUSIONS--The composite measure of social network is not valid and would seriously distort the findings of analyses attempting to study relationships between the index and other variables. The findings show that valid measurement is a methodological issue that must be addressed in scientific research on population health. PMID:8189179

  17. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  18. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  19. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
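    The Monte Carlo setup described in both records can be sketched in a few lines: draw each motor's perturbation variables independently, compute the pair's thrust difference, and repeat for 1000 pairs to build an imbalance envelope. The three variables, their standard deviations, and the nominal thrust below are placeholders, not the 33/38 variables used in the actual analyses.

    ```python
    import random

    random.seed(42)

    # Hypothetical perturbation model: each motor's thrust is the nominal value
    # times a product of small independent factors. The names and 1-sigma
    # values are invented stand-ins for the real statistical variables.
    VARIABLES = [("burn_rate", 0.01), ("throat_area", 0.005), ("grain_temp", 0.008)]
    NOMINAL_THRUST = 16_000_000.0  # N, illustrative booster-class thrust

    def motor_thrust():
        factor = 1.0
        for _name, sigma in VARIABLES:
            factor *= 1.0 + random.gauss(0.0, sigma)
        return NOMINAL_THRUST * factor

    # Imbalance envelope over 1000 motor pairs: sample two independent motors,
    # record the absolute thrust difference, then read off a high percentile.
    imbalances = [abs(motor_thrust() - motor_thrust()) for _ in range(1000)]
    imbalances.sort()
    p99 = imbalances[int(0.99 * len(imbalances)) - 1]
    print(f"99th-percentile imbalance: {p99:,.0f} N")
    ```

    The real analyses also correlated some variables within a pair (motors cast from the same propellant lot, for example), which this independent-draw sketch deliberately omits.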

  20. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
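    Steps (1) and (2) of the pattern-detection sequence can be sketched directly: the linear (Pearson) correlation and the rank (Spearman) correlation computed on the same scatterplot, where a monotonic but non-linear relation separates the two statistics. Tie handling in the ranking is omitted for brevity.

    ```python
    import math

    def pearson(xs, ys):
        """Sample linear correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def ranks(vs):
        """Rank of each value, 1-based; no tie handling in this sketch."""
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    def spearman(xs, ys):
        """Rank correlation: Pearson correlation of the ranks."""
        return pearson(ranks(xs), ranks(ys))

    # A monotonic but non-linear relation: the rank correlation detects it
    # perfectly, while the linear correlation coefficient understates it.
    xs = list(range(1, 21))
    ys = [x ** 3 for x in xs]
    print(round(pearson(xs, ys), 3), round(spearman(xs, ys), 3))
    ```

    This is the sense in which the procedures detect "increasingly complex" patterns: a relationship that step (1) misses can still be flagged by step (2), and so on down the list.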
