Sample records for additional statistical analyses

  1. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi-simultaneously applied [1]. The used three... 2.2. Vibration Diagnostics (VD) Parallel to the NDT measurements, the Statistical Energy Analysis (SEA) was applied as a vibration diagnostic tool... noises were analysed with a dual-channel real-time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  2. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  3. SEER Cancer Query Systems (CanQues)

    Cancer.gov

    These applications provide access to cancer statistics including incidence, mortality, survival, prevalence, and probability of developing or dying from cancer. Users can display reports of the statistics or extract them for additional analyses.

  4. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  5. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named the result "EZR (Easy R)", which is now distributed via the following website: http://www.jichi.ac.jp/saitama-sct/. EZR provides point-and-click access to statistical functions that are frequently used in clinical studies, such as survival analyses, including competing-risk analyses and analyses with time-dependent covariates. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  6. Implementation of Head Start Planned Variation: 1970-1971. Part II.

    ERIC Educational Resources Information Center

    Lukas, Carol Van Deusen; Wohlleb, Cynthia

    This volume of appendices is Part II of a study of program implementation in 12 models of Head Start Planned Variation. It presents details of the data analysis, copies of data collection instruments, and additional analyses and statistics. The appendices are: (A) Analysis of Variance Designs, (B) Copies of Instruments, (C) Additional Analyses,…

  7. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  8. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk.

    PubMed

    Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W

    2015-10-01

    Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis, and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the example data considered, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Notably, for some previously inconclusive meta-analyses, a study update might yield a statistically significant increase in kidney injury risk associated with higher statin exposure. The illustrated contour approach should become a standard tool for the assessment of the robustness of meta-analyses, and it can guide decisions on whether to conduct additional studies addressing a relevant research question.
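    The augmentation idea in this abstract, i.e. asking how an additional study could move the pooled estimate, can be sketched with plain fixed-effect (inverse-variance) pooling. The effect sizes, variances, and the grid of hypothetical new-study results below are invented for illustration; the paper also works with contour plots over test and heterogeneity statistics, which this sketch omits.

```python
def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect estimate."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical log relative risks and variances from an existing meta-analysis
effects = [0.30, 0.25, 0.40]
variances = [0.04, 0.09, 0.02]

current = pooled_estimate(effects, variances)

# Scan a grid of possible results for one additional study: how much
# could the pooled estimate move if that study were added?
new_var = 0.05
updated = {e: pooled_estimate(effects + [e], variances + [new_var])
           for e in (-0.2, 0.0, 0.2, 0.4)}
```

Scanning `updated` over a grid of plausible new-study effects (and variances) is a one-dimensional analogue of the contour assessment: if the pooled estimate barely moves anywhere on the grid, the meta-analysis is robust to one additional study.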

  9. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
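    The conventional between-groups d that the authors' single-case statistic is designed to be equivalent to can be computed directly. A minimal sketch with invented scores; the single-case version involves corrections (e.g. for small samples and within-case dependence) that are not shown here.

```python
import math

def cohens_d(group1, group2):
    """Standardised mean difference (pooled-SD d), the usual between-groups effect size."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances with Bessel's correction
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

treatment = [12.0, 14.0, 13.5, 15.0, 13.0]
control = [10.0, 11.0, 10.5, 12.0, 11.5]
d = cohens_d(treatment, control)
```

Because d is in standard-deviation units, values computed this way can be pooled across studies with different outcome measures, which is the property the abstract highlights.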

  10. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting statistical tests, and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
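    The kind of diagnostic that exposes a linear model's inadequacy for non-linear growth needs no modelling library. The larval lengths below are invented (not the study's Calliphora vicina data): a straight-line fit to saturating growth leaves systematic runs of same-sign residuals, which is exactly the lack of fit that smoothing methods address.

```python
def linear_fit(x, y):
    """Least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical larval lengths (mm) over time (h): growth saturates, so it is non-linear
hours = [0, 20, 40, 60, 80, 100]
length = [2.0, 6.0, 10.0, 13.0, 14.5, 15.0]

a, b = linear_fit(hours, length)
residuals = [yi - (a + b * xi) for xi, yi in zip(hours, length)]
# Systematic runs of same-sign residuals (negative at the ends, positive
# in the middle here) flag the lack of fit of the straight line.
```

The residuals still sum to zero, as OLS guarantees; it is their ordered sign pattern, not their average, that reveals the non-linearity.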

  11. Study Designs and Statistical Analyses for Biomarker Research

    PubMed Central

    Gosho, Masahiko; Nagashima, Kengo; Sato, Yasunori

    2012-01-01

    Biomarkers are becoming increasingly important for streamlining drug discovery and development. In addition, biomarkers are widely expected to be used as a tool for disease diagnosis, personalized medication, and surrogate endpoints in clinical research. In this paper, we highlight several important aspects related to study design and statistical analysis for clinical research incorporating biomarkers. We describe the typical and current study designs for exploring, detecting, and utilizing biomarkers. Furthermore, we introduce statistical issues such as confounding and multiplicity for statistical tests in biomarker research. PMID:23012528
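    One standard way to handle the multiplicity issue raised here is a family-wise error correction. A sketch of Holm's step-down procedure, chosen for illustration (the paper itself surveys multiplicity adjustments more broadly); the p-values are invented.

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Holm's step-down procedure: returns a list of booleans (reject H0?)
    controlling the family-wise error rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if pvalues[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one ordered test fails, all larger p-values fail too
    return reject

# Hypothetical p-values from four biomarker association tests
pvals = [0.001, 0.04, 0.03, 0.005]
decisions = holm_bonferroni(pvals)
```

Holm's procedure is uniformly more powerful than plain Bonferroni while making no assumptions about dependence between the tests, which is why it is a common default in biomarker screening.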

  12. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram S.; Murthy, Pappu

    2013-01-01

    This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties characterized, using a commercially available statistical package, are the ultimate strength, modulus, and Poisson's ratio. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.

  13. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), which all have purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, in each analysis arbitrary selections or assumptions were made, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods, rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus in this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are unfortunately not noted often enough in review.

  14. Comparability of a Paper-Based Language Test and a Computer-Based Language Test.

    ERIC Educational Resources Information Center

    Choi, Inn-Chull; Kim, Kyoung Sung; Boo, Jaeyool

    2003-01-01

    Utilizing the Test of English Proficiency developed by Seoul National University (TEPS), this study examined comparability between the paper-based language test and the computer-based language test based on content and construct validation, employing content analyses based on corpus-linguistic techniques in addition to such statistical analyses as…

  15. Analyses of the 1981-82 Illinois Public Library Statistics.

    ERIC Educational Resources Information Center

    Wallace, Danny P.

    Using data provided by the annual reports of Illinois public libraries and by the Illinois state library, this publication is a companion to the November 1982 issue of "Illinois Libraries," which enumerated the 16 data elements upon which the analyses are based. Three additional types of information are provided for each of six…

  16. Evaluation of a weighted test in the analysis of ordinal gait scores in an additivity model for five OP pesticides.

    EPA Science Inventory

    Appropriate statistical analyses are critical for evaluating interactions of mixtures with a common mode of action, as is often the case for cumulative risk assessments. Our objective is to develop analyses for use when a response variable is ordinal, and to test for interaction...

  17. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    PubMed

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. Often no single statistical method retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses about changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
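    The rarefaction method mentioned above has a closed form (Hurlbert's expected richness). A sketch with invented abundance vectors, standardising two communities of different sampling effort to a common number of individuals:

```python
from math import comb

def rarefied_richness(counts, n):
    """Expected number of species in a random subsample of n individuals
    (Hurlbert's rarefaction formula). Relies on math.comb returning 0
    when the subsample exceeds the remaining individuals."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

community_a = [50, 30, 20]          # 3 species, 100 individuals
community_b = [10, 10, 10, 10, 10]  # 5 species, 50 individuals

# Standardise both to the smaller sample size before comparing richness
s_a = rarefied_richness(community_a, 50)
s_b = rarefied_richness(community_b, 50)
```

At a common subsample of 50 individuals the comparison is fair: community_b's expected richness (5 species) exceeds community_a's (just under 3), regardless of the unequal original sampling effort.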

  18. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
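    Two of the simpler estimator families described above can be stated in a few lines. Both formulas below (range/4 for a missing SD, and the quartile-based mean approximation) are common illustrative choices, not necessarily the exact variants the review recommends.

```python
def sd_from_range(minimum, maximum):
    """Rough SD approximation when only the range is reported: range / 4.
    A practical rule of thumb; refined variants adjust for sample size."""
    return (maximum - minimum) / 4

def mean_from_quartiles(q1, median, q3):
    """Approximate the mean of a skewed outcome from the three quartiles:
    (q1 + median + q3) / 3."""
    return (q1 + median + q3) / 3

# Invented reported summaries from a trial missing its mean and SD
sd_est = sd_from_range(2.0, 18.0)
mean_est = mean_from_quartiles(4.0, 6.0, 11.0)
```

Such imputed values let an otherwise-excluded trial contribute to the pooled estimate; a sensitivity analysis comparing imputation against omission, as in the illustrative meta-analyses above, checks that the substitution does not drive the conclusions.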

  19. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.

  20. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
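    The heavy-tail comparison at the heart of this abstract can be sketched with a Poisson fit. The goal counts below are invented, not the study's data; the point is only that the Poisson MLE is the sample mean, and that a heavier-than-Poisson tail shows up as an empirical tail probability exceeding the fitted one.

```python
import math
from collections import Counter

def poisson_pmf(k, lam):
    """Poisson probability mass function P(X = k)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical per-match goal counts, including one extreme score
goals = [0, 1, 1, 2, 0, 3, 1, 2, 4, 0, 1, 2, 1, 0, 2, 1, 3, 1, 0, 7]
lam = sum(goals) / len(goals)  # Poisson MLE is the sample mean
counts = Counter(goals)
n = len(goals)

# Compare empirical and fitted tail probabilities P(X >= 5)
empirical_tail = sum(v for k, v in counts.items() if k >= 5) / n
poisson_tail = 1 - sum(poisson_pmf(k, lam) for k in range(5))
```

When `empirical_tail` exceeds `poisson_tail` by a wide margin across many such thresholds, the Poisson (or negative binomial) model is inadequate in the tail, which is what motivates the extremal-statistics description.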

  1. A Genome-Wide Association Analysis Reveals Epistatic Cancellation of Additive Genetic Variance for Root Length in Arabidopsis thaliana.

    PubMed

    Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan

    2015-01-01

    Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. 
We also illustrate how epistatic cancellation of the additive genetic variance explains the insignificant narrow-sense and significant broad-sense heritability by using a combination of careful statistical epistatic analyses and functional genetic experiments.

  2. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  3. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose: To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design: Cross-sectional study. Methods: All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures: Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results: Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in the retina and glaucoma subspecialties showed a tendency toward more complex analyses compared with cornea. Conclusions: Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. 
The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  4. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'Dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses is extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions.
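    A leave-one-out re-pooling is one of the simplest sensitivity analyses of the kind advocated here. A sketch under a fixed-effect (inverse-variance) model with invented effect sizes; the paper's recommendations also cover phylogenetic and other dependence structures that this sketch ignores.

```python
def pooled(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect estimate."""
    w = [1 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Hypothetical effect sizes and sampling variances; study 2 is an outlier
effects = [0.10, 0.15, 0.50, 0.12]
variances = [0.01, 0.02, 0.01, 0.015]

overall = pooled(effects, variances)
# Leave-one-out sensitivity analysis: re-pool with each study removed
loo = [pooled(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
       for i in range(len(effects))]
influence = [abs(overall - est) for est in loo]
most_influential = influence.index(max(influence))
```

A large swing in `loo` for one study flags it for scrutiny, e.g. for a nonindependent design; reporting the full leave-one-out profile is one of the transparency practices the abstract calls for.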

  5. Mediation analysis in nursing research: a methodological review.

    PubMed

    Liu, Jianghong; Ulrich, Connie

    2016-12-01

    Mediation statistical models help clarify the relationship between independent predictor variables and dependent outcomes of interest by assessing the impact of third variables. This type of statistical analysis is applicable for many clinical nursing research questions, yet its use within nursing remains low. Indeed, mediational analyses may help nurse researchers develop more effective and accurate prevention and treatment programs as well as help bridge the gap between scientific knowledge and clinical practice. In addition, this statistical approach allows nurse researchers to ask - and answer - more meaningful and nuanced questions that extend beyond merely determining whether an outcome occurs. Therefore, the goal of this paper is to provide a brief tutorial on the use of mediational analyses in clinical nursing research by briefly introducing the technique and, through selected empirical examples from the nursing literature, demonstrating its applicability in advancing nursing science.
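    The product-of-coefficients logic behind mediation analysis can be sketched with ordinary least squares. The data below are invented so that the mediator M carries the entire effect of X on Y; real analyses would add inference (e.g. bootstrap confidence intervals for the indirect effect), which is omitted here.

```python
def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def ols2(y, x1, x2):
    """Slope coefficients (b1, b2) of y = b0 + b1*x1 + b2*x2 via normal equations."""
    n = len(y)
    mean = lambda v: sum(v) / n
    cov = lambda u, v: sum((ui - mean(u)) * (vi - mean(v)) for ui, vi in zip(u, v))
    s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    s1y, s2y = cov(x1, y), cov(x2, y)
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

# Hypothetical data: X -> M -> Y (the mediator M carries the whole effect)
X = [0, 1, 2, 3, 4, 5]
M = [1, 1, 5, 5, 9, 9]        # roughly 2*X plus independent variation
Y = [3 * m for m in M]        # Y depends on X only through M

a = slope(X, M)               # path X -> M
b, direct = ols2(Y, M, X)     # path M -> Y controlling for X; direct X effect
indirect = a * b              # product-of-coefficients indirect effect
```

Here the direct effect of X is essentially zero and the indirect effect a*b accounts for the association, the pattern a nursing study would interpret as full mediation.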

  6. Trends in selected streamflow statistics at 19 long-term streamflow-gaging stations indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico, 1922-2009

    USGS Publications Warehouse

    Barbie, Dana L.; Wehmeyer, Loren L.

    2012-01-01

    Trends in selected streamflow statistics during 1922-2009 were evaluated at 19 long-term streamflow-gaging stations considered indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico. The U.S. Geological Survey, in cooperation with the Texas Water Development Board, evaluated streamflow data from streamflow-gaging stations with more than 50 years of record that were active as of 2009. The outflows into Arkansas and Louisiana were represented by 3 streamflow-gaging stations, and outflows into the Gulf of Mexico, including Galveston Bay, were represented by 16 streamflow-gaging stations. Monotonic trend analyses were done using the following three streamflow statistics generated from daily mean values of streamflow: (1) annual mean daily discharge, (2) annual maximum daily discharge, and (3) annual minimum daily discharge. The trend analyses were based on the nonparametric Kendall's Tau test, which is useful for the detection of monotonic upward or downward trends with time. A total of 69 trend analyses by Kendall's Tau were computed - 19 periods of streamflow multiplied by the 3 streamflow statistics plus 12 additional trend analyses because the periods of record for 2 streamflow-gaging stations were divided into periods representing pre- and post-reservoir impoundment. Unless otherwise described, each trend analysis used the entire period of record for each streamflow-gaging station. The monotonic trend analysis detected 11 statistically significant downward trends, 37 instances of no trend, and 21 statistically significant upward trends. One general region studied, which seemingly has relatively more upward trends for many of the streamflow statistics analyzed, includes the rivers and associated creeks and bayous to Galveston Bay in the Houston metropolitan area. 
Lastly, the most western river basins considered (the Nueces and Rio Grande) had statistically significant downward trends for many of the streamflow statistics analyzed.
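The Kendall's Tau monotonic trend test used in the study above can be illustrated with a small pure-Python sketch; the annual discharge series here is invented, not USGS data, and a real analysis would also compute the test's p-value:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[j] - x[i]) * (y[j] - y[i])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

years = list(range(2000, 2010))                            # invented record
flows = [120, 115, 118, 110, 108, 111, 105, 102, 104, 99]  # annual mean discharge
tau = kendall_tau(years, flows)    # negative tau: monotonic downward trend
```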

  7. Mediation analysis in nursing research: a methodological review

    PubMed Central

    Liu, Jianghong; Ulrich, Connie

    2017-01-01

    Mediation statistical models help clarify the relationship between independent predictor variables and dependent outcomes of interest by assessing the impact of third variables. This type of statistical analysis is applicable for many clinical nursing research questions, yet its use within nursing remains low. Indeed, mediational analyses may help nurse researchers develop more effective and accurate prevention and treatment programs as well as help bridge the gap between scientific knowledge and clinical practice. In addition, this statistical approach allows nurse researchers to ask – and answer – more meaningful and nuanced questions that extend beyond merely determining whether an outcome occurs. Therefore, the goal of this paper is to provide a brief tutorial on the use of mediational analyses in clinical nursing research by briefly introducing the technique and, through selected empirical examples from the nursing literature, demonstrating its applicability in advancing nursing science. PMID:26176804

  8. SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2002-01-01

    This atlas presents detailed intercomparisons of several climatological wind and temperature data sets which cover the middle atmosphere (altitudes of approximately 10-80 km). A number of middle atmosphere climatologies have been developed in the research community based on a variety of meteorological analyses and satellite data sets. Here we present comparisons between these climatological data sets for a number of basic circulation statistics, such as zonal mean temperature, winds and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. We also include comparisons between the global climatologies and historical rocketsonde wind and temperature measurements, as well as with more recent lidar temperature data. These comparisons highlight differences and uncertainties in contemporary middle atmosphere data sets, and allow biases in particular analyses to be isolated. In addition, a brief atlas of zonal mean temperature and wind statistics is provided to highlight data availability and as a quick-look reference. This technical report is intended as a companion to the climatological data sets held in archive at the SPARC Data Center (http://www.sparc.sunysb.edu).

  9. Phylogenetic relationships of South American lizards of the genus Stenocercus (Squamata: Iguania): A new approach using a general mixture model for gene sequence data.

    PubMed

    Torres-Carvajal, Omar; Schulte, James A; Cadle, John E

    2006-04-01

    The South American iguanian lizard genus Stenocercus includes 54 species occurring mostly in the Andes and adjacent lowland areas from northern Venezuela and Colombia to central Argentina at elevations of 0-4000m. Small taxon or character sampling has characterized all phylogenetic analyses of Stenocercus, which has long been recognized as sister taxon to the Tropidurus Group. In this study, we use mtDNA sequence data to perform phylogenetic analyses that include 32 species of Stenocercus and 12 outgroup taxa. Monophyly of this genus is strongly supported by maximum parsimony and Bayesian analyses. Evolutionary relationships within Stenocercus are further analyzed with a Bayesian implementation of a general mixture model, which accommodates variability in the pattern of evolution across sites. These analyses indicate a basal split of Stenocercus into two clades, one of which receives very strong statistical support. In addition, we test previous hypotheses using non-parametric and parametric statistical methods, and provide a phylogenetic classification for Stenocercus.

  10. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), offered as a contribution to the efforts of the statistical computing community. The code includes functionality for each specific analysis model as well as general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented for statistical summaries, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  11. Primary laws and fine levels are associated with increases in seat belt use, 1997-2008

    DOT National Transportation Integrated Search

    2010-11-01

    Increasing fine levels is a strategy that has potential to further raise seat belt use, in addition to primary law upgrades and high-visibility enforcement. Although the regression analyses did not find a statistically significant effect associated w...

  12. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    PubMed

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will be supported by MS Excel 2016 soon, would eliminate some limitations of using statistical formulas within MS Excel.

  13. Detection of semi-volatile organic compounds in permeable ...

    EPA Pesticide Factsheets

    Abstract The Edison Environmental Center (EEC) has a research and demonstration permeable parking lot comprising three different permeable systems: permeable asphalt, porous concrete and interlocking concrete permeable pavers. Water quality and quantity analysis has been ongoing since January 2010. This paper describes a subset of the water quality analysis, the analysis of semivolatile organic compounds (SVOCs), to determine whether hydrocarbons were present in water infiltrated through the permeable surfaces. SVOCs were analyzed in samples collected on 11 dates over a 3-year period, from 2/8/2010 to 4/1/2013. Results are broadly divided into three categories: 42 chemicals were never detected; 12 chemicals (11 chemical tests) were detected at a rate of 10% or less; and 22 chemicals were detected at a frequency of 10% or greater (ranging from 10% to 66.5% detections). Fundamental and exploratory statistical analyses were performed on these latter results by grouping them by surface type. The statistical analyses were limited by the low frequency of detections and by sample dilutions, which affected detection limits. The infiltrate data through the three permeable surfaces were analyzed as non-parametric data by the Kaplan-Meier estimation method for fundamental statistics; there were some statistically observable differences in concentration between pavement types when using the Tarone-Ware comparison hypothesis test. Additionally Spearman Rank order non-parame
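The Kaplan-Meier product-limit estimator applied in the study above can be sketched in a few lines. This is the generic right-censored form with invented data; for non-detect concentration data the censoring direction is flipped in practice (non-detects are left-censored at the detection limit):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates keyed by event time.
    events[i] is True for an observed event, False for a censored value."""
    pairs = sorted(zip(times, events))
    at_risk, s, surv = len(pairs), 1.0, {}
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        group = [ev for tt, ev in pairs if tt == t]   # all records at time t
        d = sum(group)                                # observed events at t
        if d:
            s *= (at_risk - d) / at_risk              # product-limit step
            surv[t] = s
        at_risk -= len(group)
        i += len(group)
    return surv

# Invented sample: six records, two of them censored (False):
km = kaplan_meier([1, 2, 2, 3, 4, 5], [True, True, False, True, False, True])
```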

  14. New software for statistical analysis of Cambridge Structural Database data

    PubMed Central

    Sykes, Richard A.; McCabe, Patrick; Allen, Frank H.; Battle, Gary M.; Bruno, Ian J.; Wood, Peter A.

    2011-01-01

    A collection of new software tools is presented for the analysis of geometrical, chemical and crystallographic data from the Cambridge Structural Database (CSD). This software supersedes the program Vista. The new functionality is integrated into the program Mercury in order to provide statistical, charting and plotting options alongside three-dimensional structural visualization and analysis. The integration also permits immediate access to other information about specific CSD entries through the Mercury framework, a common requirement in CSD data analyses. In addition, the new software includes a range of more advanced features focused towards structural analysis such as principal components analysis, cone-angle correction in hydrogen-bond analyses and the ability to deal with topological symmetry that may be exhibited in molecular search fragments. PMID:22477784

  15. Implication of the cause of differences in 3D structures of proteins with high sequence identity based on analyses of amino acid sequences and 3D structures.

    PubMed

    Matsuoka, Masanari; Sugita, Masatake; Kikuchi, Takeshi

    2014-09-18

    Proteins that share a high sequence homology while exhibiting drastically different 3D structures are investigated in this study. Recently, artificial proteins related to the sequences of the GA and IgG binding GB domains of human serum albumin have been designed. These artificial proteins, referred to as GA and GB, share 98% amino acid sequence identity but exhibit different 3D structures, namely, a 3α bundle versus a 4β + α structure. Discriminating between their 3D structures based on their amino acid sequences is a very difficult problem. In the present work, in addition to using bioinformatics techniques, an analysis based on inter-residue average distance statistics is used to address this problem. Ordinary analyses such as BLAST searches and conservation analyses alone could not distinguish which structure a given sequence would adopt. However, supplementing these analyses with the inter-residue average distance statistics and our sequence tendency analysis allowed us to infer which parts of a sequence play an important role in its structural formation. The results suggest possible determinants of the different 3D structures for sequences with high sequence identity. The possibility of discriminating between the 3D structures based on the given sequences is also discussed.

  16. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    PubMed

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. 
Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression.
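The conditional-expectation formula underlying summary statistics imputation, z_hat_u = C_ut C_tt^-1 z_t, can be illustrated for a single untyped variant with two typed neighbours. All LD values and z-scores below are invented; real applications estimate the LD matrix from a sequenced reference panel:

```python
def solve2(m, y):
    """Solve the 2x2 linear system m @ x = y by Cramer's rule."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det]

C_tt = [[1.0, 0.4],
        [0.4, 1.0]]        # LD (correlation) between the two typed SNVs
c_ut = [0.8, 0.5]          # LD between the untyped SNV and each typed SNV
z_t = [4.0, 2.5]           # observed association z-scores at the typed SNVs

x = solve2(C_tt, z_t)                       # C_tt^-1 z_t
z_hat = c_ut[0] * x[0] + c_ut[1] * x[1]     # imputed z-score, untyped SNV
```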

  17. Evaluation and application of summary statistic imputation to discover new height-associated loci

    PubMed Central

    2018-01-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. 
Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression. PMID:29782485

  18. 75 FR 69681 - Proposed Collection; Comment Request; California Health Interview Survey Cancer Control Module...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ... State. The CHIS sample is designed to provide statistically reliable estimates statewide, for California... activity, obesity, and human papillomavirus. Additionally, CHIS is designed to be comparable to the National Health Interview Survey (NHIS) data in order to conduct comparative analyses. CHIS provides...

  19. 76 FR 2398 - Submission for OMB Review; Comment Request; California Health Interview Survey Cancer Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... designed to provide statistically reliable estimates statewide, for California counties, and for California... papillomavirus. Additionally, CHIS is designed to be comparable to the National Health Interview Survey (NHIS) data in order to conduct comparative analyses. CHIS provides enhanced estimates for cancer risk factors...

  20. Michigan's forests, 2004: statistics and quality assurance

    Treesearch

    Scott A. Pugh; Mark H. Hansen; Gary Brand; Ronald E. McRoberts

    2010-01-01

    The first annual inventory of Michigan's forests was completed in 2004 after 18,916 plots were selected and 10,355 forested plots were visited. This report includes detailed information on forest inventory methods, quality of estimates, and additional tables. An earlier publication presented analyses of the inventoried data (Pugh et al. 2009).

  1. A Measurement of Alienation in College Student Marihuana Users and Non-Users.

    ERIC Educational Resources Information Center

    Harris, Eileen M.

    A three part questionnaire was administered to 1380 Southern Illinois University students to: (1) elicit demographic data; (2) determine the extent of experience with marihuana; and (3) measure alienation utilizing Dean's scale. In addition, the Minnesota Multiphasic Personality Lie Inventory was given. Statistical analyses were performed to…

  2. Misclassification bias in areal estimates

    Treesearch

    Raymond L. Czaplewski

    1992-01-01

    In addition to thematic maps, remote sensing provides estimates of area in different thematic categories. Areal estimates are frequently used for resource inventories, management planning, and assessment analyses. Misclassification causes bias in these statistical areal estimates. For example, if a small percentage of a common cover type is misclassified as a rare...

  3. Evaluation of Evidence of Statistical Support and Corroboration of Subgroup Claims in Randomized Clinical Trials.

    PubMed

    Wallach, Joshua D; Sullivan, Patrick G; Trepanowski, John F; Sainani, Kristin L; Steyerberg, Ewout W; Ioannidis, John P A

    2017-04-01

    Many published randomized clinical trials (RCTs) make claims for subgroup differences. To evaluate how often subgroup claims reported in the abstracts of RCTs are actually supported by statistical evidence (P < .05 from an interaction test) and corroborated by subsequent RCTs and meta-analyses. This meta-epidemiological survey examines data sets of trials with at least 1 subgroup claim, including Subgroup Analysis of Trials Is Rarely Easy (SATIRE) articles and Discontinuation of Randomized Trials (DISCO) articles. We used Scopus (updated July 2016) to search for English-language articles citing each of the eligible index articles with at least 1 subgroup finding in the abstract. Articles with a subgroup claim in the abstract with or without evidence of statistical heterogeneity (P < .05 from an interaction test) in the text and articles attempting to corroborate the subgroup findings. Study characteristics of trials with at least 1 subgroup claim in the abstract were recorded. Two reviewers extracted the data necessary to calculate subgroup-level effect sizes, standard errors, and the P values for interaction. For individual RCTs and meta-analyses that attempted to corroborate the subgroup findings from the index articles, trial characteristics were extracted. Cochran Q test was used to reevaluate heterogeneity with the data from all available trials. The number of subgroup claims in the abstracts of RCTs, the number of subgroup claims in the abstracts of RCTs with statistical support (subgroup findings), and the number of subgroup findings corroborated by subsequent RCTs and meta-analyses. Sixty-four eligible RCTs made a total of 117 subgroup claims in their abstracts. Of these 117 claims, only 46 (39.3%) in 33 articles had evidence of statistically significant heterogeneity from a test for interaction. 
In addition, out of these 46 subgroup findings, only 16 (34.8%) ensured balance between randomization groups within the subgroups (eg, through stratified randomization), 13 (28.3%) entailed a prespecified subgroup analysis, and 1 (2.2%) was adjusted for multiple testing. Only 5 (10.9%) of the 46 subgroup findings had at least 1 subsequent pure corroboration attempt by a meta-analysis or an RCT. In all 5 cases, the corroboration attempts found no evidence of a statistically significant subgroup effect. In addition, all effect sizes from meta-analyses were attenuated toward the null. A minority of subgroup claims made in the abstracts of RCTs are supported by their own data (ie, a significant interaction effect). For those that have statistical support (P < .05 from an interaction test), most fail to meet other best practices for subgroup tests, including prespecification, stratified randomization, and adjustment for multiple testing. Attempts to corroborate statistically significant subgroup differences are rare; when done, the initially observed subgroup differences are not reproduced.
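The interaction test referenced throughout the abstract above can be sketched as a simple z-test comparing two subgroup estimates. The numbers are invented, and this is the textbook large-sample approximation, not the authors' exact procedure:

```python
import math

def interaction_test(b1, se1, b2, se2):
    """Two-sided z-test for a difference between two subgroup effect estimates."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # two-sided p-value from the standard normal CDF (via math.erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Invented subgroup estimates (e.g. log odds ratios) and standard errors:
z, p = interaction_test(-0.50, 0.20, 0.10, 0.20)
```

With p below .05 the subgroup claim would count as statistically supported in the sense used above; with similar effects in both subgroups, the same test yields a large p and the claim lacks support.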

  4. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    PubMed

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  5. Comparative statistical component analysis of transgenic, cyanophycin-producing potatoes in greenhouse and field trials.

    PubMed

    Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge

    2017-08-01

    Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.

  6. Topographic ERP analyses: a step-by-step tutorial review.

    PubMed

    Murray, Micah M; Brunet, Denis; Michel, Christoph M

    2008-06-01

    In this tutorial review, we detail both the rationale for as well as the implementation of a set of analyses of surface-recorded event-related potentials (ERPs) that uses the reference-free spatial (i.e. topographic) information available from high-density electrode montages to render statistical information concerning modulations in response strength, latency, and topography both between and within experimental conditions. In these and other ways these topographic analysis methods allow the experimenter to glean additional information and neurophysiologic interpretability beyond what is available from canonical waveform analyses. In this tutorial we present the example of somatosensory evoked potentials (SEPs) in response to stimulation of each hand to illustrate these points. For each step of these analyses, we provide the reader with both a conceptual and mathematical description of how the analysis is carried out, what it yields, and how to interpret its statistical outcome. We show that these topographic analysis methods are intuitive and easy-to-use approaches that can remove much of the guesswork often confronting ERP researchers and also assist in identifying the information contained within high-density ERP datasets.

  7. Analysis and meta-analysis of single-case designs: an introduction.

    PubMed

    Shadish, William R

    2014-04-01

    The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
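A minimal between-phase standardized mean difference, the simplest relative of the SMD statistics mentioned above, might look like the following. The observations are invented, and published SCD estimators add small-sample and autocorrelation corrections that this sketch omits:

```python
import math

def cohens_d(x, y):
    """Standardized mean difference (Cohen's d) with a pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((xi - mx) ** 2 for xi in x) / (nx - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

baseline = [4, 5, 6, 5, 5]       # invented baseline-phase observations
treatment = [8, 9, 8, 10, 9]     # invented treatment-phase observations
d = cohens_d(treatment, baseline)
```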

  8. Catalog of Computer Programs Used in Undergraduate Geological Education, Second Edition: Installment 2.

    ERIC Educational Resources Information Center

    Burger, H. Robert

    1983-01-01

    Part 1 (SE 533 635) presented programs for use in mineralogy, petrology, and geochemistry. This part presents an annotated list of 64 additional programs, focusing on introductory geology, mapping, and statistical packages for geological analyses. A brief description, source, suggested use(s), programming language, and other information are…

  9. Distributional Analysis in Educational Evaluation: A Case Study from the New York City Voucher Program

    ERIC Educational Resources Information Center

    Bitler, Marianne; Domina, Thurston; Penner, Emily; Hoynes, Hilary

    2015-01-01

    We use quantile treatment effects estimation to examine the consequences of the random-assignment New York City School Choice Scholarship Program across the distribution of student achievement. Our analyses suggest that the program had negligible and statistically insignificant effects across the skill distribution. In addition to contributing to…

  10. Are Public School Teacher Salaries Paid Compensating Wage Differentials for Student Racial and Ethnic Characteristics?

    ERIC Educational Resources Information Center

    Martin, Stephanie M.

    2010-01-01

    The present paper examines the relationship between public school teacher salaries and the racial concentration and segregation of students in the district. A particularly rich set of control variables is included to better measure the effect of racial characteristics. Additional analyses included Metropolitan Statistical Area fixed effects and…

  11. Four modes of optical parametric operation for squeezed state generation

    NASA Astrophysics Data System (ADS)

    Andersen, U. L.; Buchler, B. C.; Lam, P. K.; Wu, J. W.; Gao, J. R.; Bachor, H.-A.

    2003-11-01

    We report a versatile instrument, based on a monolithic optical parametric amplifier, which reliably generates four different types of squeezed light. We obtained vacuum squeezing, low power amplitude squeezing, phase squeezing and bright amplitude squeezing. We show a complete analysis of this light, including a full quantum state tomography. In addition we demonstrate the direct detection of the squeezed state statistics without the aid of a spectrum analyser. This technique makes the nonclassical properties directly visible and allows complete measurement of the statistical moments of the squeezed quadrature.

  12. Additive interaction between heterogeneous environmental ...

    EPA Pesticide Factsheets

    BACKGROUND: Environmental exposures often occur in tandem; however, epidemiological research often focuses on singular exposures. Statistical interactions among broad, well-characterized environmental domains have not yet been evaluated in association with health. We address this gap by conducting a county-level cross-sectional analysis of interactions between Environmental Quality Index (EQI) domain indices on preterm birth in the United States from 2000-2005. METHODS: The EQI, a county-level index for the 2000-2005 time period, was constructed from five domain-specific indices (air, water, land, built, and sociodemographic) using principal component analyses. County-level preterm birth rates (n = 3141) were estimated using live births from the National Center for Health Statistics. Linear regression was used to estimate prevalence differences (PD) and 95% confidence intervals (CI) comparing worse environmental quality to better quality in each model for (a) each individual domain main effect, (b) the interaction contrast, and (c) the two main effects plus the interaction effect (i.e., the “net effect”), to show departure from additive interaction for all U.S. counties. Analyses were also performed for subgroupings by four urban/rural strata. RESULTS: We found the suggestion of antagonistic interactions but no synergism, along with several purely additive (i.e., no interaction) associations. In the non-stratified model, we observed antagonistic interactions…
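    The interaction contrast described in the methods can be illustrated with a small arithmetic sketch. All prevalences below are invented for illustration, not EQI results:

    ```python
    # Hypothetical preterm-birth prevalences (per 1,000 births) by two binary
    # exposures (poor air quality, poor water quality); numbers are illustrative.
    p00, p10, p01, p11 = 100.0, 115.0, 112.0, 121.0

    pd_air   = p10 - p00            # main-effect prevalence difference, air
    pd_water = p01 - p00            # main-effect prevalence difference, water
    ic = p11 - p10 - p01 + p00      # interaction contrast (departure from additivity)
    net = pd_air + pd_water + ic    # "net effect" of joint exposure vs. neither

    print(f"PD(air) = {pd_air}, PD(water) = {pd_water}")
    print(f"interaction contrast = {ic}  (negative => antagonism)")
    print(f"net effect = {net}  (equals p11 - p00 = {p11 - p00})")
    ```

    A negative interaction contrast, as in this toy example, is the "antagonism" pattern the abstract reports: the joint effect is smaller than the sum of the two main effects.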

  13. Colorimetric determination of nitrate plus nitrite in water by enzymatic reduction, automated discrete analyzer methods

    USGS Publications Warehouse

    Patton, Charles J.; Kryskalla, Jennifer R.

    2011-01-01

    In addition to operational details and performance benchmarks for these new DA-AtNaR2 nitrate + nitrite assays, this report also provides results of interference studies for common inorganic and organic matrix constituents at 1, 10, and 100 times their median concentrations in surface-water and groundwater samples submitted annually to the NWQL for nitrate + nitrite analyses. Paired t-test and Wilcoxon signed-rank statistical analyses of results determined by CFA-CdR methods and DA-AtNaR2 methods indicate that nitrate concentration differences between population means or sign ranks were either statistically equivalent to zero at the 95 percent confidence level (p ≥ 0.05) or analytically equivalent to zero; that is, when p < 0.05, concentration differences between population means or medians were less than MDLs.
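    The paired t-test used to compare the two analytical methods can be sketched in pure Python. The paired concentrations below are invented for illustration, not NWQL data:

    ```python
    import math
    import statistics

    # Hypothetical paired nitrate+nitrite results (mg/L) for the same samples
    # measured by the CFA-CdR and DA-AtNaR2 methods; values are illustrative.
    cfa = [0.42, 1.10, 0.88, 2.05, 0.15, 0.67, 1.50, 0.93]
    enz = [0.40, 1.12, 0.85, 2.08, 0.16, 0.66, 1.47, 0.95]

    def paired_t(x, y):
        """Paired t statistic and degrees of freedom for H0: mean difference = 0."""
        d = [a - b for a, b in zip(x, y)]
        n = len(d)
        t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
        return t, n - 1

    t, df = paired_t(cfa, enz)
    print(f"t = {t:.3f} on {df} df")
    ```

    With these illustrative values |t| falls well below the two-sided 5% critical value for 7 df (about 2.365), the "statistically equivalent to zero" outcome the report describes; the Wilcoxon signed-rank companion test makes the same comparison without assuming normal differences.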

  14. Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis

    PubMed Central

    Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.

    2006-01-01

    In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709

  15. Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis.

    PubMed

    Langan, Dean; Higgins, Julian P T; Gregory, Walter; Sutton, Alexander J

    2012-05-01

    We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. A number of augmentations are proposed to one of the most widely used of graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the updating prioritization of a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration. Copyright © 2012 Elsevier Inc. All rights reserved.
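    The significance-contour idea can be sketched numerically: under a fixed-effect model, solve for the effect a hypothetical new study of given precision would have to report to move the pooled estimate exactly to z = 1.96. The study effects and standard errors below are invented, and the paper's actual contours cover more cases (including random-effects heterogeneity):

    ```python
    import math

    # Hypothetical existing studies: (effect estimate, standard error).
    studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10)]

    W = sum(1 / se**2 for _, se in studies)   # total inverse-variance weight
    S = sum(y / se**2 for y, se in studies)   # weighted sum of effects

    def contour_effect(new_se, z=1.96):
        """Effect a new study with standard error `new_se` must report for the
        fixed-effect pooled estimate to sit exactly at the given z (p = 0.05)."""
        w = 1 / new_se**2
        return (z * math.sqrt(W + w) - S) / w

    for se in (0.05, 0.10, 0.20, 0.40):
        print(f"new-study SE = {se:.2f}: effect needed = {contour_effect(se):+.3f}")
    ```

    Sweeping `new_se` over the vertical axis of a funnel plot and plotting `contour_effect(new_se)` traces one significance contour; the flatness of that curve for large studies is what makes the "one additional study may have very limited impact" point visible.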

  16. Putting Meaning Back Into the Mean: A Comment on the Misuse of Elementary Statistics in a Sample of Manuscripts Submitted to Clinical Therapeutics.

    PubMed

    Forrester, Janet E

    2015-12-01

    Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.

  17. Environmental implications of element emissions from phosphate-processing operations in southeastern Idaho

    USGS Publications Warehouse

    Severson, R.C.; Gough, L.P.

    1979-01-01

    In order to assess the contribution to plants and soils of certain elements emitted by phosphate processing, we sampled sagebrush, grasses, and A- and C-horizon soils along upwind and downwind transects at Pocatello and Soda Springs, Idaho. Analyses for 70 elements in plants showed that, statistically, the concentration of 7 environmentally important elements, cadmium, chromium, fluorine, selenium, uranium, vanadium, and zinc, were related to emissions from phosphate-processing operations. Two additional elements, lithium and nickel, show probable relationships. The literature on the effects of these elements on plant and animal health is briefly surveyed. Relations between element content in plants and distance from the phosphate-processing operations were stronger at Soda Springs than at Pocatello and, in general, stronger in sagebrush than in the grasses. Analyses for 58 elements in soils showed that, statistically, beryllium, fluorine, iron, lead, lithium, potassium, rubidium, thorium, and zinc were related to emissions only at Pocatello and only in the A horizon. Moreover, six additional elements, copper, mercury, nickel, titanium, uranium, and vanadium, probably are similarly related along the same transect. The approximate amounts of elements added to the soils by the emissions are estimated. In C-horizon soils, no statistically significant relations were observed between element concentrations and distance from the processing sites. At Soda Springs, the nonuniformity of soils at the sampling locations may have obscured the relationship between soil-element content and emissions from phosphate processing.

  18. Meta-analysis in applied ecology.

    PubMed

    Stewart, Gavin

    2010-02-23

    This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.

  19. Distributed management of scientific projects - An analysis of two computer-conferencing experiments at NASA

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Gibbs, B.

    1976-01-01

    Between August 1975 and March 1976, two NASA projects with geographically separated participants used a computer-conferencing system developed by the Institute for the Future for portions of their work. Monthly usage statistics for the system were collected in order to examine the group and individual participation figures for all conferences. The conference transcripts were analysed to derive observations about the use of the medium. In addition to the results of these analyses, the attitudes of users and the major components of the costs of computer conferencing are discussed.

  20. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  1. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed Central

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817

  2. The Influence of Statistical versus Exemplar Appeals on Indian Adults' Health Intentions: An Investigation of Direct Effects and Intervening Persuasion Processes.

    PubMed

    McKinley, Christopher J; Limbu, Yam; Jayachandran, C N

    2017-04-01

    In two separate investigations, we examined the persuasive effectiveness of statistical versus exemplar appeals on Indian adults' smoking cessation and mammography screening intentions. To more comprehensively address persuasion processes, we explored whether message response and perceived message effectiveness functioned as antecedents to persuasive effects. Results showed that statistical appeals led to higher levels of health intentions than exemplar appeals. In addition, findings from both studies indicated that statistical appeals stimulated more attention and were perceived as more effective than anecdotal accounts. Among male smokers, statistical appeals also generated greater cognitive processing than exemplar appeals. Subsequent mediation analyses revealed that message response and perceived message effectiveness fully carried the influence of appeal format on health intentions. Given these findings, future public health initiatives conducted among similar populations should design messages that include substantive factual information while ensuring that this content is perceived as credible and valuable.

  3. Statistical Selection of Biological Models for Genome-Wide Association Analyses.

    PubMed

    Bi, Wenjian; Kang, Guolian; Pounds, Stanley B

    2018-05-24

    Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
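    A minimal sketch of what "selecting a biological model" means in practice: encode the same genotypes under additive, dominant, and recessive codings, fit each by least squares, and score the fits. The paper's actual method is more elaborate (it also handles the null and co-dominant models and controls the FDR); the BIC comparison and the data below are illustrative assumptions:

    ```python
    import math

    # Hypothetical data: genotypes coded as minor-allele counts (0/1/2) and a
    # quantitative phenotype simulated under a roughly dominant effect.
    genotypes  = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2, 1, 0, 2]
    phenotypes = [1.0, 0.8, 2.1, 1.9, 2.2, 2.0, 2.3, 1.1, 2.0, 2.1,
                  0.9, 1.8, 2.2, 2.1, 1.2, 1.9]

    ENCODINGS = {
        "additive":  {0: 0.0, 1: 1.0, 2: 2.0},
        "dominant":  {0: 0.0, 1: 1.0, 2: 1.0},
        "recessive": {0: 0.0, 1: 0.0, 2: 1.0},
    }

    def bic_simple_regression(x, y):
        """BIC of a one-predictor least-squares fit (intercept + slope)."""
        n = len(y)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        b = sxy / sxx
        a = my - b * mx
        rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        return n * math.log(rss / n) + 2 * math.log(n)   # k = 2 parameters

    scores = {name: bic_simple_regression([enc[g] for g in genotypes], phenotypes)
              for name, enc in ENCODINGS.items()}
    best = min(scores, key=scores.get)
    print("BIC by model:", {k: round(v, 2) for k, v in scores.items()})
    print("best-fitting model:", best)
    ```

    Because heterozygotes and minor-allele homozygotes have similar phenotypes in this toy data, the dominant encoding fits best, which is exactly the kind of conclusion a small p-value alone cannot deliver.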

  4. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
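    The core bookkeeping of the intervals' method, turning a stress distribution over a mesh into percentage-of-area variables, can be sketched as follows. Stress values, element areas, and interval bounds are all invented for illustration:

    ```python
    # Hypothetical per-element stress values and areas from a finite element mesh;
    # the real method works on stress distributions over mandible models.
    stresses = [2.1, 5.4, 8.0, 3.3, 9.7, 1.2, 6.6, 4.8, 7.3, 0.9]
    areas    = [1.0, 0.5, 0.8, 1.2, 0.4, 1.1, 0.6, 0.9, 0.7, 1.3]

    def interval_variables(stresses, areas, bounds):
        """Percentage of total area whose stress falls in each interval
        [bounds[i], bounds[i+1]); these percentages are the new variables."""
        total = sum(areas)
        out = []
        for lo, hi in zip(bounds, bounds[1:]):
            out.append(100 * sum(a for s, a in zip(stresses, areas)
                                 if lo <= s < hi) / total)
        return out

    row = interval_variables(stresses, areas, bounds=[0, 2.5, 5, 7.5, 10])
    print([round(v, 1) for v in row])
    ```

    One such row per specimen yields a specimen-by-interval matrix that can then go into standard multivariate analyses (e.g. PCA or discriminant analysis), which is what makes the comparison independent of each model's particular mesh.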

  5. Views of medical students: what, when and how do they want statistics taught?

    PubMed

    Fielding, S; Poobalan, A; Prescott, G J; Marais, D; Aucott, L

    2015-11-01

    A key skill for a practising clinician is being able to do research, understand the statistical analyses and interpret results in the medical literature. Basic statistics has become essential within medical education, but when, what and in which format is uncertain. To inform curriculum design/development we undertook a quantitative survey of fifth year medical students and followed them up with a series of focus groups to obtain their opinions as to what statistics teaching they want, when and how. A total of 145 students undertook the survey and five focus groups were held with between 3 and 9 participants each. Previous statistical training varied; students recognised that their knowledge was inadequate and were keen to see additional training implemented. Students were aware of the importance of statistics to their future careers, but were apprehensive about learning. Face-to-face teaching supported by online resources was popular. Focus groups indicated the need for statistical training early in their degree and highlighted their lack of confidence and inconsistencies in support. The study found that the students see the importance of statistics training in the medical curriculum but that timing and mode of delivery are key. The findings have informed the design of a new course to be implemented in the third undergraduate year. Teaching will be based around published studies aiming to equip students with the basics required with additional resources available through a virtual learning environment. © The Author(s) 2015.

  6. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    PubMed

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  7. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  8. Geographically Sourcing Cocaine's Origin - Delineation of the Nineteen Major Coca Growing Regions in South America.

    PubMed

    Mallette, Jennifer R; Casale, John F; Jordan, James; Morello, David R; Beyer, Paul M

    2016-03-23

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (²H and ¹⁸O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions.

  9. Geographically Sourcing Cocaine’s Origin - Delineation of the Nineteen Major Coca Growing Regions in South America

    NASA Astrophysics Data System (ADS)

    Mallette, Jennifer R.; Casale, John F.; Jordan, James; Morello, David R.; Beyer, Paul M.

    2016-03-01

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (²H and ¹⁸O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions.

  10. A Summary of the Naval Postgraduate School Research Program.

    DTIC Science & Technology

    1985-09-30

    The new model will now be used in a variety of oceanic investigations including the response of the ocean to tropical and extratropical storms (R. L…). …Numerical Study of Maritime Extratropical Cyclones Using FGGE Data… Oceanic Current System Response to Atmospheric… In addition, Professor Jayachandran has performed statistical analyses of the storm tracking methodology used by the Naval Environmental Prediction…

  11. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
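    The sample-size caution can be made concrete: with a very large n, a correlation far too small to be clinically meaningful is still routinely statistically significant. A pure-Python sketch with simulated data (the true association is built in at roughly r = 0.05):

    ```python
    import math
    import random

    random.seed(0)

    def pearson_r(x, y):
        """Pearson product-moment correlation coefficient."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    # Simulated data with a genuinely tiny association (true r near 0.05).
    n = 10_000
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [0.05 * xi + random.gauss(0, 1) for xi in x]

    r = pearson_r(x, y)
    t = r * math.sqrt((n - 2) / (1 - r * r))   # test statistic for H0: rho = 0
    # With n this large, even r near 0.05 typically gives |t| > 1.96 (p < 0.05),
    # yet r^2 says the predictor explains well under 1% of the variance.
    print(f"r = {r:.3f}, t = {t:.1f}, r^2 = {r*r:.4f}")
    ```

    This is the "sample size bias" in the abstract: significance testing answers whether an association is exactly zero, not whether it is large enough to matter clinically, so effect size must be reported alongside the p-value.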

  12. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials.

    PubMed

    Hopewell, Sally; Witt, Claudia M; Linde, Klaus; Icke, Katja; Adedire, Olubusola; Kirtley, Shona; Altman, Douglas G

    2018-01-11

Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on reporting of primary outcome(s) and statistical analyses within reports of randomised trials. Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking authors about changes made to their manuscript as part of the peer review process. Our main objectives were to assess: (1) the nature and extent of changes as part of the peer review process, in relation to reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics. Of 893 corresponding authors who were invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); median sample size 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), of which the direction of treatment effect was statistically significant in 49%. Most authors reported (on a 1-10 Likert scale) that they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or avoid rejection (n = 30; 44%). 
Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the primary outcome, or funding source. Thirty-six percent (n = 94) of authors had been asked to include additional analyses that had not been included in the original manuscript; in 77% (n = 72) these were not pre-specified in the protocol. Twenty-three percent (n = 60) had been asked to modify their overall conclusion, usually (n = 53; 88%) to provide a more cautious conclusion. Overall, most changes, as a result of the peer review process, resulted in improvements to the published manuscript; there was little evidence of a negative impact in terms of post hoc changes of the primary outcome. However, some suggested changes might be considered inappropriate, such as unplanned additional analyses, and should be discouraged.

  14. Does speed matter? The impact of operative time on outcome in laparoscopic surgery

    PubMed Central

    Jackson, Timothy D.; Wannares, Jeffrey J.; Lancaster, R. Todd; Rattner, David W.

    2012-01-01

Introduction Controversy exists concerning the importance of operative time on patient outcomes. It is unclear whether faster is better or haste makes waste or similarly whether slower procedures represent a safe, meticulous approach or inexperienced dawdling. The objective of the present study was to determine the effect of operative time on 30-day outcomes in laparoscopic surgery. Methods Patients who underwent laparoscopic general surgery procedures (colectomy, cholecystectomy, Nissen fundoplication, inguinal hernia, and gastric bypass) from the ACS-NSQIP 2005–2008 participant use file were identified. Exclusion criteria were defined a priori to identify same-day admission, elective procedures. Operative time was divided into deciles and summary statistics were analyzed. Univariate analyses using a Cochran-Armitage test for trend were completed. The effect of operative time on 30-day morbidity was further analyzed for each procedure type using multivariate regression controlling for case complexity and additional patient factors. Patients within the highest deciles were excluded to reduce outlier effect. Results A total of 76,748 elective general surgical patients who underwent laparoscopic procedures were analyzed. Univariate analyses of deciles of operative time demonstrated a statistically significant trend (p < 0.0001) toward increasing odds of complications with increasing operative time for laparoscopic colectomy (n = 10,135), cholecystectomy (n = 37,407), Nissen fundoplication (n = 4,934), and gastric bypass (n = 17,842). The trend was not found to be significant for laparoscopic inguinal hernia repair (n = 6,430; p = 0.14). Multivariate modeling revealed the effect of operative time to remain significant after controlling for additional patient factors. Conclusion Increasing operative time was associated with increased odds of complications and, therefore, it appears that speed may matter in laparoscopic surgery. 
These analyses are limited in their inability to adjust for all patient factors, potential confounders, and case complexities. Additional hierarchical multivariate analyses at the surgeon level would be important to examine this relationship further. PMID:21298533

  15. Does speed matter? The impact of operative time on outcome in laparoscopic surgery.

    PubMed

    Jackson, Timothy D; Wannares, Jeffrey J; Lancaster, R Todd; Rattner, David W; Hutter, Matthew M

    2011-07-01

    Controversy exists concerning the importance of operative time on patient outcomes. It is unclear whether faster is better or haste makes waste or similarly whether slower procedures represent a safe, meticulous approach or inexperienced dawdling. The objective of the present study was to determine the effect of operative time on 30-day outcomes in laparoscopic surgery. Patients who underwent laparoscopic general surgery procedures (colectomy, cholecystectomy, Nissen fundoplication, inguinal hernia, and gastric bypass) from the ACS-NSQIP 2005-2008 participant use file were identified. Exclusion criteria were defined a priori to identify same-day admission, elective procedures. Operative time was divided into deciles and summary statistics were analyzed. Univariate analyses using a Cochran-Armitage test for trend were completed. The effect of operative time on 30-day morbidity was further analyzed for each procedure type using multivariate regression controlling for case complexity and additional patient factors. Patients within the highest deciles were excluded to reduce outlier effect. A total of 76,748 elective general surgical patients who underwent laparoscopic procedures were analyzed. Univariate analyses of deciles of operative time demonstrated a statistically significant trend (p<0.0001) toward increasing odds of complications with increasing operative time for laparoscopic colectomy (n=10,135), cholecystectomy (n=37,407), Nissen fundoplication (n=4,934), and gastric bypass (n=17,842). The trend was not found to be significant for laparoscopic inguinal hernia repair (n=6,430; p=0.14). Multivariate modeling revealed the effect of operative time to remain significant after controlling for additional patient factors. Increasing operative time was associated with increased odds of complications and, therefore, it appears that speed may matter in laparoscopic surgery. 
These analyses are limited in their inability to adjust for all patient factors, potential confounders, and case complexities. Additional hierarchical multivariate analyses at the surgeon level would be important to examine this relationship further.
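
The Cochran-Armitage test for trend used in the two studies above tests for a linear trend in event proportions across ordered groups (here, deciles of operative time). A hedged sketch follows; the complication counts are invented for illustration and are not the ACS-NSQIP data.

```python
import numpy as np

def cochran_armitage_z(events, totals, scores):
    """Z statistic for a linear trend in proportions across ordered groups."""
    events, totals, scores = map(np.asarray, (events, totals, scores))
    N = totals.sum()
    p_bar = events.sum() / N                      # pooled event proportion
    num = np.sum(scores * (events - totals * p_bar))
    var = p_bar * (1 - p_bar) * (
        np.sum(totals * scores**2) - np.sum(totals * scores) ** 2 / N
    )
    return num / np.sqrt(var)

# Hypothetical complication counts rising across deciles of operative time
events = [30, 33, 38, 41, 47, 52, 60, 66, 75, 90]
totals = [1000] * 10
z = cochran_armitage_z(events, totals, scores=np.arange(1, 11))
print(round(float(z), 2))
```

A large positive Z indicates increasing complication odds with operative time, mirroring the trend the studies report; the statistic is referred to a standard normal distribution for a p value.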

  16. Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses.

    PubMed

    Baudard, Marie; Yavchitz, Amélie; Ravaud, Philippe; Perrodeau, Elodie; Boutron, Isabelle

    2017-02-17

    Objective  To evaluate the impact of searching clinical trial registries in systematic reviews. Design  Methodological systematic review and reanalyses of meta-analyses. Data sources  Medline was searched to identify systematic reviews of randomised controlled trials (RCTs) assessing pharmaceutical treatments published between June 2014 and January 2015. For all systematic reviews that did not report a trial registry search but reported the information to perform it, the World Health Organization International Trials Registry Platform (WHO ICTRP search portal) was searched for completed or terminated RCTs not originally included in the systematic review. Data extraction  For each systematic review, two researchers independently extracted the outcomes analysed, the number of patients included, and the treatment effect estimated. For each RCT identified, two researchers independently determined whether the results were available (ie, posted, published, or available on the sponsor website) and extracted the data. When additional data were retrieved, we reanalysed meta-analyses and calculated the weight of the additional RCTs and the change in summary statistics by comparison with the original meta-analysis. Results  Among 223 selected systematic reviews, 116 (52%) did not report a search of trial registries; 21 of these did not report the information to perform the search (key words, search date). A search was performed for 95 systematic reviews; for 54 (57%), no additional RCTs were found and for 41 (43%) 122 additional RCTs were identified. The search allowed for increasing the number of patients by more than 10% in 19 systematic reviews, 20% in 10, 30% in seven, and 50% in four. Moreover, 63 RCTs had results available; the results for 45 could be included in a meta-analysis. 14 systematic reviews including 45 RCTs were reanalysed. 
The weight of the additional RCTs in the recalculated meta-analyses ranged from 0% to 58% and was greater than 10% in five of 14 systematic reviews, 20% in three, and 50% in one. The change in summary statistics ranged from 0% to 29% and was greater than 10% for five of 14 systematic reviews and greater than 20% for two. However, none of the changes to summary effect estimates led to a qualitative change in the interpretation of the results once the new trials were added. Conclusions  Trial registries are an important source for identifying additional RCTs. The additional number of RCTs and patients included if a search were performed varied across systematic reviews. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.

    2018-04-01

    Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) from the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z in [2.20,2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12 σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.

  18. A wind proxy based on migrating dunes at the Baltic coast: statistical analysis of the link between wind conditions and sand movement

    NASA Astrophysics Data System (ADS)

    Bierstedt, Svenja E.; Hünicke, Birgit; Zorita, Eduardo; Ludwig, Juliane

    2017-07-01

We statistically analyse the relationship between the structure of migrating dunes in the southern Baltic and the driving wind conditions over the past 26 years, with the long-term aim of using migrating dunes as a proxy for past wind conditions at an interannual resolution. The present analysis is based on the dune record derived from geo-radar measurements by Ludwig et al. (2017). The dune system is located at the Baltic Sea coast of Poland and is migrating from west to east along the coast. The dunes present layers with different thicknesses that can be assigned to absolute dates at interannual timescales and put in relation to seasonal wind conditions. To statistically analyse this record and calibrate it as a wind proxy, we used a gridded regional meteorological reanalysis data set (coastDat2) covering recent decades. The identified link between the dune annual layers and wind conditions was additionally supported by the co-variability between dune layers and observed sea level variations in the southern Baltic Sea. We include precipitation and temperature into our analysis, in addition to wind, to learn more about the dependency between these three atmospheric factors and their common influence on the dune system. We set up a statistical linear model based on the correlation between the frequency of days with specific wind conditions in a given season and dune migration velocities derived for that season. To some extent, the dune records can be seen as analogous to tree-ring width records, and hence we use a proxy validation method usually applied in dendrochronology, cross-validation with the leave-one-out method, when the observational record is short. The correlations between the wind record from the reanalysis and the wind record derived from the dune structure are in the range of 0.28 to 0.63, yielding statistical validation skill similar to that of dendroclimatological records.
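
Leave-one-out cross-validation, the dendrochronology-style validation named in the abstract, refits the calibration model once per observation with that observation withheld and scores the held-out predictions. A minimal sketch under invented numbers (26 synthetic "seasons", not the coastDat2 data):

```python
import numpy as np

# Synthetic stand-ins for a seasonal wind index and dune migration velocity
rng = np.random.default_rng(1)
wind = rng.normal(8.0, 2.0, 26)
migration = 1.5 * wind + rng.normal(0, 1.0, 26)

# Leave-one-out: refit the linear calibration with season i withheld,
# then predict the withheld season.
predictions = np.empty_like(migration)
for i in range(len(wind)):
    mask = np.arange(len(wind)) != i
    slope, intercept = np.polyfit(wind[mask], migration[mask], 1)
    predictions[i] = slope * wind[i] + intercept

r = np.corrcoef(predictions, migration)[0, 1]  # validation skill
print(round(float(r), 2))
```

The correlation between held-out predictions and observations is the validation skill; for a short record this avoids the optimism of scoring the model on the same data used to fit it.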

  19. OdorMapComparer: an application for quantitative analyses and comparisons of fMRI brain odor maps.

    PubMed

    Liu, Nian; Xu, Fuqiang; Miller, Perry L; Shepherd, Gordon M

    2007-01-01

    Brain odor maps are reconstructed flat images that describe the spatial activity patterns in the glomerular layer of the olfactory bulbs in animals exposed to different odor stimuli. We have developed a software application, OdorMapComparer, to carry out quantitative analyses and comparisons of the fMRI odor maps. This application is an open-source window program that first loads two odor map images being compared. It allows image transformations including scaling, flipping, rotating, and warping so that the two images can be appropriately aligned to each other. It performs simple subtraction, addition, and average of signals in the two images. It also provides comparative statistics including the normalized correlation (NC) and spatial correlation coefficient. Experimental studies showed that the rodent fMRI odor maps for aliphatic aldehydes displayed spatial activity patterns that are similar in gross outlines but somewhat different in specific subregions. Analyses with OdorMapComparer indicate that the similarity between odor maps decreases with increasing difference in the length of carbon chains. For example, the map of butanal is more closely related to that of pentanal (with a NC = 0.617) than to that of octanal (NC = 0.082), which is consistent with animal behavioral studies. The study also indicates that fMRI odor maps are statistically odor-specific and repeatable across both the intra- and intersubject trials. OdorMapComparer thus provides a tool for quantitative, statistical analyses and comparisons of fMRI odor maps in a fashion that is integrated with the overall odor mapping techniques.

  20. Recent evaluations of crack-opening-area in circumferentially cracked pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, S.; Brust, F.; Ghadiali, N.

    1997-04-01

Leak-before-break (LBB) analyses for circumferentially cracked pipes are currently being conducted in the nuclear industry to justify elimination of pipe whip restraints and jet shields which are present because of the expected dynamic effects from pipe rupture. The application of the LBB methodology frequently requires calculation of leak rates. The leak rates depend on the crack-opening area of the through-wall crack in the pipe. In addition to LBB analyses which assume a hypothetical flaw size, there is also interest in the integrity of actual leaking cracks corresponding to current leakage detection requirements in NRC Regulatory Guide 1.45, or for assessing temporary repair of Class 2 and 3 pipes that have leaks as are being evaluated in ASME Section XI. The objectives of this study were to review, evaluate, and refine current predictive models for performing crack-opening-area analyses of circumferentially cracked pipes. The results from twenty-five full-scale pipe fracture experiments, conducted in the Degraded Piping Program, the International Piping Integrity Research Group Program, and the Short Cracks in Piping and Piping Welds Program, were used to verify the analytical models. Standard statistical analyses were performed to assess quantitatively the accuracy of the predictive models. The evaluation also involved finite element analyses for determining the crack-opening profile often needed to perform leak-rate calculations.

  1. Human-modified temperatures induce species changes: Joint attribution.

    PubMed

    Root, Terry L; MacMynowski, Dena P; Mastrandrea, Michael D; Schneider, Stephen H

    2005-05-24

    Average global surface-air temperature is increasing. Contention exists over relative contributions by natural and anthropogenic forcings. Ecological studies attribute plant and animal changes to observed warming. Until now, temperature-species connections have not been statistically attributed directly to anthropogenic climatic change. Using modeled climatic variables and observed species data, which are independent of thermometer records and paleoclimatic proxies, we demonstrate statistically significant "joint attribution," a two-step linkage: human activities contribute significantly to temperature changes and human-changed temperatures are associated with discernible changes in plant and animal traits. Additionally, our analyses provide independent testing of grid-box-scale temperature projections from a general circulation model (HadCM3).

  2. Quantitative cancer risk assessment based on NIOSH and UCC epidemiological data for workers exposed to ethylene oxide.

    PubMed

    Valdez-Flores, Ciriaco; Sielken, Robert L; Teta, M Jane

    2010-04-01

    The most recent epidemiological data on individual workers in the NIOSH and updated UCC occupational studies have been used to characterize the potential excess cancer risks of environmental exposure to ethylene oxide (EO). In addition to refined analyses of the separate cohorts, power has been increased by analyzing the combined cohorts. In previous SMR analyses of the separate studies and the present analyses of the updated and pooled studies of over 19,000 workers, none of the SMRs for any combination of the 12 cancer endpoints and six sub-cohorts analyzed were statistically significantly greater than one including the ones of greatest previous interest: leukemia, lymphohematopoietic tissue, lymphoid tumors, NHL, and breast cancer. In our study, no evidence of a positive cumulative exposure-response relationship was found. Fitted Cox proportional hazards models with cumulative EO exposure do not have statistically significant positive slopes. The lack of increasing trends was corroborated by categorical analyses. Cox model estimates of the concentrations corresponding to a 1-in-a-million extra environmental cancer risk are all greater than approximately 1ppb and are more than 1500-fold greater than the 0.4ppt estimate in the 2006 EPA draft IRIS risk assessment. The reasons for this difference are identified and discussed. Copyright 2009 Elsevier Inc. All rights reserved.

  3. Treatment of missing data in follow-up studies of randomised controlled trials: A systematic review of the literature.

    PubMed

    Sullivan, Thomas R; Yelland, Lisa N; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-08-01

    After completion of a randomised controlled trial, an extended follow-up period may be initiated to learn about longer term impacts of the intervention. Since extended follow-up studies often involve additional eligibility restrictions and consent processes for participation, and a longer duration of follow-up entails a greater risk of participant attrition, missing data can be a considerable threat in this setting. As a potential source of bias, it is critical that missing data are appropriately handled in the statistical analysis, yet little is known about the treatment of missing data in extended follow-up studies. The aims of this review were to summarise the extent of missing data in extended follow-up studies and the use of statistical approaches to address this potentially serious problem. We performed a systematic literature search in PubMed to identify extended follow-up studies published from January to June 2015. Studies were eligible for inclusion if the original randomised controlled trial results were also published and if the main objective of extended follow-up was to compare the original randomised groups. We recorded information on the extent of missing data and the approach used to treat missing data in the statistical analysis of the primary outcome of the extended follow-up study. Of the 81 studies included in the review, 36 (44%) reported additional eligibility restrictions and 24 (30%) consent processes for entry into extended follow-up. Data were collected at a median of 7 years after randomisation. Excluding 28 studies with a time to event primary outcome, 51/53 studies (96%) reported missing data on the primary outcome. The median percentage of randomised participants with complete data on the primary outcome was just 66% in these studies. The most common statistical approach to address missing data was complete case analysis (51% of studies), while likelihood-based analyses were also well represented (25%). 
Sensitivity analyses around the missing data mechanism were rarely performed (25% of studies), and when they were, they often involved unrealistic assumptions about the mechanism. Despite missing data being a serious problem in extended follow-up studies, statistical approaches to addressing missing data were often inadequate. We recommend researchers clearly specify all sources of missing data in follow-up studies and use statistical methods that are valid under a plausible assumption about the missing data mechanism. Sensitivity analyses should also be undertaken to assess the robustness of findings to assumptions about the missing data mechanism.
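
The review's concern with complete-case analysis can be made concrete with a toy simulation: when the probability of a missing outcome depends on the outcome itself, the completers' mean is biased. All numbers below are invented; this merely illustrates the mechanism, not any reviewed study.

```python
import numpy as np

# Invented follow-up outcomes (higher = better) for 50,000 participants
rng = np.random.default_rng(7)
outcome = rng.normal(50, 10, 50_000)

# Make worse (lower) outcomes more likely to be missing at follow-up
p_missing = 1 / (1 + np.exp((outcome - 45) / 5))
observed = rng.random(outcome.size) > p_missing

full_mean = outcome.mean()               # the estimand
cc_mean = outcome[observed].mean()       # complete-case estimate
print(round(float(full_mean), 1), round(float(cc_mean), 1))
```

The complete-case mean is optimistic because the participants who drop out are systematically worse off; this is why the review recommends likelihood-based or multiple-imputation methods plus sensitivity analyses over complete-case analysis when data are unlikely to be missing completely at random.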

  4. A statistical anomaly indicates symbiotic origins of eukaryotic membranes

    PubMed Central

    Bansal, Suneyna; Mittal, Aditya

    2015-01-01

    Compositional analyses of nucleic acids and proteins have shed light on possible origins of living cells. In this work, rigorous compositional analyses of ∼5000 plasma membrane lipid constituents of 273 species in the three life domains (archaea, eubacteria, and eukaryotes) revealed a remarkable statistical paradox, indicating symbiotic origins of eukaryotic cells involving eubacteria. For lipids common to plasma membranes of the three domains, the number of carbon atoms in eubacteria was found to be similar to that in eukaryotes. However, mutually exclusive subsets of same data show exactly the opposite—the number of carbon atoms in lipids of eukaryotes was higher than in eubacteria. This statistical paradox, called Simpson's paradox, was absent for lipids in archaea and for lipids not common to plasma membranes of the three domains. This indicates the presence of interaction(s) and/or association(s) in lipids forming plasma membranes of eubacteria and eukaryotes but not for those in archaea. Further inspection of membrane lipid structures affecting physicochemical properties of plasma membranes provides the first evidence (to our knowledge) on the symbiotic origins of eukaryotic cells based on the “third front” (i.e., lipids) in addition to the growing compositional data from nucleic acids and proteins. PMID:25631820
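
Simpson's paradox, the statistical anomaly the abstract describes, is easy to reproduce with toy numbers: within each of two subsets group B has the larger mean, yet pooling the subsets reverses the comparison because the groups are sampled from the subsets in opposite proportions. The values below are invented and have nothing to do with actual lipid data.

```python
# Group B beats group A within each subset...
subset1_A, subset1_B = [2.0], [3.0] * 10
subset2_A, subset2_B = [8.0] * 10, [9.0]

def mean(xs):
    return sum(xs) / len(xs)

assert mean(subset1_B) > mean(subset1_A)   # B wins in subset 1
assert mean(subset2_B) > mean(subset2_A)   # B wins in subset 2

# ...but the pooled comparison reverses, because A is sampled mostly
# from the high-valued subset and B mostly from the low-valued one.
pooled_A = mean(subset1_A + subset2_A)
pooled_B = mean(subset1_B + subset2_B)
assert pooled_A > pooled_B
print(round(pooled_A, 2), round(pooled_B, 2))
```

The reversal appears only when the subset memberships are associated with the grouping, which is exactly the kind of interaction the abstract infers between eubacterial and eukaryotic membrane lipids.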

  5. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943

  6. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
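
The attenuation mechanism described in the two entries above can be shown in a few lines: truncating the sample directly on the pretest shrinks the pretest's variance and therefore the pretest-posttest correlation, so the covariate explains less outcome variance. The population correlation of 0.7 and the cutoff are arbitrary illustration values, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
pretest = rng.normal(0, 1, n)
# Posttest correlated 0.7 with pretest in the unrestricted population
posttest = 0.7 * pretest + rng.normal(0, np.sqrt(1 - 0.7**2), n)

r_full = np.corrcoef(pretest, posttest)[0, 1]

# Direct truncation: select only low pretest scorers (e.g. an
# intervention sample chosen for poor baseline performance)
low = pretest < -0.5
r_restricted = np.corrcoef(pretest[low], posttest[low])[0, 1]
print(round(float(r_full), 2), round(float(r_restricted), 2))
```

Because the covariate-adjusted error variance scales with 1 - r², the drop in correlation under direct truncation translates into the large sample-size increases (82%-155%) the paper reports, versus the milder penalty for indirect restriction on a correlated measure.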

  7. An Analysis Pipeline with Statistical and Visualization-Guided Knowledge Discovery for Michigan-Style Learning Classifier Systems

    PubMed Central

    Urbanowicz, Ryan J.; Granizo-Mackenzie, Ambrose; Moore, Jason H.

    2014-01-01

    Michigan-style learning classifier systems (M-LCSs) represent an adaptive and powerful class of evolutionary algorithms which distribute the learned solution over a sizable population of rules. However their application to complex real world data mining problems, such as genetic association studies, has been limited. Traditional knowledge discovery strategies for M-LCS rule populations involve sorting and manual rule inspection. While this approach may be sufficient for simpler problems, the confounding influence of noise and the need to discriminate between predictive and non-predictive attributes calls for additional strategies. Additionally, tests of significance must be adapted to M-LCS analyses in order to make them a viable option within fields that require such analyses to assess confidence. In this work we introduce an M-LCS analysis pipeline that combines uniquely applied visualizations with objective statistical evaluation for the identification of predictive attributes, and reliable rule generalizations in noisy single-step data mining problems. This work considers an alternative paradigm for knowledge discovery in M-LCSs, shifting the focus from individual rules to a global, population-wide perspective. We demonstrate the efficacy of this pipeline applied to the identification of epistasis (i.e., attribute interaction) and heterogeneity in noisy simulated genetic association data. PMID:25431544

  8. Morphometric or morpho-anatomal and genetic investigations highlight allopatric speciation in Western Mediterranean lagoons within the Atherina lagunae species (Teleostei, Atherinidae)

    NASA Astrophysics Data System (ADS)

    Trabelsi, M.; Maamouri, F.; Quignard, J.-P.; Boussaïd, M.; Faure, E.

    2004-12-01

    Current distribution of Atherina lagunae poses an interesting biogeographical problem as this species inhabits widely separate circum-Mediterranean lagoons. Statistical analyses of 87 biometric parameters and genetic variation in a portion of the cytochrome b gene were examined in four populations of A. lagunae from Tunisian and French lagoons. The results suggested a subdivision into two distinct Atherinid groups: one included the French lagoonal sand smelts and the second included the Tunisian ones. Tunisian lagoonal sand smelts were distinguished from the French ones by the lower number of lateral line scales, vertebrae, pectorals and first dorsal fin rays and the higher number of lower and total gillrakers. In addition, A. lagunae from Tunisian lagoons are characterised by short preorbital length, developed operculum, broad interorbital space, larger head, robust body and a relatively small first dorsal fin which is positioned backwards. In addition, intraspecific sequence variation in a portion of the cytochrome b gene was examined in 87 individuals from Tunisia and France. The high correlation between the results of the molecular phylogenetic tree and biometric statistical data analysis suggested that two different sibling species or at least sub-species or semi-species have colonised the lagoons. In addition, our analyses suggested that the evolution of A. lagunae probably occurred in two steps including marine sympatric speciation within the large Atherina boyeri complex and a post-Pleistocene colonisation of the lagoons.

  9. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    PubMed

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, but also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community.
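The abstract above describes QSAR-ML only at a high level. As an illustrative sketch, a QSAR-ML-like dataset entry with versioned descriptor references could be serialized and read back as below; all element and attribute names here are invented for illustration and are not the actual QSAR-ML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical QSAR-ML-like fragment: chemical structures plus a descriptor
# referenced by ontology ID, implementation, and version (names invented).
doc = """<qsarDataset>
  <structures>
    <structure id="mol1" smiles="CCO"/>
    <structure id="mol2" smiles="c1ccccc1"/>
  </structures>
  <descriptors>
    <descriptor ontologyRef="descriptor:XLogP" implementation="CDK" version="1.3.5"/>
  </descriptors>
</qsarDataset>"""

root = ET.fromstring(doc)
smiles = [s.get("smiles") for s in root.iter("structure")]
descriptor = root.find("descriptors/descriptor")
impl = (descriptor.get("implementation"), descriptor.get("version"))
```

Pinning both the implementation and its version in the dataset file is what makes the setup reproducible: two parties exchanging the file can regenerate identical descriptor values.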

  10. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    PubMed Central

    2010-01-01

    Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, but also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community. PMID:20591161

  11. Integrative Approaches to Understanding the Pathogenic Role of Genetic Variation in Rheumatic Diseases.

    PubMed

    Laufer, Vincent A; Chen, Jake Y; Langefeld, Carl D; Bridges, S Louis

    2017-08-01

    The use of high-throughput omics may help to understand the contribution of genetic variants to the pathogenesis of rheumatic diseases. We discuss the concept of missing heritability: that genetic variants do not explain the heritability of rheumatoid arthritis and related rheumatologic conditions. In addition to an overview of how integrative data analysis can lead to novel insights into mechanisms of rheumatic diseases, we describe statistical approaches to prioritizing genetic variants for future functional analyses. We illustrate how analyses of large datasets provide hope for improved approaches to the diagnosis, treatment, and prevention of rheumatic diseases. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use.
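AutoVAR itself is a Web application and is not reproduced here. To make the underlying VAR technique concrete, the following minimal sketch fits one equation of a first-order VAR by least squares on invented, noise-free toy series, so the known coefficients are recovered exactly.

```python
# Toy series following y_t = 0.5*y_(t-1) + 0.3*x_(t-1) and x_t = 0.8*x_(t-1)
# exactly (no noise), so least squares recovers the coefficients.
x, y = [1.0], [2.0]
for _ in range(20):
    x_next = 0.8 * x[-1]
    y_next = 0.5 * y[-1] + 0.3 * x[-1]
    x.append(x_next)
    y.append(y_next)

# Fit the y-equation: regress y_t on (y_(t-1), x_(t-1)) via 2x2 normal equations.
z1, z2, target = y[:-1], x[:-1], y[1:]
s11 = sum(a * a for a in z1)
s22 = sum(b * b for b in z2)
s12 = sum(a * b for a, b in zip(z1, z2))
s1y = sum(a * t for a, t in zip(z1, target))
s2y = sum(b * t for b, t in zip(z2, target))
det = s11 * s22 - s12 * s12
a_hat = (s1y * s22 - s12 * s2y) / det  # estimate of 0.5
b_hat = (s11 * s2y - s12 * s1y) / det  # estimate of 0.3
```

A real analysis would use a dedicated package (e.g. statsmodels' VAR, which also handles lag selection by AIC/BIC and Granger causality tests) rather than hand-rolled normal equations; the sketch only shows what one model in AutoVAR's search space looks like.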

  13. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  14. SU-F-T-551: Beam Hardening and Attenuation of Photon Beams Using Integral Quality Monitor in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casar, B; Carot, I Mendez; Peterlin, P

    2016-06-15

Purpose: The aim of this multi-centre study was to analyse the beam hardening effect of the Integral Quality Monitor (IQM) for high energy photon beams used in radiotherapy with linear accelerators. Generic values for the attenuation coefficient k(IQM) of the IQM system were additionally investigated. Methods: The beam hardening effect of the IQM system was studied for a set of standard nominal photon energies (6 MV–18 MV) and two flattening filter free (FFF) energies (6 MV FFF and 10 MV FFF). PDD curves were measured and analysed for various square radiation fields, with and without the IQM in place. Differences between PDD curves were statistically analysed through comparison of the respective PDD20,10 values. Attenuation coefficients k(IQM) were determined for the same range of photon energies. Results: Statistically significant differences in beam qualities for all evaluated high energy photon beams were found when comparing PDD20,10 values derived from PDD curves with and without the IQM in place. The significance of the beam hardening effect was statistically proven with high confidence (p < 0.01) for all analysed photon beams except 15 MV (p = 0.078), although relative differences in beam qualities were minimal, ranging from 0.1% to 0.5%. Attenuation of the IQM system showed negligible dependence on radiation field size. However, a clinically important dependence of k(IQM) on TPR20,10 was found: from 0.941 for 6 MV photon beams to 0.959 for 18 MV photon beams, with the highest uncertainty below 0.006. k(IQM) versus TPR20,10 values were tabulated and a polynomial equation for the determination of k(IQM) is suggested for clinical use. Conclusion: There was no clinically relevant beam hardening when the IQM system was mounted on linear accelerators. Consequently, no additional commissioning is needed for the IQM system regarding the determination of beam qualities. Generic values for k(IQM) are proposed and can be used as tray factors for the complete range of examined photon beam energies.

  15. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
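The paper's Excel/R worksheets are not reproduced here, but the core "what if" arithmetic, namely how a fixed effect's t statistic grows with sample size, can be sketched directly (the effect size and SD below are invented for illustration):

```python
import math

def t_stat(mean_diff, sd, n):
    # Two-sample t statistic for equal group sizes n and a common SD.
    return mean_diff / (sd * math.sqrt(2.0 / n))

t_25 = t_stat(0.5, 1.0, 25)    # fixed effect size, n = 25 per group
t_100 = t_stat(0.5, 1.0, 100)  # same effect, four times the sample size
```

Quadrupling the sample size doubles the t statistic for an identical effect, so the same observed difference can cross the conventional significance threshold purely through the choice of n; this is the "what if" the paper asks students to explore.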

  16. A comprehensive framework for functional diversity patterns of marine chromophytic phytoplankton using rbcL phylogeny

    PubMed Central

    Samanta, Brajogopal; Bhadury, Punyasloke

    2016-01-01

Marine chromophytes are a taxonomically diverse group of algae and contribute approximately half of the total oceanic primary production. To understand the global patterns of functional diversity of chromophytic phytoplankton, robust bioinformatics and statistical analyses, including deep phylogeny based on 2476 form ID rbcL gene sequences representing seven ecologically significant oceanographic ecoregions, were undertaken. In addition, 12 form ID rbcL clone libraries were generated and analyzed (148 sequences) from the Sundarbans Biosphere Reserve, representing the world’s largest mangrove ecosystem, as part of this study. Global phylogenetic analyses recovered 11 major clades of chromophytic phytoplankton in varying proportions, with several novel rbcL sequences in each of the seven targeted ecoregions. The majority of OTUs were found to be exclusive to each ecoregion, whereas some were shared by two or more ecoregions based on beta-diversity analysis. The present phylogenetic and bioinformatics analyses provide strong statistical support for the hypothesis that different oceanographic regimes harbor distinct and coherent groups of chromophytic phytoplankton. It has also been shown as part of this study that varying natural selection pressure on the form ID rbcL gene under different environmental conditions could lead to functional differences and overall fitness of chromophytic phytoplankton populations. PMID:26861415
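The beta-diversity claim above (OTUs exclusive to one ecoregion versus shared between ecoregions) boils down to simple set overlap; a minimal sketch with invented OTU labels:

```python
# Invented OTU inventories for two ecoregions.
region_a = {"OTU1", "OTU2", "OTU3", "OTU4"}
region_b = {"OTU3", "OTU4", "OTU5"}

shared = region_a & region_b       # OTUs found in both ecoregions
exclusive_a = region_a - region_b  # OTUs exclusive to region A
jaccard = len(shared) / len(region_a | region_b)  # similarity in [0, 1]
```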

  17. GSimp: A Gibbs sampler based left-censored missing value imputation approach for metabolomics studies

    PubMed Central

    Jia, Erik; Chen, Tianlu

    2018-01-01

Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered as missing not at random (MNAR). Improper data processing procedures for missing values will cause adverse impacts on subsequent statistical analyses. However, few imputation methods have been developed and applied to the situation of MNAR in the field of metabolomics. Thus, a practical left-censored missing value imputation method is urgently needed. We developed an iterative Gibbs sampler based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulation dataset using our imputation evaluation pipeline. The results show that GSimp outperforms other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large scale metabolomics datasets. The R code for GSimp, evaluation pipeline, tutorial, real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
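GSimp's Gibbs sampler is not reproduced here. As a much simpler stand-in for the same idea, non-detects below the detection limit can be imputed by drawing from a normal distribution truncated above at that limit; all values below are invented for illustration.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible
LOD = 0.5
data = [0.8, None, 1.2, None, 0.9, None]  # None marks a non-detect (< LOD)

def draw_truncated(mu, sigma, upper):
    # Rejection sampling from N(mu, sigma) truncated to values below `upper`.
    while True:
        v = random.gauss(mu, sigma)
        if v < upper:
            return v

observed = [v for v in data if v is not None]
mu = sum(observed) / len(observed)
sigma = (sum((v - mu) ** 2 for v in observed) / (len(observed) - 1)) ** 0.5
imputed = [v if v is not None else draw_truncated(mu, sigma, LOD) for v in data]
```

Unlike naive substitution (LOD/2 or zero), draws from a truncated distribution preserve variability below the limit; GSimp iterates a similar conditional draw within a Gibbs scheme that also uses the other variables.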

  18. Geographically Sourcing Cocaine’s Origin – Delineation of the Nineteen Major Coca Growing Regions in South America

    PubMed Central

    Mallette, Jennifer R.; Casale, John F.; Jordan, James; Morello, David R.; Beyer, Paul M.

    2016-01-01

    Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (2H and 18O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions. PMID:27006288
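The actual classification combines trace alkaloids, stable isotopes, and multivariate statistics. As a deliberately minimal stand-in, a nearest-centroid rule on hypothetical two-feature isotope profiles illustrates the region-assignment step; all region names and values are invented.

```python
import math

# Hypothetical per-region training centroids of (delta-2H, delta-18O) values.
centroids = {
    "Region A": (-180.0, 12.0),
    "Region B": (-150.0, 18.0),
    "Region C": (-200.0, 9.0),
}

def classify(sample):
    # Assign the sample to the region with the nearest centroid (Euclidean).
    return min(centroids, key=lambda r: math.dist(sample, centroids[r]))

origin = classify((-155.0, 17.0))
```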

  19. A marked correlation function for constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
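A density-marked correlation function is indeed cheap to compute: weight each object by a mark that depends on its local density, then compare the mean mark product of pairs at a given separation with the squared mean mark. A one-dimensional toy sketch follows; the positions, density estimates, and mark function are invented for illustration.

```python
# 1D toy positions with local density estimates; the mark down-weights
# objects in dense environments (one simple choice among many).
positions = [0.0, 1.0, 2.1, 3.0, 7.0, 7.5, 8.1]
density   = [1.0, 2.0, 2.5, 1.5, 4.0, 5.0, 4.5]
marks = [1.0 / (1.0 + rho) for rho in density]
mbar = sum(marks) / len(marks)

def marked_corr(rmin, rmax):
    # Mean mark product over pairs with separation in [rmin, rmax),
    # normalised by the squared mean mark (unity for uncorrelated marks).
    num, cnt = 0.0, 0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if rmin <= abs(positions[i] - positions[j]) < rmax:
                num += marks[i] * marks[j]
                cnt += 1
    return (num / cnt) / mbar ** 2 if cnt else float("nan")

M_close = marked_corr(0.0, 1.5)  # close pairs live in denser regions here
```

Because close pairs preferentially sit in dense environments, a density-suppressing mark pulls the statistic below unity at small separations; a screened modified-gravity model would shift this signature relative to General Relativity.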

  20. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
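The c-statistic reported throughout this abstract is the probability that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case; it can be computed by direct pair counting (the risks and outcomes below are invented for illustration):

```python
def c_statistic(risks, outcomes):
    # Fraction of case/non-case pairs where the case has the higher
    # predicted risk; ties count as half-concordant.
    pairs = concordant = 0.0
    for ri, oi in zip(risks, outcomes):
        for rj, oj in zip(risks, outcomes):
            if oi == 1 and oj == 0:
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs

risks    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1,   1,   0,   1,   0,   0]
auc = c_statistic(risks, outcomes)
```

A value of 0.5 is no better than chance and 1.0 is perfect discrimination, which is why the DCG/age model's 0.830 for death is described as superior to the age/gender model's 0.710.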

  1. Methodological approaches in analysing observational data: A practical example on how to address clustering and selection bias.

    PubMed

    Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael

    2017-11-01

Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. To introduce statistical approaches such as propensity score matching and mixed models into a representative real-world analysis. Additionally, the implementation in the statistical software R is presented to allow the results to be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse the data from two population samples, the sample produced by the matching method and the full sample. The different analysis methods in this article present different results but still point in the same direction. In our example, the estimate of the probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
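Genetic matching as used in the paper is more elaborate, but the basic matching step, pairing each treated unit with the closest available control on a score, can be sketched as follows (the unit IDs and scores are invented for illustration):

```python
# Invented unit IDs and scores on [0, 1] (a stand-in for propensity scores).
treated  = {"t1": 0.30, "t2": 0.62, "t3": 0.55}
controls = {"c1": 0.28, "c2": 0.90, "c3": 0.57, "c4": 0.33}

matches = {}
available = dict(controls)
for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
    # Greedy 1:1 nearest-neighbour matching without replacement;
    # note the processing order can affect the resulting pairs.
    cid = min(available, key=lambda c: abs(available[c] - score))
    matches[tid] = cid
    del available[cid]
```

Genetic matching generalises this by searching for covariate weights that optimise balance, rather than matching on a single fixed score.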

  2. Experiences in using DISCUS for visualizing human communication

    NASA Astrophysics Data System (ADS)

    Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta

    2000-02-01

In this paper, we present further improvements to the DISCUS software, which can be used to record and analyze the flow and content of business process simulation session discussions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility to use the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.

  3. Uranium hydrogeochemical and stream sediment reconnaissance of the Solomon NTMS quadrangle, Alaska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langfeldt, S.L.; Youngquist, C.A.; D'Andrea, R.F. Jr.

This report presents results of a Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) of the Solomon NTMS quadrangle, Alaska. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form through the Grand Junction Office Information System at Oak Ridge National Laboratory. Presented in this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume. These data are, however, available on the magnetic tape. Appendices A and B describe the sample media and summarize the analytical results for each medium. The data were subdivided by one of the Los Alamos National Laboratory (LANL) sorting programs of Zinkl and others into groups of stream sediment and stream water samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1000000 scale maps of pertinent elements have been included in this report. In addition, maps showing results of multivariate statistical analyses have been included. Further information about the HSSR program in general, or about the LANL portion of the program in particular, can be obtained in quarterly or semiannual program progress reports on open-file at DOE's Technical Library in Grand Junction. Information about the field and analytical procedures used by LANL during sample collection and analysis may be found in any HSSR data release prepared by the LANL and will not be included in this report.

  4. Statistical issues on the analysis of change in follow-up studies in dental research.

    PubMed

    Blance, Andrew; Tu, Yu-Kang; Baelum, Vibeke; Gilthorpe, Mark S

    2007-12-01

To provide an overview of the problems in study design and associated analyses of follow-up studies in dental research, particularly addressing three issues: treatment-baseline interactions; statistical power; and nonrandomization. Our previous work has shown that many studies purport an interaction between change (from baseline) and baseline values, which is often based on inappropriate statistical analyses. A priori power calculations are essential for randomized controlled trials (RCTs), but in the pre-test/post-test RCT design it is not well known to dental researchers that the choice of statistical method affects power, and that power is affected by treatment-baseline interactions. A common (good) practice in the analysis of RCT data is to adjust for baseline outcome values using ANCOVA, thereby increasing statistical power. However, an important requirement for ANCOVA is that there be no interaction between the groups and baseline outcome (i.e. effective randomization); the patient-selection process should not cause differences in mean baseline values across groups. This assumption is often violated for nonrandomized (observational) studies and the use of ANCOVA is thus problematic, potentially giving biased estimates, invoking Lord's paradox and leading to difficulties in the interpretation of results. Baseline interaction issues can be overcome by the use of statistical methods not widely practiced in dental research: Oldham's method and multilevel modelling; the latter is preferred for its greater flexibility to deal with more than one follow-up occasion as well as additional covariates. To illustrate these three key issues, hypothetical examples are considered from the fields of periodontology, orthodontics, and oral implantology. Caution needs to be exercised when considering the design and analysis of follow-up studies. ANCOVA is generally inappropriate for nonrandomized studies and causal inferences from observational data should be avoided.
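Oldham's method, mentioned above, replaces the naive correlation of change with baseline (which is distorted by mathematical coupling, since baseline appears in both variables) with the correlation of change against the average of baseline and follow-up. A minimal sketch with invented measurements:

```python
import math

def pearson(u, v):
    # Plain Pearson correlation coefficient.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

pre  = [4.0, 6.0, 5.0, 8.0, 7.0]
post = [5.0, 5.5, 6.0, 7.0, 7.5]
change  = [b - a for a, b in zip(pre, post)]
average = [(a + b) / 2 for a, b in zip(pre, post)]

r_naive  = pearson(change, pre)      # coupled: pre appears in both variables
r_oldham = pearson(change, average)  # Oldham's method
```

With these invented numbers the naive correlation is strongly negative, as regression to the mean alone would produce, while Oldham's version is attenuated toward zero.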

  5. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an implementation to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
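The key property used here is that the tau-th quantile minimizes the pinball (quantile) loss, so a quantile can be estimated even when many values are censored at the detection limit. A minimal grid-search sketch with invented data:

```python
def pinball_loss(q, data, tau):
    # Pinball (quantile) loss; minimised when q is the tau-th quantile.
    return sum(tau * (x - q) if x >= q else (1 - tau) * (q - x) for x in data)

# Three of seven values are "non-detects" coded at the detection limit 0.1,
# yet the median remains estimable.
data = [0.1, 0.1, 0.1, 0.4, 0.7, 0.9, 1.0]
grid = [i / 100 for i in range(101)]
median_est = min(grid, key=lambda q: pinball_loss(q, data, 0.5))
```

Quantile regression extends this from a single quantile to a regression line by minimising the same loss over intercept and slope, which is why it tolerates large fractions of non-detects that break ANOVA and ordinary least squares.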

  6. Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.

    PubMed

    Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas

    2011-12-16

The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings relying on p values, with their potential to report chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding efficacy. We simulate the therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of an effect size calculation for additive scales, which are proven mathematically. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach avoiding the error imposed by over-analysis of cumulative global scores from additive scales.

  7. Statistical process control: A feasibility study of the application of time-series measurement in early neurorehabilitation after acquired brain injury.

    PubMed

    Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias

    2017-01-31

    Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, and thus the evaluation of rehabilitation is complex. The use of time-series measurements is susceptible to statistical change due to process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
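The control-chart logic behind statistical process control can be sketched in a few lines: establish limits from a baseline phase, then flag follow-up measurements outside them. This is a simplified individuals chart with invented test scores; a fuller implementation would estimate sigma from moving ranges rather than the sample SD.

```python
# Simplified individuals control chart: limits from a baseline phase,
# then later points outside the limits signal a change in the process.
baseline = [48.0, 50.0, 52.0, 49.0, 51.0]
n = len(baseline)
centre = sum(baseline) / n
sigma = (sum((x - centre) ** 2 for x in baseline) / (n - 1)) ** 0.5
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma  # 3-sigma control limits

follow_up = [53.0, 55.0, 58.0, 60.0]
signals = [x for x in follow_up if x > ucl or x < lcl]
```

Points beyond the limits indicate a change beyond common-cause variation, which is how the study distinguishes rapid, steady, and stationary performers on the Paced Auditory Serial Addition Test.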

  8. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study.

    PubMed

    Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius

    2014-04-09

    Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
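For a single follow-up, the three analyses compared in the study reduce to three different estimators of the treatment effect. A deterministic toy sketch (invented scores, control group one point higher at baseline) shows how they can disagree under baseline imbalance:

```python
# Invented pre/post scores; the control group starts one point higher.
pre_t, post_t = [10.0, 12.0, 14.0], [15.0, 18.0, 21.0]
pre_c, post_c = [11.0, 13.0, 15.0], [14.0, 17.0, 20.0]

def mean(v):
    return sum(v) / len(v)

anova = mean(post_t) - mean(post_c)  # post scores only, ignores baseline
csa = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))  # change scores

# ANCOVA: adjust the post difference by the pooled within-group slope
# of post on pre, times the baseline imbalance.
num = sum((x - mean(pre_t)) * (y - mean(post_t)) for x, y in zip(pre_t, post_t)) \
    + sum((x - mean(pre_c)) * (y - mean(post_c)) for x, y in zip(pre_c, post_c))
den = sum((x - mean(pre_t)) ** 2 for x in pre_t) \
    + sum((x - mean(pre_c)) ** 2 for x in pre_c)
b = num / den
ancova = anova - b * (mean(pre_t) - mean(pre_c))
```

The three estimates differ (ANOVA, CSA, and ANCOVA each weight the baseline imbalance differently: not at all, fully, and by the pre-post slope, respectively), which is exactly the behaviour the simulation study quantifies across scenarios.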

  9. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study

    PubMed Central

    2014-01-01

    Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304

  10. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  11. Reproducibility of ZrO2-based freeze casting for biomaterials.

    PubMed

    Naleway, Steven E; Fickas, Kate C; Maker, Yajur N; Meyers, Marc A; McKittrick, Joanna

    2016-04-01

    The processing technique of freeze casting has been intensely researched for its potential to create porous scaffold and infiltrated composite materials for biomedical implants and structural materials. However, in order for this technique to be employed medically or commercially, it must be able to reliably produce materials in great quantities with similar microstructures and properties. Here we investigate the reproducibility of the freeze casting process by independently fabricating three sets of eight ZrO2-epoxy composite scaffolds with the same processing conditions but varying solid loading (10, 15 and 20 vol.%). Statistical analyses (One-way ANOVA and Tukey's HSD tests) performed on measurements of the microstructural dimensions of these composite scaffold sets show that, while the majority of microstructures are similar, in all cases the composite scaffolds display statistically significant variability. In addition, the composite scaffolds were mechanically compressed and the results statistically analyzed. As with the microstructures, most of the resulting mechanical properties were similar across scaffolds, yet almost all displayed statistically significant variability. These results suggest that additional research to improve control of the freeze casting technique is required before scaffolds and composite scaffolds can reliably be reproduced for commercial or medical applications. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. A catalogue of [Fe/H] determinations

    NASA Astrophysics Data System (ADS)

    Cayrel de Strobel, G.; Bentolila, C.; Hauck, B.; Curchod, A.

    1980-09-01

    A catalog of iron/hydrogen abundance ratios for 628 stars is compiled based on 1109 published values. The catalog consists of (1) a table of absolute iron abundance determinations in the solar photosphere as compiled by Blackwell (1974); (2) the iron/hydrogen abundances of 628 stars in the form of logarithmic differences between iron abundances in the given star and a standard star, obtained from analyses of high-dispersion spectra as well as useful stellar spectroscopic and photometric parameters; and (3) indications of the mean dispersion and wavelength interval used in the analyses. In addition, statistics on the distributions of the number of determinations per star and the apparent magnitudes and spectral types of the stars are presented.

  13. Identity-by-descent analyses for measuring population dynamics and selection in recombining pathogens.

    PubMed

    Henden, Lyndal; Lee, Stuart; Mueller, Ivo; Barry, Alyssa; Bahlo, Melanie

    2018-05-01

    Identification of genomic regions that are identical by descent (IBD) has proven useful for human genetic studies where analyses have led to the discovery of familial relatedness and fine-mapping of disease critical regions. Unfortunately, however, IBD analyses have been underutilized in analysis of other organisms, including human pathogens. This is in part due to the lack of statistical methodologies for non-diploid genomes in addition to the added complexity of multiclonal infections. As such, we have developed an IBD methodology, called isoRelate, for analysis of haploid recombining microorganisms in the presence of multiclonal infections. Using the inferred IBD status at genomic locations, we have also developed a novel statistic for identifying loci under positive selection and propose relatedness networks as a means of exploring shared haplotypes within populations. We evaluate the performance of our methodologies for detecting IBD and selection, including comparisons with existing tools, then perform an exploratory analysis of whole genome sequencing data from a global Plasmodium falciparum dataset of more than 2500 genomes. This analysis identifies Southeast Asia as having many highly related isolates, possibly as a result of both reduced transmission from intensified control efforts and population bottlenecks following the emergence of antimalarial drug resistance. Many signals of selection are also identified, most of which overlap genes that are known to be associated with drug resistance, in addition to two novel signals observed in multiple countries that have yet to be explored in detail. Additionally, we investigate relatedness networks over the selected loci and determine that one of these sweeps has spread between continents while the other has arisen independently in different countries. IBD analysis of microorganisms using isoRelate can be used for exploring population structure, positive selection and haplotype distributions, and will be a valuable tool for monitoring disease control and elimination efforts of many diseases.

  14. Statistical Exposé of a Multiple-Compartment Anaerobic Reactor Treating Domestic Wastewater.

    PubMed

    Pfluger, Andrew R; Hahn, Martha J; Hering, Amanda S; Munakata-Marr, Junko; Figueroa, Linda

    2018-06-01

    Mainstream anaerobic treatment of domestic wastewater is a promising energy-generating treatment strategy; however, such reactors operated in colder regions are not well characterized. Performance data from a pilot-scale, multiple-compartment anaerobic reactor taken over 786 days were subjected to comprehensive statistical analyses. Results suggest that chemical oxygen demand (COD) was a poor proxy for organics in anaerobic systems as oxygen demand from dissolved inorganic material, dissolved methane, and colloidal material influence dissolved and particulate COD measurements. Additionally, univariate and functional boxplots were useful in visualizing variability in contaminant concentrations and identifying statistical outliers. Further, significantly different dissolved organic removal and methane production was observed between operational years, suggesting that anaerobic reactor systems may not achieve steady-state performance within one year. Last, modeling multiple-compartment reactor systems will require data collected over at least two years to capture seasonal variations of the major anaerobic microbial functions occurring within each reactor compartment.

  15. A phylogenetic transform enhances analysis of compositional microbiota data.

    PubMed

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-02-15

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
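    The isometric log-ratio idea underlying PhILR can be sketched for a toy composition. The sequential binary partition below is an arbitrary assumption standing in for the phylogeny, and the code is an independent illustration, not the PhILR implementation.

```python
import numpy as np

def ilr(composition, partitions):
    """Isometric log-ratio transform via a sequential binary partition.

    partitions: list of (numerator_indices, denominator_indices) tuples,
    one balance per internal node of the partition tree. PhILR chooses
    these from the phylogeny; here the tree is a toy assumption.
    """
    x = np.asarray(composition, dtype=float)
    x = x / x.sum()  # closure: work on relative abundances
    balances = []
    for num, den in partitions:
        r, s = len(num), len(den)
        coef = np.sqrt(r * s / (r + s))
        gm_num = np.exp(np.log(x[num]).mean())  # geometric means of each side
        gm_den = np.exp(np.log(x[den]).mean())
        balances.append(coef * np.log(gm_num / gm_den))
    return np.array(balances)

# Toy 3-taxon community; the two balances form an orthonormal ILR basis.
sbp = [([0], [1]), ([0, 1], [2])]
z = ilr([10, 10, 80], sbp)
```

Because the transform first applies closure, the output is invariant to the total count (so `ilr([1, 1, 8], sbp)` gives the same balances), which is exactly the property that lets off-the-shelf statistical tools be applied to compositional data.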

  16. Tipping points in the arctic: eyeballing or statistical significance?

    PubMed

    Carstensen, Jacob; Weydmann, Agata

    2012-02-01

    Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, an early warning signal, were observed as early as 1990. We recommend employing the proposed methods more systematically for analysing tipping points to document effects of climate change in the Arctic.
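    The recommendation to track random fluctuations alongside the mean can be illustrated with a short sketch: the standard deviation of first differences in a sliding window rises as fluctuations grow. The toy series and window length are assumptions, not the paper's sea-ice analysis.

```python
import numpy as np

def rolling_sd(series, window=5):
    """Standard deviation of first differences in a sliding window.

    Differencing crudely removes a slow trend, so the statistic tracks
    the random fluctuations rather than the mean decline; a sustained
    rise in it is the kind of early-warning signal discussed above.
    """
    diffs = np.diff(np.asarray(series, dtype=float))
    return np.array([diffs[i:i + window].std(ddof=1)
                     for i in range(len(diffs) - window + 1)])

# Toy "sea-ice-like" series: a steady decline plus fluctuations whose
# amplitude grows over time (the hypothesized loss of resilience).
n = 40
t = np.arange(n)
series = 10.0 - 0.05 * t + np.linspace(0.1, 1.0, n) * np.sin(2.5 * t)
sd = rolling_sd(series)
```

In this toy example the rolling standard deviation increases well before any structural break in the mean would be flagged, mirroring the 1990-versus-1996 distinction in the abstract.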

  17. NeuroVault.org: A repository for sharing unthresholded statistical maps, parcellations, and atlases of the human brain.

    PubMed

    Gorgolewski, Krzysztof J; Varoquaux, Gael; Rivera, Gabriel; Schwartz, Yannick; Sochat, Vanessa V; Ghosh, Satrajit S; Maumet, Camille; Nichols, Thomas E; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S; Poldrack, Russell A

    2016-01-01

    NeuroVault.org is dedicated to storing outputs of analyses in the form of statistical maps, parcellations and atlases, a unique strategy that contrasts with most neuroimaging repositories that store raw acquisition data or stereotaxic coordinates. Such maps are indispensable for performing meta-analyses, validating novel methodology, and deciding on precise outlines for regions of interest (ROIs). NeuroVault is open to maps derived from both healthy and clinical populations, as well as from various imaging modalities (sMRI, fMRI, EEG, MEG, PET, etc.). The repository uses modern web technologies such as interactive web-based visualization, cognitive decoding, and comparison with other maps to provide researchers with efficient, intuitive tools to improve the understanding of their results. Each dataset and map is assigned a permanent Universal Resource Locator (URL), and all of the data is accessible through a REST Application Programming Interface (API). Additionally, the repository supports the NIDM-Results standard and has the ability to parse outputs from popular FSL and SPM software packages to automatically extract relevant metadata. This ease of use, modern web-integration, and pioneering functionality holds promise to improve the workflow for making inferences about and sharing whole-brain statistical maps. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

    observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or...indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for...6 Coding _________________________________________________________________ 7 Statistical

  19. Reduction of Fasting Blood Glucose and Hemoglobin A1c Using Oral Aloe Vera: A Meta-Analysis.

    PubMed

    Dick, William R; Fletcher, Emily A; Shah, Sachin A

    2016-06-01

    Diabetes mellitus is a global epidemic and one of the leading causes of morbidity and mortality. Additional medications that are novel, affordable, and efficacious are needed to treat this rampant disease. This meta-analysis was performed to ascertain the effectiveness of oral aloe vera consumption on the reduction of fasting blood glucose (FBG) and hemoglobin A1c (HbA1c). PubMed, CINAHL, Natural Medicines Comprehensive Database, and Natural Standard databases were searched. Studies of aloe vera's effect on FBG, HbA1c, homeostasis model assessment-estimated insulin resistance (HOMA-IR), fasting serum insulin, fructosamine, and oral glucose tolerance test (OGTT) in prediabetic and diabetic populations were examined. After data extraction, the parameters of FBG and HbA1c had appropriate data for meta-analyses. Extracted data were verified and then analyzed by StatsDirect Statistical Software. Reductions of FBG and HbA1c were reported as the weighted mean differences from baseline, calculated by a random-effects model with 95% confidence intervals. Subgroup analyses to determine clinical and statistical heterogeneity were also performed. Publication bias was assessed by using the Egger bias statistic. Nine studies were included in the FBG parameter (n = 283); 5 of these studies included HbA1c data (n = 89). Aloe vera decreased FBG by 46.6 mg/dL (p < 0.0001) and HbA1c by 1.05% (p = 0.004). Significant reductions of both endpoints were maintained in all subgroup analyses. Additionally, the data suggest that patients with an FBG ≥200 mg/dL may see a greater benefit. A mean FBG reduction of 109.9 mg/dL was observed in this population (p ≤ 0.0001). The Egger statistic showed publication bias with FBG but not with HbA1c (p = 0.010 and p = 0.602, respectively). These results support the use of oral aloe vera for significantly reducing FBG (46.6 mg/dL) and HbA1c (1.05%). Further clinical studies that are more robust and better controlled are warranted to further explore these findings.

  20. The effect of a senior jazz dance class on static balance in healthy women over 50 years of age: a pilot study.

    PubMed

    Wallmann, Harvey W; Gillis, Carrie B; Alpert, Patricia T; Miller, Sally K

    2009-01-01

    The purpose of this pilot study is to assess the impact of a senior jazz dance class on static balance for healthy women over 50 years of age using the NeuroCom Smart Balance Master System (Balance Master). A total of 12 healthy women aged 54-88 years completed a 15-week jazz dance class which they attended 1 time per week for 90 min per class. Balance data were collected using the Sensory Organization Test (SOT) at baseline (pre), at 7 weeks (mid), and after 15 weeks (post). An equilibrium score measuring postural sway was calculated for each of six different conditions. The composite equilibrium score (all six conditions integrated to 1 score) was used as an overall measure of balance. Repeated measures analyses of variance (ANOVAs) were used to compare the means of each participant's SOT composite equilibrium score in addition to the equilibrium score for each individual condition (1-6) across the 3 time points (pre, mid, post). There was a statistically significant difference among the means, p < .0005. Pairwise (Bonferroni) post hoc analyses revealed the following statistically significant findings for SOT composite equilibrium scores for the pre (67.33 ± 10.43), mid (75.25 ± 6.97), and post (79.00 ± 4.97) measurements: pre-mid (p = .008); pre-post (p < .0005); mid-post (p = .033). In addition, correlational statistics were used to determine any relationship between SOT scores and age. Results indicated that administration of a 15-week jazz dance class 1 time per week was beneficial in improving static balance as measured by the Balance Master SOT.

  1. AMAS: a fast tool for alignment manipulation and computing of summary statistics.

    PubMed

    Borowiec, Marek L

    2016-01-01

    The amount of data used in phylogenetics has grown explosively in the recent years and many phylogenies are inferred with hundreds or even thousands of loci and many taxa. These modern phylogenomic studies often entail separate analyses of each of the loci in addition to multiple analyses of subsets of genes or concatenated sequences. Computationally efficient tools for handling and computing properties of thousands of single-locus or large concatenated alignments are needed. Here I present AMAS (Alignment Manipulation And Summary), a tool that can be used either as a stand-alone command-line utility or as a Python package. AMAS works on amino acid and nucleotide alignments and combines capabilities of sequence manipulation with a function that calculates basic statistics. The manipulation functions include conversions among popular formats, concatenation, extracting sites and splitting according to a pre-defined partitioning scheme, creation of replicate data sets, and removal of taxa. The statistics calculated include the number of taxa, alignment length, total count of matrix cells, overall number of undetermined characters, percent of missing data, AT and GC contents (for DNA alignments), count and proportion of variable sites, count and proportion of parsimony informative sites, and counts of all characters relevant for a nucleotide or amino acid alphabet. AMAS is particularly suitable for very large alignments with hundreds of taxa and thousands of loci. It is computationally efficient, utilizes parallel processing, and performs better at concatenation than other popular tools. AMAS is a Python 3 program that relies solely on Python's core modules and needs no additional dependencies. AMAS source code and manual can be downloaded from http://github.com/marekborowiec/AMAS/ under GNU General Public License.
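    The summary statistics listed above are straightforward to compute; the following toy reimplementation (not AMAS code) shows variable and parsimony-informative site counts and percent missing data for a small hypothetical alignment.

```python
# Hedged sketch of the kind of per-alignment summaries AMAS reports;
# an independent toy reimplementation, not code from the AMAS package.

def alignment_stats(seqs, missing="-?NnXx"):
    """seqs: dict of taxon -> aligned sequence, all of equal length."""
    taxa = list(seqs)
    length = len(seqs[taxa[0]])
    cells = len(taxa) * length
    miss = sum(seq.count(c) for seq in seqs.values() for c in set(missing))
    variable = parsimony_informative = 0
    for col in zip(*seqs.values()):  # iterate over alignment columns
        states = [c for c in col if c not in missing]
        counts = {s: states.count(s) for s in set(states)}
        if len(counts) > 1:
            variable += 1
            # parsimony-informative: at least 2 states each seen >= 2 times
            if sum(1 for n in counts.values() if n >= 2) >= 2:
                parsimony_informative += 1
    return {"taxa": len(taxa), "length": length,
            "missing_percent": 100.0 * miss / cells,
            "variable_sites": variable,
            "parsimony_informative_sites": parsimony_informative}

# Hypothetical 4-taxon, 6-column alignment with one gap character.
aln = {"t1": "ACGTAC", "t2": "ACGTAC", "t3": "ATG-TC", "t4": "ATGTTA"}
stats = alignment_stats(aln)
```

AMAS computes these per locus across thousands of alignments in parallel; the point here is only to make the definitions of the reported statistics concrete.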

  2. Improving qPCR telomere length assays: Controlling for well position effects increases statistical power.

    PubMed

    Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey

    2015-01-01

    Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot terminal restriction fragment (TRF) method of TL measurement, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring are examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well position effects. Since additional information can be gleaned from well position corrections, rerunning analyses of previous results with well position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
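    The benefit of correcting systematic well-position effects can be sketched with simulated plates: subtracting each well's estimated offset brings measurements closer to the underlying truth. The additive-offset model, effect sizes and plate dimensions below are illustrative assumptions, not the authors' exact correction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy qPCR-style data: each sample's measured T/S ratio = true value +
# a systematic offset for its plate well position + random noise.
n_plates, n_wells = 30, 96
true = rng.normal(1.0, 0.2, size=(n_plates, n_wells))
well_effect = rng.normal(0.0, 0.15, size=n_wells)  # systematic per-well bias
measured = true + well_effect + rng.normal(0.0, 0.05, size=(n_plates, n_wells))

# Estimate each well's offset as its mean deviation from the plate means,
# then subtract it out (a simple additive correction).
plate_mean = measured.mean(axis=1, keepdims=True)
well_offset = (measured - plate_mean).mean(axis=0)
corrected = measured - well_offset

def rmse(a, b):
    return float(np.sqrt(((a - b) ** 2).mean()))

raw_err = rmse(measured, true)
corr_err = rmse(corrected, true)
```

With many plates, the per-well averaging isolates the systematic component, so the corrected values track the true values more closely, which is the mechanism behind the reported gain in statistical power.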

  3. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information

    PubMed Central

    Kyparissiadis, Antonios; van Heuven, Walter J. B.; Pitchford, Nicola J.; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as combination of several metrics to investigate linguistic properties of the Greek language are highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/. PMID:28231303

  4. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    PubMed

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as combination of several metrics to investigate linguistic properties of the Greek language are highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.

  5. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.
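    A standard way to quantify the spatial autocorrelation discussed above is Moran's I; the sketch below uses a toy four-site network with a symmetric neighbour matrix (simple adjacency neighbours, not the stream-network covariance structures the paper develops).

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation: positive for clustered values,
    near 0 for spatially random values, negative for dispersed values."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Four sites on a line; neighbours share an edge (toy symmetric weights).
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
smooth = [1.0, 2.0, 3.0, 4.0]        # spatially smooth -> positive I
alternating = [1.0, 4.0, 1.0, 4.0]   # alternating -> negative I
```

Nonindependence of this kind is exactly what the spatial statistical network models account for; ignoring it inflates the effective sample size assumed by conventional tests.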

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shettel, D.L. Jr.; Langfeldt, S.L.; Youngquist, C.A.

    This report presents a Hydrogeochemical and Stream Sediment Reconnaissance of the Christian NTMS Quadrangle, Alaska. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. These machine-readable data, as well as quarterly or semiannual program progress reports containing further information on the HSSR program in general, or on the Los Alamos National Laboratory portion of the program in particular, are available from DOE's Technical Library at its Grand Junction Area Office. Presented in this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume; these data are, however, available on the magnetic tape. Appendices A through D describe the sample media and summarize the analytical results for each medium. The data have been subdivided by one of the Los Alamos National Laboratory sorting programs of Zinkl and others (1981a) into groups of stream-sediment, lake-sediment, stream-water, lake-water, and ground-water samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1,000,000 scale maps of pertinent elements have been included in this report. Also included are maps showing results of multivariate statistical analyses.

  7. General cognitive principles for learning structure in time and space.

    PubMed

    Goldstein, Michael H; Waterfall, Heidi R; Lotem, Arnon; Halpern, Joseph Y; Schwade, Jennifer A; Onnis, Luca; Edelman, Shimon

    2010-06-01

    How are hierarchically structured sequences of objects, events or actions learned from experience and represented in the brain? When several streams of regularities present themselves, which will be learned and which ignored? Can statistical regularities take effect on their own, or are additional factors such as behavioral outcomes expected to influence statistical learning? Answers to these questions are starting to emerge through a convergence of findings from naturalistic observations, behavioral experiments, neurobiological studies, and computational analyses and simulations. We propose that a small set of principles are at work in every situation that involves learning of structure from patterns of experience and outline a general framework that accounts for such learning. (c) 2010 Elsevier Ltd. All rights reserved.

  8. Correlation between Na/K ratio and electron densities in blood samples of breast cancer patients.

    PubMed

    Topdağı, Ömer; Toker, Ozan; Bakırdere, Sezgin; Bursalıoğlu, Ertuğrul Osman; Öz, Ersoy; Eyecioğlu, Önder; Demir, Mustafa; İçelli, Orhan

    2018-05-31

    The main purpose of this study was to investigate the relationship between the electron density and the Na/K ratio, which has an important role in breast cancer. Sodium and potassium concentrations in blood samples were determined with inductively coupled plasma-atomic emission spectrometry. Electron density values of blood samples were determined via ZXCOM. Statistical analyses were performed for electron densities and the Na/K ratio, including Kolmogorov-Smirnov normality tests, Spearman's rank correlation test and the Mann-Whitney U test. It was found that the electron densities significantly differ between the control and breast cancer groups. In addition, a statistically significant positive correlation was found between the electron density and the Na/K ratio in the breast cancer group.

  9. Uncertainties in radon related to house-specific factors and proximity to geological boundaries in England.

    PubMed

    Hunter, Nezahat; Muirhead, Colin R; Miles, Jon C H; Appleton, J Donald

    2009-08-01

    Data collected as a part of a survey on radon concentrations from about 40 000 dwellings in England for six contrasting geological units were analysed to evaluate the impact of house-specific factors (building characteristics and construction dates) and of proximity to geological boundaries. After adjusting for temperature and outdoor radon, geological unit, house type, double glazing and date of building were found to have a statistically significant influence on indoor radon concentrations and explained about 29% of the total variation between dwellings in logarithmically transformed radon values. In addition, there were statistically significant differences in radon concentrations according to categories of proximity to geological boundaries for most of the geological units, but no consistent pattern could be detected.

  10. A risk-based statistical investigation of the quantification of polymorphic purity of a pharmaceutical candidate by solid-state 19F NMR.

    PubMed

    Barry, Samantha J; Pham, Tran N; Borman, Phil J; Edwards, Andrew J; Watson, Simon A

    2012-01-27

    The DMAIC (Define, Measure, Analyse, Improve and Control) framework and associated statistical tools have been applied to both identify and reduce variability observed in a quantitative (19)F solid-state NMR (SSNMR) analytical method. The method had been developed to quantify levels of an additional polymorph (Form 3) in batches of an active pharmaceutical ingredient (API), where Form 1 is the predominant polymorph. In order to validate analyses of the polymorphic form, a single batch of API was used as a standard each time the method was used. The level of Form 3 in this standard was observed to gradually increase over time, the effect not being immediately apparent due to method variability. In order to determine the cause of this unexpected increase and to reduce method variability, a risk-based statistical investigation was performed to identify potential factors which could be responsible for these effects. Factors identified by the risk assessment were investigated using a series of designed experiments to gain a greater understanding of the method. The increase of the level of Form 3 in the standard was primarily found to correlate with the number of repeat analyses, an effect not previously reported in SSNMR literature. Differences in data processing (phasing and linewidth) were found to be responsible for the variability in the method. After implementing corrective actions the variability was reduced such that the level of Form 3 was within an acceptable range of ±1% ww(-1) in fresh samples of API. Copyright © 2011. Published by Elsevier B.V.

  11. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of the observations, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis has not improved in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. Quantitative genetic analysis of the body composition and blood pressure association in two ethnically diverse populations.

    PubMed

    Ghosh, Sudipta; Dosaev, Tasbulat; Prakash, Jai; Livshits, Gregory

    2017-04-01

    The major aim of this study was to conduct comparative quantitative-genetic analysis of the body composition (BCP) and somatotype (STP) variation, as well as their correlations with blood pressure (BP), in two ethnically, culturally and geographically different populations: the Santhal, an indigenous ethnic group from India, and the Chuvash, an indigenous population from Russia. Correspondingly, two pedigree-based samples were collected, comprising 1,262 Santhal and 1,558 Chuvash individuals, respectively. At the first stage of the study, descriptive statistics and a series of univariate regression analyses were calculated. Finally, multiple and multivariate regression (MMR) analyses, with BP measurements as dependent variables and age, sex, BCP and STP as independent variables, were carried out in each sample separately. The significant and independent covariates of BP were identified and used for re-examination in pedigree-based variance decomposition analysis. Despite clear and significant differences between the populations in BCP/STP, both Santhal and Chuvash were found to be predominantly mesomorphic irrespective of their sex. According to the MMR analyses, variation of BP significantly depended on age and the mesomorphic component in both samples, and in addition on sex, ectomorphy and fat mass index in the Santhal sample and on fat-free mass index in the Chuvash sample, respectively. An additive genetic component contributes to a substantial proportion of blood pressure and body composition variance. Variance component analysis, in addition to the above-mentioned results, suggests that additive genetic factors significantly influence BP and BCP/STP associations. © 2017 Wiley Periodicals, Inc.

  13. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  14. Self-reported unemployment status and recession: An analysis on the Italian population with and without mental health problems

    PubMed Central

    Starace, Fabrizio; Mungai, Francesco; Sarti, Elena; Addabbo, Tindara

    2017-01-01

    Purpose During economic recession, people with mental health problems have a higher risk of losing their jobs. This paper analyses the issue by considering the Italian rates of unemployment amongst individuals with and without mental health problems in 2005 and 2013, that is, prior to and during the economic crisis. Methods We used data from the National surveys on “Health conditions and use of health services” carried out by the Italian National Institute of Statistics (ISTAT) for the years 2005 and 2013. The surveys collected information on the health status and socioeconomic conditions of the Italian population. Self-reported unemployment status was analysed amongst individuals with and without reported mental health problems. In addition, descriptive statistics were performed in order to detect possible differences in the risk of unemployment within different regional contexts characterised by different socio-economic conditions. Results The recession increased disparities in unemployment rates between people with and without mental health problems. Regardless of the presence of mental health problems, young people were more likely to be unemployed. Among people who reported mental health problems, males were more likely to be unemployed than females. People with a low education level were more likely to be unemployed, particularly during the recession and in the presence of mental health problems. Changes in unemployment rates due to the crisis showed different patterns across different regions of the country. Conclusions These analyses confirm that in periods of economic crisis, people with mental health problems are at risk of experiencing exclusion from the labour market. In addition, the impact is even worse within the group with low education and younger age. These findings emphasise the importance of specific interventions aimed at promoting labour market participation and reintegration for people with mental health problems. PMID:28376098

  15. Self-reported unemployment status and recession: An analysis on the Italian population with and without mental health problems.

    PubMed

    Starace, Fabrizio; Mungai, Francesco; Sarti, Elena; Addabbo, Tindara

    2017-01-01

    During economic recession, people with mental health problems have a higher risk of losing their jobs. This paper analyses the issue by considering the Italian rates of unemployment amongst individuals with and without mental health problems in 2005 and 2013, that is, prior to and during the economic crisis. We used data from the National surveys on "Health conditions and use of health services" carried out by the Italian National Institute of Statistics (ISTAT) for the years 2005 and 2013. The surveys collected information on the health status and socioeconomic conditions of the Italian population. Self-reported unemployment status was analysed amongst individuals with and without reported mental health problems. In addition, descriptive statistics were performed in order to detect possible differences in the risk of unemployment within different regional contexts characterised by different socio-economic conditions. The recession increased disparities in unemployment rates between people with and without mental health problems. Regardless of the presence of mental health problems, young people were more likely to be unemployed. Among people who reported mental health problems, males were more likely to be unemployed than females. People with a low education level were more likely to be unemployed, particularly during the recession and in the presence of mental health problems. Changes in unemployment rates due to the crisis showed different patterns across different regions of the country. These analyses confirm that in periods of economic crisis, people with mental health problems are at risk of experiencing exclusion from the labour market. In addition, the impact is even worse within the group with low education and younger age. These findings emphasise the importance of specific interventions aimed at promoting labour market participation and reintegration for people with mental health problems.

  16. Increased left hemisphere impairment in high-functioning autism: a tract based spatial statistics study.

    PubMed

    Perkins, Thomas John; Stokes, Mark Andrew; McGillivray, Jane Anne; Mussap, Alexander Julien; Cox, Ivanna Anne; Maller, Jerome Joseph; Bittar, Richard Garth

    2014-11-30

    There is evidence emerging from Diffusion Tensor Imaging (DTI) research that autism spectrum disorders (ASD) are associated with greater impairment in the left hemisphere. Although this has been quantified with volumetric region-of-interest analyses, it has yet to be tested with white matter integrity analysis. In the present study, tract based spatial statistics was used to contrast the white matter integrity of 12 participants with high-functioning autism or Asperger's syndrome (HFA/AS) with that of 12 typically developing individuals. Fractional Anisotropy (FA) was examined, in addition to axial, radial and mean diffusivity (AD, RD and MD). In the left hemisphere, participants with HFA/AS demonstrated significantly reduced FA in predominantly thalamic and fronto-parietal pathways and increased RD. Symmetry analyses confirmed that in the HFA/AS group, white matter disturbance was significantly greater in the left compared to the right hemisphere. These findings contribute to a growing body of literature suggestive of reduced FA in ASD, and provide preliminary evidence for RD impairments in the left hemisphere. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Additive Interaction between Heterogeneous Environmental Quality Domains (Air, Water, Land, Sociodemographic, and Built Environment) on Preterm Birth.

    PubMed

    Grabich, Shannon C; Rappazzo, Kristen M; Gray, Christine L; Jagai, Jyotsna S; Jian, Yun; Messer, Lynne C; Lobdell, Danelle T

    2016-01-01

    Environmental exposures often occur in tandem; however, epidemiological research often focuses on singular exposures. Statistical interactions among broad, well-characterized environmental domains have not yet been evaluated in association with health. We address this gap by conducting a county-level cross-sectional analysis of interactions between Environmental Quality Index (EQI) domain indices on preterm birth in the United States from 2000 to 2005. The EQI, a county-level index constructed for the 2000-2005 time period, was constructed from five domain-specific indices (air, water, land, built, and sociodemographic) using principal component analyses. County-level preterm birth rates ( n  = 3141) were estimated using live births from the National Center for Health Statistics. Linear regression was used to estimate prevalence differences (PDs) and 95% confidence intervals (CIs) comparing worse environmental quality to the better quality for each model for (a) each individual domain main effect, (b) the interaction contrast, and (c) the two main effects plus interaction effect (i.e., the "net effect") to show departure from additivity for all U.S. counties. Analyses were also performed for subgroupings by four urban/rural strata. We found the suggestion of antagonistic interactions but no synergism, along with several purely additive (i.e., no interaction) associations. In the non-stratified model, we observed antagonistic interactions, between the sociodemographic/air domains [net effect (i.e., the association, including main effects and interaction effects) PD: -0.004 (95% CI: -0.007, 0.000), interaction contrast: -0.013 (95% CI: -0.020, -0.007)] and built/air domains [net effect PD: 0.008 (95% CI 0.004, 0.011), interaction contrast: -0.008 (95% CI: -0.015, -0.002)]. Most interactions were between the air domain and other respective domains. Interactions differed by urbanicity, with more interactions observed in non-metropolitan regions. 
The observed antagonistic associations may indicate that those living in areas with multiple detrimental domains have other interfering factors reducing the burden of environmental exposure. This study is the first to explore interactions across different environmental domains and demonstrates the utility of the EQI to examine the relationship between environmental domain interactions and human health. While we did observe some departures from additivity, many observed effects were additive. This study demonstrated that interactions between environmental domains should be considered in future analyses.
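    The additive-scale quantities in this record (main-effect prevalence differences, the "net effect", and the interaction contrast) can be illustrated on hypothetical data. Everything below, including the group labels and effect sizes, is invented for the sketch; a negative interaction contrast reproduces the antagonism the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical county-level preterm-birth rates for a 2x2 of binary
# environmental-quality indicators (e.g. air, sociodemographic):
# 0 = better quality, 1 = worse quality. Values are illustrative only.
n = 500
air = rng.integers(0, 2, n)
socio = rng.integers(0, 2, n)
# Antagonistic data-generating process: joint effect < sum of main effects
p = 0.10 + 0.02 * air + 0.02 * socio - 0.013 * air * socio
rate = p + rng.normal(0, 0.005, n)

def mean_rate(a, s):
    return rate[(air == a) & (socio == s)].mean()

# Prevalence differences relative to the doubly "better" reference cell
pd_air = mean_rate(1, 0) - mean_rate(0, 0)
pd_socio = mean_rate(0, 1) - mean_rate(0, 0)
net = mean_rate(1, 1) - mean_rate(0, 0)    # "net effect"
ic = net - (pd_air + pd_socio)             # interaction contrast

print(f"IC = {ic:.4f}  (negative => antagonism on the additive scale)")
```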

  18. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
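    The pitfall this record describes, fitting repeated measures as if observations were independent, can be shown numerically. The simulation below is a hedged stand-in: host-level random intercepts induce within-host correlation, and simple within-host centering (a fixed-effects proxy for the mixed models the authors advocate) yields a different slope standard error than the naive pooled fit, i.e. ignoring the grouping misstates uncertainty. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated repeated-measures data: a response tracked over days within
# hosts, with host-level random intercepts creating autocorrelation.
hosts, days = 20, 10
day = np.tile(np.arange(days), hosts)
host = np.repeat(np.arange(hosts), days)
host_effect = rng.normal(0, 1.0, hosts)
y = 5 + 0.3 * day + host_effect[host] + rng.normal(0, 0.5, hosts * days)

def ols_slope_se(x, t):
    """Slope and naive standard error from simple least squares."""
    tc = t - t.mean()
    b = np.dot(tc, x - x.mean()) / np.dot(tc, tc)
    resid = x - x.mean() - b * tc
    se = np.sqrt(np.dot(resid, resid) / (len(t) - 2) / np.dot(tc, tc))
    return b, se

# Naive analysis: pool all observations, ignoring the host grouping
b_naive, se_naive = ols_slope_se(y, day)

# Within-host analysis: remove each host's mean first, so host-level
# variance no longer contaminates the residual
y_c = y - np.array([y[host == h].mean() for h in range(hosts)])[host]
b_within, se_within = ols_slope_se(y_c, day)

print(f"naive SE = {se_naive:.4f}, within-host SE = {se_within:.4f}")
```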

  19. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment

    PubMed Central

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.

    2014-01-01

    Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. 
We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability and implementation: Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:24990607
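    The conditional-Gaussian idea behind imputing summary statistics can be sketched in a few lines. This is not the authors' software: the LD matrix is a toy AR(1) structure rather than a real reference panel, and the ridge term `lam` is a simple stand-in for the paper's adjustment for the panel's finite sample size.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy LD (correlation) matrix over 6 SNPs with AR(1) decay
m = 6
ld = 0.8 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

typed = [0, 2, 3, 5]    # SNPs with observed summary z-scores
untyped = [1, 4]        # SNPs to impute
z_typed = rng.multivariate_normal(np.zeros(len(typed)),
                                  ld[np.ix_(typed, typed)])

# Conditional mean of a multivariate normal: the imputed z-scores are a
# linear combination of the observed ones, weighted through the LD matrix.
lam = 0.1  # illustrative regularization for finite reference-panel size
sigma_tt = ld[np.ix_(typed, typed)] + lam * np.eye(len(typed))
sigma_ut = ld[np.ix_(untyped, typed)]
weights = sigma_ut @ np.linalg.inv(sigma_tt)
z_imputed = weights @ z_typed

print("imputed z-scores:", z_imputed)
```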

  20. Functional genomics annotation of a statistical epistasis network associated with bladder cancer susceptibility.

    PubMed

    Hu, Ting; Pan, Qinxin; Andrew, Angeline S; Langer, Jillian M; Cole, Michael D; Tomlinson, Craig R; Karagas, Margaret R; Moore, Jason H

    2014-04-11

    Several different genetic and environmental factors have been identified as independent risk factors for bladder cancer in population-based studies. Recent studies have turned to understanding the role of gene-gene and gene-environment interactions in determining risk. We previously developed the bioinformatics framework of statistical epistasis networks (SEN) to characterize the global structure of interacting genetic factors associated with a particular disease or clinical outcome. By applying SEN to a population-based study of bladder cancer among Caucasians in New Hampshire, we were able to identify a set of connected genetic factors with strong and significant interaction effects on bladder cancer susceptibility. To support our statistical findings using networks, in the present study, we performed pathway enrichment analyses on the set of genes identified using SEN, and found that they are associated with the carcinogen benzo[a]pyrene, a component of tobacco smoke. We further carried out an mRNA expression microarray experiment to validate statistical genetic interactions, and to determine if the set of genes identified in the SEN were differentially expressed in a normal bladder cell line and a bladder cancer cell line in the presence or absence of benzo[a]pyrene. Significant nonrandom sets of genes from the SEN were found to be differentially expressed in response to benzo[a]pyrene in both the normal bladder cells and the bladder cancer cells. In addition, the patterns of gene expression were significantly different between these two cell types. The enrichment analyses and the gene expression microarray results support the idea that SEN analysis of bladder in population-based studies is able to identify biologically meaningful statistical patterns. These results bring us a step closer to a systems genetic approach to understanding cancer susceptibility that integrates population and laboratory-based studies.

  1. Formative assessment in mathematics for engineering students

    NASA Astrophysics Data System (ADS)

    Ní Fhloinn, Eabhnat; Carr, Michael

    2017-07-01

    In this paper, we present a range of formative assessment types for engineering mathematics, including in-class exercises, homework, mock examination questions, table quizzes, presentations, critical analyses of statistical papers, peer-to-peer teaching, online assessments and electronic voting systems. We provide practical tips for the implementation of such assessments, with a particular focus on time or resource constraints and large class sizes, as well as effective methods of feedback. In addition, we consider the benefits of such formative assessments for students and staff.

  2. Assessing potential effects of highway runoff on receiving-water quality at selected sites in Oregon with the Stochastic Empirical Loading and Dilution Model (SELDM)

    USGS Publications Warehouse

    Risley, John C.; Granato, Gregory E.

    2014-01-01

    6. An analysis of the use of grab sampling and nonstochastic upstream modeling methods was done to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the concentration of the criteria of the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used, so careful selection of representative values is important.

  3. The response of numerical weather prediction analysis systems to FGGE 2b data

    NASA Technical Reports Server (NTRS)

    Hollingsworth, A.; Lorenc, A.; Tracton, S.; Arpe, K.; Cats, G.; Uppala, S.; Kallberg, P.

    1985-01-01

    An intercomparison of analyses of the main FGGE Level IIb data set is presented for three advanced analysis systems. The aims of the work are to estimate the extent and magnitude of the differences between the analyses, to identify the reasons for the differences, and finally to estimate the significance of the differences. Extratropical analyses only are considered. Objective evaluations of analysis quality, such as fit to observations, statistics of analysis differences, and mean fields are discussed. In addition, substantial emphasis is placed on subjective evaluation of a series of case studies that were selected to illustrate the importance of different aspects of the analysis procedures, such as quality control, data selection, resolution, dynamical balance, and the role of the assimilating forecast model. In some cases, the forecast models are used as selective amplifiers of analysis differences to assist in deciding which analysis was more nearly correct in the treatment of particular data.

  4. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  5. An application of Social Values for Ecosystem Services (SolVES) to three national forests in Colorado and Wyoming

    USGS Publications Warehouse

    Sherrouse, Benson C.; Semmens, Darius J.; Clement, Jessica M.

    2014-01-01

    Despite widespread recognition that social-value information is needed to inform stakeholders and decision makers regarding trade-offs in environmental management, it too often remains absent from ecosystem service assessments. Although quantitative indicators of social values need to be explicitly accounted for in the decision-making process, they need not be monetary. Ongoing efforts to map such values demonstrate how they can also be made spatially explicit and relatable to underlying ecological information. We originally developed Social Values for Ecosystem Services (SolVES) as a tool to assess, map, and quantify nonmarket values perceived by various groups of ecosystem stakeholders. With SolVES 2.0 we have extended the functionality by integrating SolVES with Maxent maximum entropy modeling software to generate more complete social-value maps from available value and preference survey data and to produce more robust models describing the relationship between social values and ecosystems. The current study has two objectives: (1) evaluate how effectively the value index, a quantitative, nonmonetary social-value indicator calculated by SolVES, reproduces results from more common statistical methods of social-survey data analysis and (2) examine how the spatial results produced by SolVES provide additional information that could be used by managers and stakeholders to better understand more complex relationships among stakeholder values, attitudes, and preferences. To achieve these objectives, we applied SolVES to value and preference survey data collected for three national forests, the Pike and San Isabel in Colorado and the Bridger–Teton and the Shoshone in Wyoming. Value index results were generally consistent with results found through more common statistical analyses of the survey data such as frequency, discriminant function, and correlation analyses. 
In addition, spatial analysis of the social-value maps produced by SolVES provided information that was useful for explaining relationships between stakeholder values and forest uses. Our results suggest that SolVES can effectively reproduce information derived from traditional statistical analyses while adding spatially explicit, social-value information that can contribute to integrated resource assessment, planning, and management of forests and other ecosystems.

  6. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    PubMed

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I2GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I2GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. 
We demonstrate our proposed approach for a two-sample summary data MR analysis to estimate the causal effect of low-density lipoprotein on heart disease risk. A high value of I2GX close to 1 indicates that dilution does not materially affect the standard MR-Egger analyses for these data. Care must be taken to assess the NOME assumption via the I2GX statistic before implementing standard MR-Egger regression in the two-sample summary data context. If I2GX is sufficiently low (less than 90%), inferences from the method should be interpreted with caution and adjustment methods considered. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
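    The quantities in this record can be sketched on toy two-sample MR summary data: an MR-Egger weighted regression (intercept testing pleiotropy, slope estimating the causal effect) and an I2-type statistic computed from the heterogeneity of the SNP-exposure estimates. All numbers below are simulated for illustration and the formulas are simplified relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy per-SNP summary data: SNP-exposure (bx) and SNP-outcome (by)
# association estimates with standard errors. Illustrative only.
k = 30
bx_true = rng.uniform(0.05, 0.2, k)
beta = 0.5                              # true causal effect
se_bx = np.full(k, 0.01)
se_by = np.full(k, 0.02)
bx = bx_true + rng.normal(0, se_bx)
by = beta * bx_true + rng.normal(0, se_by)

# MR-Egger: inverse-variance weighted regression of by on bx with an
# intercept; the intercept tests directional pleiotropy.
w = 1 / se_by**2
X = np.column_stack([np.ones(k), bx])
WX = X * w[:, None]
intercept, slope = np.linalg.solve(X.T @ WX, WX.T @ by)

# I2-type statistic on the bx estimates (NOME-violation strength):
# Cochran's Q of bx relative to its precision-weighted mean.
wmean = np.average(bx, weights=1 / se_bx**2)
q = np.sum((bx - wmean) ** 2 / se_bx**2)
i2_gx = max(0.0, (q - (k - 1)) / q)

print(f"Egger slope={slope:.2f}, intercept={intercept:.3f}, I2GX={i2_gx:.2f}")
```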

  7. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
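    A classical confidence interval on a Monte Carlo failure probability, of the frequentist kind this record describes, can be illustrated with the Wilson score interval for a binomial proportion. This is a generic textbook interval, not necessarily one of the two methods the report proposes; the failure counts below are invented.

```python
import math

def wilson_interval(failures: int, trials: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion,
    here read as bounds on a simulated failure probability."""
    p = failures / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials
                         + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# More simulations shrink the interval: the gain in statistical accuracy
# from additional runs is directly visible in the interval width.
for n in (100, 1000, 10000):
    lo, hi = wilson_interval(int(0.05 * n), n)
    print(f"n={n:>5}: ({lo:.4f}, {hi:.4f}) width={hi - lo:.4f}")
```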

  8. Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies

    PubMed Central

    Liu, Zhonghua; Lin, Xihong

    2017-01-01

    Summary We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
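The paper's mixed-model variance-component tests are not reproduced here, but the basic ingredient, combining per-phenotype z-scores while accounting for their correlation, can be illustrated for two phenotypes, where the quadratic form z' R^-1 z has a closed form (all numbers hypothetical):

```python
# Two phenotype z-scores for one variant, with between-phenotype correlation r
z1, z2, r = 2.1, 2.4, 0.6

# Omnibus statistic T = z' R^{-1} z; for k = 2 the matrix inverse is closed-form
t = (z1**2 + z2**2 - 2 * r * z1 * z2) / (1 - r**2)

# Under H0, T ~ chi-square with 2 df; the 5% critical value is 5.991
print(f"T = {t:.3f}, reject at 5% level: {t > 5.991}")
```

Note that neither single-trait z-score reaches the usual genome-wide thresholds here, yet the joint statistic is nominally significant, which mirrors the abstract's point about recovering variants missed by single-trait analysis.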

  9. Multiple phenotype association tests using summary statistics in genome-wide association studies.

    PubMed

    Liu, Zhonghua; Lin, Xihong

    2018-03-01

    We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.

  10. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

N=8) and Results of 44 Statistical Analyses for Impact Test Performed on Forefoot of Unworn Footwear A-2. Summary Statistics (N=8) and Results of...on Forefoot of Worn Footwear... B-2. Summary Statistics (N=4) and Results of 76 Statistical Analyses for Impact...used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was

  11. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
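The standard Q and I² statistics discussed above are straightforward to compute from per-trial effect estimates and standard errors. A sketch with hypothetical log hazard ratios (the generalised Q statistic the paper advocates is not implemented here):

```python
# Hypothetical per-trial log hazard ratios and their standard errors
effects = [-0.25, -0.10, -0.30, 0.05, -0.18]
ses     = [0.10, 0.12, 0.15, 0.11, 0.09]

w = [1 / s**2 for s in ses]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)   # fixed-effect estimate
q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))   # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q)                                 # Higgins' I^2

print(f"Q = {q:.2f} on {df} df, I^2 = {100 * i2:.1f}%")
```

Q is compared against a chi-square distribution with df degrees of freedom; I² expresses the excess of Q over its null expectation as a proportion of Q.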

  12. Distinguishing synchronous and time-varying synergies using point process interval statistics: motor primitives in frog and rat

    PubMed Central

    Hart, Corey B.; Giszter, Simon F.

    2013-01-01

We present and apply a method that uses point process statistics to discriminate the forms of synergies in motor pattern data, prior to explicit synergy extraction. The method uses electromyogram (EMG) pulse peak timing or onset timing. Peak timing is preferable in complex patterns where pulse onsets may be overlapping. An interval statistic derived from the point processes of EMG peak timings distinguishes time-varying synergies from synchronous synergies (SS). Model data show that the statistic is robust for most conditions. Its application to both frog hindlimb EMG and rat locomotion hindlimb EMG shows that data from these preparations are clearly most consistent with synchronous synergy models (p < 0.001). Additional direct tests of pulse and interval relations in frog data further bolster the support for synchronous synergy mechanisms in these data. Our method and analyses support separated control of rhythm and pattern of motor primitives, with the low level execution primitives comprising pulsed SS in both frog and rat, and both episodic and rhythmic behaviors. PMID:23675341

  13. Love Canal Emergency Declaration Area habitability study. Volume 5. Peer review summary: TRC (Technical Review Committee) responses. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Environmental studies were conducted to provide data that could be used by the Commissioner of Health for the State of New York in determining whether the Emergency Declaration Area surrounding the Love Canal hazardous-waste site is habitable. These volumes (II through IV) were reviewed by a peer-review panel of expert scientists. The scientists concluded that each of the three environmental studies was well planned and well executed. Volume V summarizes the peer review and gives additional information or clarifications as requested during the peer review. Volume V also provides additional supplemental statistical analyses requested by the peer reviewer panel.

  14. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
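As an illustration of the reporting practices the review recommends, a standardized effect size with a confidence interval can be computed directly from group summaries. The large-sample variance formula below is a common approximation, and the group summaries are hypothetical:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with pooled SD, plus an approximate 95% CI."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Large-sample (normal approximation) variance of d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    half = 1.96 * math.sqrt(var_d)
    return d, (d - half, d + half)

d, ci = cohens_d(24.5, 6.0, 40, 21.0, 5.5, 40)
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Reporting the interval alongside d conveys both the magnitude and the precision of the effect, which a bare p-value cannot.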

  15. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
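Of the three methods compared, Kulldorff's spatial scan statistic evaluates, for each candidate window, a log-likelihood ratio comparing risk inside versus outside the window; under the Poisson model this has a simple closed form. A sketch for a single hypothetical window (in practice the statistic is maximised over many windows and its significance assessed by Monte Carlo replication):

```python
import math

def scan_llr(cases_in, expected_in, total_cases):
    """Log-likelihood ratio of the Poisson scan statistic for one window."""
    n, e, N = cases_in, expected_in, total_cases
    if n <= e:
        return 0.0  # only excess-risk windows count as candidate clusters
    return n * math.log(n / e) + (N - n) * math.log((N - n) / (N - e))

# Hypothetical window: 30 observed vs 18 expected cases, 200 cases overall
llr = scan_llr(30, 18, 200)
print(round(llr, 3))
```

The maximum LLR over all windows has no convenient null distribution, which is why the method relies on simulation for p-values.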

  16. The genetic basis of female multiple mating in a polyandrous livebearing fish

    PubMed Central

    Evans, Jonathan P; Gasparini, Clelia

    2013-01-01

    The widespread occurrence of female multiple mating (FMM) demands evolutionary explanation, particularly in the light of the costs of mating. One explanation encapsulated by “good sperm” and “sexy-sperm” (GS-SS) theoretical models is that FMM facilitates sperm competition, thus ensuring paternity by males that pass on genes for elevated sperm competitiveness to their male offspring. While support for this component of GS-SS theory is accumulating, a second but poorly tested assumption of these models is that there should be corresponding heritable genetic variation in FMM – the proposed mechanism of postcopulatory preferences underlying GS-SS models. Here, we conduct quantitative genetic analyses on paternal half-siblings to test this component of GS-SS theory in the guppy (Poecilia reticulata), a freshwater fish with some of the highest known rates of FMM in vertebrates. As with most previous quantitative genetic analyses of FMM in other species, our results reveal high levels of phenotypic variation in this trait and a correspondingly low narrow-sense heritability (h2 = 0.11). Furthermore, although our analysis of additive genetic variance in FMM was not statistically significant (probably owing to limited statistical power), the ensuing estimate of mean-standardized additive genetic variance (IA = 0.7) was nevertheless relatively low compared with estimates published for life-history traits across a broad range of taxa. Our results therefore add to a growing body of evidence that FMM is characterized by relatively low additive genetic variation, thus apparently contradicting GS-SS theory. However, we qualify this conclusion by drawing attention to potential deficiencies in most designs (including ours) that have tested for genetic variation in FMM, particularly those that fail to account for intersexual interactions that underlie FMM in many systems. PMID:23403856

  17. Effects of cigarette tax on cigarette consumption and the Chinese economy

    PubMed Central

    Hu, T; Mao, Z

    2002-01-01

Objectives: To analyse a policy dilemma in China on public health versus the tobacco economy through additional cigarette tax. Methods: Using published statistics from 1980 through 1997 to estimate the impact of tobacco production and consumption on government revenue and the entire economy. These estimates relied on the results of estimated price elasticities of the demand for cigarettes in China. Results: Given the estimated price elasticity (-0.54), introducing an additional 10% increase in cigarette tax per pack (from the current 40% to a 50% tax rate) would yield a gain in central government tax revenue twice the total losses in industry revenue, tobacco farmers' income, and local tax revenue. In addition, between 1.44 and 2.16 million lives would be saved by this tax increase. Conclusions: Additional taxation on cigarettes in China would be a desirable public policy for the Chinese government to consider. PMID:12035000
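The role the estimated price elasticity (-0.54) plays in such projections is simple arithmetic: a 10% price increase is predicted to cut consumption by about 5.4%. A sketch with a hypothetical baseline consumption level:

```python
# Price elasticity of demand for cigarettes, as estimated in the study
elasticity = -0.54

# Hypothetical baseline: 100 billion packs, 10% price increase from added tax
baseline_packs = 100.0   # billions of packs
price_change = 0.10      # +10% price

new_packs = baseline_packs * (1 + elasticity * price_change)
drop = baseline_packs - new_packs
print(f"Consumption falls by {drop:.1f} billion packs "
      f"({abs(100 * elasticity * price_change):.1f}%)")
```

Because demand is inelastic (|elasticity| < 1), revenue per-pack gains outweigh the volume lost, which is the mechanism behind the paper's revenue result.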

  18. PSSMSearch: a server for modeling, visualization, proteome-wide discovery and annotation of protein motif specificity determinants.

    PubMed

    Krystkowiak, Izabella; Manguy, Jean; Davey, Norman E

    2018-06-05

    There is a pressing need for in silico tools that can aid in the identification of the complete repertoire of protein binding (SLiMs, MoRFs, miniMotifs) and modification (moiety attachment/removal, isomerization, cleavage) motifs. We have created PSSMSearch, an interactive web-based tool for rapid statistical modeling, visualization, discovery and annotation of protein motif specificity determinants to discover novel motifs in a proteome-wide manner. PSSMSearch analyses proteomes for regions with significant similarity to a motif specificity determinant model built from a set of aligned motif-containing peptides. Multiple scoring methods are available to build a position-specific scoring matrix (PSSM) describing the motif specificity determinant model. This model can then be modified by a user to add prior knowledge of specificity determinants through an interactive PSSM heatmap. PSSMSearch includes a statistical framework to calculate the significance of specificity determinant model matches against a proteome of interest. PSSMSearch also includes the SLiMSearch framework's annotation, motif functional analysis and filtering tools to highlight relevant discriminatory information. Additional tools to annotate statistically significant shared keywords and GO terms, or experimental evidence of interaction with a motif-recognizing protein have been added. Finally, PSSM-based conservation metrics have been created for taxonomic range analyses. The PSSMSearch web server is available at http://slim.ucd.ie/pssmsearch/.
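PSSMSearch supports several PSSM scoring methods; the sketch below is not its implementation, only a generic illustration of building a log-odds PSSM from aligned peptides with pseudocounts and scoring candidate sequences (the peptides, motif, and uniform background are all invented for illustration):

```python
import math

# Hypothetical aligned motif-containing peptides (equal length)
peptides = ["RRXSP", "RKLSP", "RRFSP", "KRMSP"]
alphabet = "ACDEFGHIKLMNPQRSTVWXY"  # 20 amino acids plus X as wildcard
background = 1 / len(alphabet)      # uniform background for simplicity
pseudocount = 0.5

length = len(peptides[0])
pssm = []
for pos in range(length):
    column = [p[pos] for p in peptides]
    scores = {}
    for aa in alphabet:
        freq = (column.count(aa) + pseudocount) / \
               (len(peptides) + pseudocount * len(alphabet))
        scores[aa] = math.log2(freq / background)  # log-odds vs background
    pssm.append(scores)

def score(seq):
    """Sum of per-position log-odds scores for a candidate sequence."""
    return sum(pssm[i][aa] for i, aa in enumerate(seq))

print(score("RRLSP"), score("GGGGG"))
```

A motif-like sequence scores well above background while an unrelated one scores below it; proteome-wide discovery then reduces to sliding this scorer over every protein and assessing match significance, as the server does with its own statistical framework.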

  19. Toxic essential oils. Part V: Behaviour modulating and toxic properties of thujones and thujone-containing essential oils of Salvia officinalis L., Artemisia absinthium L., Thuja occidentalis L. and Tanacetum vulgare L.

    PubMed

    Radulović, Niko S; Genčić, Marija S; Stojanović, Nikola M; Randjelović, Pavle J; Stojanović-Radić, Zorica Z; Stojiljković, Nenad I

    2017-07-01

    Neurotoxic thujones (α- and β-diastereoisomers) are common constituents of plant essential oils. In this study, we employed a statistical approach to determine the contribution of thujones to the overall observed behaviour-modulating and toxic effects of essential oils (Salvia officinalis L., Artemisia absinthium L., Thuja occidentalis L. and Tanacetum vulgare L.) containing these monoterpene ketones. The data from three in vivo neuropharmacological tests on rats (open field, light-dark, and diazepam-induced sleep), and toxicity assays (brine shrimp, and antimicrobial activity against a panel of microorganisms), together with the data from detailed chemical analyses, were subjected to a multivariate statistical treatment to reveal the possible correlation(s) between the content of essential-oil constituents and the observed effects. The results strongly imply that the toxic and behaviour-modulating activity of the oils (hundreds of constituents) should not be associated exclusively with thujones. The statistical analyses pinpointed to a number of essential-oil constituents other than thujones that demonstrated a clear correlation with either the toxicity, antimicrobial effect or the activity on CNS. Thus, in addition to the thujone content, the amount and toxicity of other constituents should be taken into consideration when making risk assessment and determining the regulatory status of plants in food and medicines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. 
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
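The prospective power analysis recommended in point (1) often reduces to a sample-size formula. For a two-group comparison, a common normal-approximation sketch (z-values fixed for two-sided α = 0.05 and power 0.80; the exact t-based calculation gives slightly larger n):

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-group n for a two-sample comparison (normal
    approximation), two-sided alpha = 0.05, power = 0.80,
    standardized effect size d."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):  # conventional small / medium / large effects
    print(d, n_per_group(d))
```

The steep dependence on d (393 animals per group for a small effect versus 25 for a large one) is why the minimum biologically significant effect size must be fixed before planning fieldwork.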

  1. Mobile phones and head tumours. The discrepancies in cause-effect relationships in the epidemiological studies - how do they arise?

    PubMed

    Levis, Angelo G; Minicuci, Nadia; Ricci, Paolo; Gennaro, Valerio; Garbisa, Spiridione

    2011-06-17

Whether or not there is a relationship between use of mobile phones (analogue and digital cellulars, and cordless) and head tumour risk (brain tumours, acoustic neuromas, and salivary gland tumours) is still a matter of debate; progress requires a critical analysis of the methodological elements necessary for an impartial evaluation of contradictory studies. A close examination of the protocols and results from all case-control and cohort studies, pooled- and meta-analyses on head tumour risk for mobile phone users was carried out, and for each study the elements necessary for evaluating its reliability were identified. In addition, new meta-analyses of the literature data were undertaken. These were limited to subjects with mobile phone latency time compatible with the progression of the examined tumours, and with analysis of the laterality of head tumour localisation corresponding to the habitual laterality of mobile phone use. Blind protocols, free from errors, bias, and financial conditioning factors, give positive results that reveal a cause-effect relationship between long-term mobile phone use or latency and statistically significant increase of ipsilateral head tumour risk, with biological plausibility. Non-blind protocols, which instead are affected by errors, bias, and financial conditioning factors, give negative results with a systematic underestimate of such risk. However, also in these studies a statistically significant increase in risk of ipsilateral head tumours is quite common after more than 10 years of mobile phone use or latency. The meta-analyses, ours included, examining only data on ipsilateral tumours in subjects using mobile phones since or for at least 10 years, show large and statistically significant increases in risk of ipsilateral brain gliomas and acoustic neuromas. 
Our analysis of the literature studies and of the results from meta-analyses of the significant data alone shows an almost doubling of the risk of head tumours induced by long-term mobile phone use or latency.

  2. Mobile phones and head tumours. The discrepancies in cause-effect relationships in the epidemiological studies - how do they arise?

    PubMed Central

    2011-01-01

Background Whether or not there is a relationship between use of mobile phones (analogue and digital cellulars, and cordless) and head tumour risk (brain tumours, acoustic neuromas, and salivary gland tumours) is still a matter of debate; progress requires a critical analysis of the methodological elements necessary for an impartial evaluation of contradictory studies. Methods A close examination of the protocols and results from all case-control and cohort studies, pooled- and meta-analyses on head tumour risk for mobile phone users was carried out, and for each study the elements necessary for evaluating its reliability were identified. In addition, new meta-analyses of the literature data were undertaken. These were limited to subjects with mobile phone latency time compatible with the progression of the examined tumours, and with analysis of the laterality of head tumour localisation corresponding to the habitual laterality of mobile phone use. Results Blind protocols, free from errors, bias, and financial conditioning factors, give positive results that reveal a cause-effect relationship between long-term mobile phone use or latency and statistically significant increase of ipsilateral head tumour risk, with biological plausibility. Non-blind protocols, which instead are affected by errors, bias, and financial conditioning factors, give negative results with a systematic underestimate of such risk. However, also in these studies a statistically significant increase in risk of ipsilateral head tumours is quite common after more than 10 years of mobile phone use or latency. The meta-analyses, ours included, examining only data on ipsilateral tumours in subjects using mobile phones since or for at least 10 years, show large and statistically significant increases in risk of ipsilateral brain gliomas and acoustic neuromas. 
Conclusions Our analysis of the literature studies and of the results from meta-analyses of the significant data alone shows an almost doubling of the risk of head tumours induced by long-term mobile phone use or latency. PMID:21679472

  3. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352
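The central role of the additive genetic covariance matrix G described above can be illustrated with the multivariate breeder's equation, Δz̄ = Gβ. This generic two-trait example is not part of EvolQG's API, and the matrix entries are hypothetical:

```python
# Hypothetical additive genetic covariance matrix G for two traits
G = [[0.40, 0.15],
     [0.15, 0.25]]
beta = [0.5, -0.2]  # directional selection gradient

# Multivariate breeder's equation: delta_z = G @ beta
delta_z = [sum(G[i][j] * beta[j] for j in range(2)) for i in range(2)]
print(delta_z)
```

Here trait 2 responds positively (+0.025) even though direct selection on it is negative, because its genetic covariance with trait 1 dominates: the off-diagonal structure of G is exactly what the package's matrix-comparison tools are built to analyse.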

  4. Editorial: Bayesian benefits for child psychology and psychiatry researchers.

    PubMed

    Oldehinkel, Albertine J

    2016-09-01

For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field. © 2016 Association for Child and Adolescent Mental Health.

  5. Urine metabolic fingerprinting using LC-MS and GC-MS reveals metabolite changes in prostate cancer: A pilot study.

    PubMed

    Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bujak, Renata; Yumba Mpanga, Arlette; Markuszewski, Marcin; Jacyna, Julia; Matuszewski, Marcin; Kaliszan, Roman; Markuszewski, Michał J

    2015-01-01

Prostate cancer (CaP) is a leading cause of cancer deaths in men worldwide. Despite the alarming statistics, the currently applied biomarkers are still not sufficiently specific and selective. In addition, pathogenesis of CaP development is not totally understood. Therefore, in the present work, a metabolomics study related to urinary metabolic fingerprinting analyses has been performed in order to scrutinize potential biomarkers that could help in explaining the pathomechanism of the disease and be potentially useful in its diagnosis and prognosis. Urine samples from CaP patients and healthy volunteers were analyzed with the use of high performance liquid chromatography coupled with time of flight mass spectrometry detection (HPLC-TOF/MS) in positive and negative polarity as well as gas chromatography hyphenated with triple quadrupole mass spectrometry detection (GC-QqQ/MS) in a scan mode. The obtained data sets were statistically analyzed using univariate and multivariate statistical analyses. The Principal Component Analysis (PCA) was used to check systems' stability and possible outliers, whereas Partial Least Squares Discriminant Analysis (PLS-DA) was performed for evaluation of quality of the model as well as its predictive ability using statistically significant metabolites. The subsequent identification of selected metabolites using the NIST library and commonly available databases allowed for creation of a list of putative biomarkers and related biochemical pathways they are involved in. The selected pathways, such as the urea and tricarboxylic acid cycles and amino acid and purine metabolism, can play a crucial role in pathogenesis of prostate cancer disease. Copyright © 2014 Elsevier B.V. All rights reserved.
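The PCA step used above to check system stability and screen for outliers can be sketched as an SVD of the mean-centred data matrix. The data below are a random stand-in, with no claim about the study's actual preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical metabolite intensity matrix: 10 samples x 4 metabolites
X = rng.normal(size=(10, 4))

# PCA via SVD of the mean-centred data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                    # sample coordinates on the principal components
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(explained)
```

Outliers show up as samples with extreme coordinates in the first few columns of `scores`; supervised separation of cases from controls is then left to PLS-DA on the retained samples.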

  6. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  7. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical researches conducted in China for the first decade of the new millennium. Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportion in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: The error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion of study design also decreased (χ² = 21.22, p<0.001), 50.9% (680/1,335) compared to 42.40% (669/1,578). In 2008, design with randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), 92.7% (945/1,019) compared to 78.2% (1023/1,309), and interpretation (χ² = 27.26, p<0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), although some serious ones persisted. 
Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
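The year-on-year comparisons in this abstract are Pearson chi-square tests on 2x2 contingency tables. A minimal pure-Python sketch, using counts from the abstract purely for illustration (the function name is ours, not the study's):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]].

    Uses the standard shortcut formula for 2x2 tables:
    N * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Example: defective vs. non-defective papers in two publication years
stat = chi_square_2x2(545, 1335 - 545, 664, 1578 - 664)
print(round(stat, 2))
```

For a p-value as well, `scipy.stats.chi2_contingency` on the same table is the usual tool; the closed-form expression above is the textbook shortcut for 2x2 tables.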

  8. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  9. Tightening force and torque of nonlocking screws in a reverse shoulder prosthesis.

    PubMed

    Terrier, A; Kochbeck, S H; Merlini, F; Gortchacow, M; Pioletti, D P; Farron, A

    2010-07-01

    Reversed shoulder arthroplasty is an accepted treatment for glenohumeral arthritis associated with rotator cuff deficiency. In most reversed shoulder prostheses, the baseplate of the glenoid component is uncemented and its primary stability is provided by a central peg and peripheral screws. Because primary stability is essential for good osseointegration of the baseplate, optimal fixation of the screws is crucial. In particular, the amplitude of the tightening force of the nonlocking screws is clearly associated with this stability. Since this force is unknown, it is currently not accounted for in experimental or numerical analyses. Thus, the primary goal of this work was to measure this tightening force experimentally. In addition, the tightening torque was measured, to estimate an optimal surgical value. An experimental setup with an instrumented baseplate was developed to measure simultaneously the tightening force, tightening torque, and screwing angle of the nonlocking screws of the Aequalis reversed prosthesis. In addition, the amount of bone volume around each screw was measured with a micro-CT. Measurements were performed on 6 human cadaveric scapulae. A statistically significant correlation (p<0.05, R=0.83) was obtained between the maximal tightening force and the bone volume. The relationship between the tightening torque and the bone volume was not statistically significant. The experimental relationship presented in this paper can be used in numerical analyses to improve the baseplate fixation in the glenoid bone. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  10. Statistics for X-chromosome associations.

    PubMed

    Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor

    2018-06-13

    In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.
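One concrete modeling choice behind these statistics is how male (hemizygous) X-chromosome genotypes are coded relative to female genotypes under different X-inactivation assumptions. A hypothetical sketch of the common coding conventions (the function, its name, and its defaults are ours, not from the paper):

```python
def code_x_genotype(n_alt_alleles, is_male, x_inactivation=True):
    """Numeric coding of an X-linked genotype for a regression model.

    Females carry two X copies and are coded 0/1/2 by alt-allele count.
    Males carry one copy: under full X inactivation their single allele
    is commonly treated as equivalent to a female homozygote (coded 0/2);
    without that assumption they are coded 0/1.
    """
    if not is_male:
        return n_alt_alleles          # 0, 1, or 2
    if x_inactivation:
        return 2 * n_alt_alleles      # 0 or 2
    return n_alt_alleles              # 0 or 1

# A male alt-allele carrier contributes like a female homozygote
# under the X-inactivation coding:
print(code_x_genotype(1, is_male=True))  # → 2
```

As the abstract notes, models that additionally omit a sex covariate can inflate type I error; the coding above only addresses the genotype term.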

  11. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
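The report does not spell out the algorithm, but an iterative, nonparametric outlier screen of the general kind described can be sketched with median/MAD statistics. Everything here, the threshold `k`, the convergence rule, and the MAD scale, is our assumption, not the NASA algorithm:

```python
def flag_outliers(readings, k=5.0, max_iter=10):
    """Iteratively flag readings far from the (upper) median of the
    remaining unflagged gauges, using the median absolute deviation
    (MAD) as a robust, nonparametric scale estimate."""
    flagged = set()
    for _ in range(max_iter):
        kept = [v for i, v in enumerate(readings) if i not in flagged]
        if len(kept) < 3:
            break  # too few gauges left for a meaningful comparison
        med = sorted(kept)[len(kept) // 2]
        mad = sorted(abs(v - med) for v in kept)[len(kept) // 2]
        scale = mad if mad > 0 else 1e-9
        new = {i for i, v in enumerate(readings)
               if i not in flagged and abs(v - med) / scale > k}
        if not new:
            break  # converged: no additional outliers found
        flagged |= new
    return flagged

print(sorted(flag_outliers([2.1, 2.3, 2.0, 2.2, 40.0])))  # → [4]
```

Re-estimating the median and MAD after each pass mirrors the iterative structure the abstract describes, so that a gross outlier does not mask smaller ones.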

  12. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  13. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be lower in surgical journals than in medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. 
A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  14. Impact of Exploratory Biomarkers on the Treatment Effect of Bevacizumab in Metastatic Breast Cancer

    PubMed Central

    Jubb, Adrian M.; Miller, Kathy D.; Rugo, Hope S.; Harris, Adrian L.; Chen, Dafeng; Reimann, James D.; Cobleigh, Melody A.; Schmidt, Maike; Langmuir, Virginia K.; Hillan, Kenneth J.; Chen, Daniel S.; Koeppen, Hartmut

    2010-01-01

    Purpose: The addition of bevacizumab to cytotoxic chemotherapy has demonstrated a progression-free survival (PFS) benefit in the first-line and second-line treatment of advanced or metastatic breast cancer (MBC). However, the addition of bevacizumab to capecitabine in heavily pretreated MBC patients did not show a PFS benefit (AVF2119g phase III trial). The aim of this study was to evaluate the expression of novel putative biomarkers as predictors of benefit from bevacizumab in retrospective subset analyses of the AVF2119g trial. Experimental Design: In the AVF2119g trial, 462 patients with MBC were randomly assigned to receive capecitabine or capecitabine plus bevacizumab. Primary tumor tissue and outcome data were available for 223 patients. Biomarker expression was assessed by in situ hybridization (VEGF-A, VEGF-B, thrombospondin-2 and Flt4) or immunohistochemistry (VEGF-C, PDGF-C, neuropilin-1, delta-like ligand 4 (Dll4), Bv8, p53 and thymidine phosphorylase) on formalin-fixed, paraffin-embedded tissue. PFS was associated with these variables in retrospective subset analyses. Results: Patients with low scores for Dll4, VEGF-C and neuropilin-1 showed trends toward improved PFS with the addition of bevacizumab to capecitabine (p values 0.01, 0.05 and 0.07, respectively). These observations were not statistically significant after correction for multiple hypothesis testing. Conclusion: These retrospective subset analyses suggest that expression of Dll4, VEGF-C and neuropilin-1 may predict benefit from bevacizumab. Such observations are not conclusive but warrant additional testing. PMID:21224365

  15. Relationship between sitting volleyball performance and field fitness of sitting volleyball players in Korea

    PubMed Central

    Jeoung, Bogja

    2017-01-01

    The purpose of this study was to evaluate the relationship between sitting volleyball performance and the field fitness of sitting volleyball players. Forty-five elite sitting volleyball players participated in 10 field fitness tests. Additionally, the players’ head coach and coach assessed their volleyball performance (receive and defense, block, attack, and serve). Data were analyzed with SPSS software version 21 using correlation and regression analyses, with the significance level set at P < 0.05. The results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, sprint, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on the players’ abilities to attack, serve, and block. Grip strength, t-test, speed, and agility showed a statistically significant relationship with the players’ skill at defense and receive. Our results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on volleyball performance. PMID:29326896

  16. GenomeGraphs: integrated genomic data visualization with R.

    PubMed

    Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine

    2009-01-06

    Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest, yet methods to visualize these different genomic datasets in a versatile manner are lacking. In addition, genomic data analysis requires integrated visualization of experimental data alongside constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package for visualizing a multitude of genomic datasets within the statistical programming environment R.

  17. A phylogenetic transform enhances analysis of compositional microbiota data

    PubMed Central

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-01-01

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
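The isometric log-ratio (ilr) transform underlying PhILR maps a D-part composition to D-1 unconstrained coordinates on which off-the-shelf Euclidean statistics are valid. A minimal pure-Python sketch of one standard ilr basis (pivot coordinates; this is generic ilr, not PhILR's phylogenetically informed basis):

```python
import math

def ilr(x):
    """Isometric log-ratio transform of a composition x (positive parts).

    Pivot coordinates: z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1}),
    where g() is the geometric mean. Returns len(x) - 1 coordinates
    living in ordinary Euclidean space.
    """
    z = []
    for i in range(1, len(x)):
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)  # geometric mean
        z.append(math.sqrt(i / (i + 1)) * math.log(gm / x[i]))
    return z

# A perfectly even composition maps to (numerically) the origin:
print(ilr([0.25, 0.25, 0.25, 0.25]))
```

Because ilr depends only on ratios of parts, the coordinates are invariant to total abundance, which is exactly what makes relative-abundance (compositional) data safe to analyse after the transform.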

  18. A methodology using in-chair movements as an objective measure of discomfort for the purpose of statistically distinguishing between similar seat surfaces.

    PubMed

    Cascioli, Vincenzo; Liu, Zhuofu; Heusch, Andrew; McCarthy, Peter W

    2016-05-01

    This study presents a method for objectively measuring in-chair movement (ICM) that correlates with subjective ratings of comfort and discomfort. Employing a controlled, single-blind cross-over design, healthy young subjects (n = 21) sat for 18 min on each of the following surfaces: contoured foam, straight foam, and wood. Force-sensitive resistors attached to the sitting interface measured the subjects' relative movements during sitting. The purpose of this study was to determine whether ICM could statistically distinguish between the seat materials, including two with subtle design differences. In addition, this study investigated methodological considerations, in particular appropriate threshold selection and sitting duration, when analysing objective movement data. ICM appears able to statistically distinguish between similar foam surfaces, provided appropriate ICM thresholds and sufficient sitting durations are used. A relationship between greater ICM and increased discomfort, and lesser ICM and increased comfort, was also found. Copyright © 2016. Published by Elsevier Ltd.

  19. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years, extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS Parallel Benchmarks, we also included the peak performance of each machine and the LINPACK n and n(sub 1/2) values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.

  20. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  1. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  2. Combined Analyses of Bacterial, Fungal and Nematode Communities in Andosolic Agricultural Soils in Japan

    PubMed Central

    Bao, Zhihua; Ikunaga, Yoko; Matsushita, Yuko; Morimoto, Sho; Takada-Hoshino, Yuko; Okada, Hiroaki; Oba, Hirosuke; Takemoto, Shuhei; Niwa, Shigeru; Ohigashi, Kentaro; Suzuki, Chika; Nagaoka, Kazunari; Takenaka, Makoto; Urashima, Yasufumi; Sekiguchi, Hiroyuki; Kushida, Atsuhiko; Toyota, Koki; Saito, Masanori; Tsushima, Seiya

    2012-01-01

    We simultaneously examined the bacterial, fungal and nematode communities in Andosols from four agro-geographical sites in Japan, using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and statistical analyses to test the effects of environmental factors, including soil properties, on these communities across geographical sites. Statistical analyses such as principal component analysis (PCA) and redundancy analysis (RDA) revealed that the compositions of the three soil biota communities were strongly affected by geographical site, which was in turn strongly associated with soil characteristics such as total C (TC), total N (TN), C/N ratio and annual mean soil temperature (ST). In particular, TC, TN and the C/N ratio had stronger effects on the bacterial and fungal communities than on the nematode community. Additionally, two-way cluster analysis using the combined DGGE profile indicated that all soil samples were classified into four clusters corresponding to the four sites, showing high site specificity of the soil samples, and that all DNA bands were classified into four clusters, showing the coexistence of specific DGGE bands of bacteria, fungi and nematodes in Andosol fields. These results suggest that geography, through its relation to soil properties, simultaneously shapes soil microbial and nematode community compositions. This is the first combined profile analysis of bacteria, fungi and nematodes at different sites with agricultural Andosols. PMID:22223474

  3. Combined analyses of bacterial, fungal and nematode communities in andosolic agricultural soils in Japan.

    PubMed

    Bao, Zhihua; Ikunaga, Yoko; Matsushita, Yuko; Morimoto, Sho; Takada-Hoshino, Yuko; Okada, Hiroaki; Oba, Hirosuke; Takemoto, Shuhei; Niwa, Shigeru; Ohigashi, Kentaro; Suzuki, Chika; Nagaoka, Kazunari; Takenaka, Makoto; Urashima, Yasufumi; Sekiguchi, Hiroyuki; Kushida, Atsuhiko; Toyota, Koki; Saito, Masanori; Tsushima, Seiya

    2012-01-01

    We simultaneously examined the bacterial, fungal and nematode communities in Andosols from four agro-geographical sites in Japan, using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and statistical analyses to test the effects of environmental factors, including soil properties, on these communities across geographical sites. Statistical analyses such as principal component analysis (PCA) and redundancy analysis (RDA) revealed that the compositions of the three soil biota communities were strongly affected by geographical site, which was in turn strongly associated with soil characteristics such as total C (TC), total N (TN), C/N ratio and annual mean soil temperature (ST). In particular, TC, TN and the C/N ratio had stronger effects on the bacterial and fungal communities than on the nematode community. Additionally, two-way cluster analysis using the combined DGGE profile indicated that all soil samples were classified into four clusters corresponding to the four sites, showing high site specificity of the soil samples, and that all DNA bands were classified into four clusters, showing the coexistence of specific DGGE bands of bacteria, fungi and nematodes in Andosol fields. These results suggest that geography, through its relation to soil properties, simultaneously shapes soil microbial and nematode community compositions. This is the first combined profile analysis of bacteria, fungi and nematodes at different sites with agricultural Andosols.

  4. Amino acid pair- and triplet-wise groupings in the interior of α-helical segments in proteins.

    PubMed

    de Sousa, Miguel M; Munteanu, Cristian R; Pazos, Alejandro; Fonseca, Nuno A; Camacho, Rui; Magalhães, A L

    2011-02-21

    A statistical approach has been applied to analyse primary structure patterns at inner positions of α-helices in proteins. A systematic survey was carried out in a recent sample of non-redundant proteins selected from the Protein Data Bank, which were used to analyse α-helix structures for amino acid pairing patterns. Only residues more than three positions away from both termini of the α-helix were considered inner. Amino acid pairings i, i+k (k=1, 2, 3, 4, 5) were analysed and the corresponding 20×20 matrices of relative global propensities were constructed. An analysis of (i, i+4, i+8) and (i, i+3, i+4) triplet patterns was also performed. These analyses yielded information on a series of amino acid patterns (pairings and triplets) showing either high or low preference for α-helical motifs and suggested a novel approach to protein alphabet reduction. In addition, it has been shown that individual amino acid propensities are not enough to define the statistical distribution of these patterns: global pair propensities also depend on the type of pattern, its composition, and its orientation in the protein sequence. The data presented should prove useful for deriving and refining predictive rules, which can further the development and fine-tuning of protein structure prediction algorithms and tools. Copyright © 2010 Elsevier Ltd. All rights reserved.
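A relative global propensity of the kind tabulated in the 20×20 matrices is, in essence, the observed frequency of an (i, i+k) pairing divided by the frequency expected from the individual residue frequencies. An illustrative sketch under that simplified definition (our own formulation; the paper's exact normalization may differ):

```python
from collections import Counter

def pair_propensities(sequences, k):
    """Relative propensity of residue pairs (i, i+k) within sequences:
    observed pair frequency / product of single-residue frequencies.
    Values > 1 indicate over-represented pairings."""
    singles, pairs = Counter(), Counter()
    n_single = n_pair = 0
    for seq in sequences:
        singles.update(seq)
        n_single += len(seq)
        for i in range(len(seq) - k):
            pairs[(seq[i], seq[i + k])] += 1
            n_pair += 1
    prop = {}
    for (a, b), count in pairs.items():
        expected = (singles[a] / n_single) * (singles[b] / n_single)
        prop[(a, b)] = (count / n_pair) / expected
    return prop

# In a strictly alternating toy sequence, (A, A) pairs at spacing k=2
# are over-represented relative to chance:
print(round(pair_propensities(["ABAB"], 2)[("A", "A")], 2))  # → 2.0
```

In practice the counts would come only from residues at inner α-helical positions, as defined in the abstract, rather than from whole sequences.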

  5. Intratumoral heterogeneity analysis reveals hidden associations between protein expression losses and patient survival in clear cell renal cell carcinoma

    PubMed Central

    Devarajan, Karthik; Parsons, Theodore; Wang, Qiong; O'Neill, Raymond; Solomides, Charalambos; Peiper, Stephen C.; Testa, Joseph R.; Uzzo, Robert; Yang, Haifeng

    2017-01-01

    Intratumoral heterogeneity (ITH) is a prominent feature of kidney cancer. It is not known whether it has utility in finding associations between protein expression and clinical parameters. We used ITH detected by immunohistochemistry (IHC) to aid the association analysis between the loss of SWI/SNF components and clinical parameters. 160 ccRCC tumors (40 per tumor stage) were used to generate a tissue microarray (TMA). Four foci from different regions of each tumor were selected. IHC was performed against PBRM1, ARID1A, SETD2, SMARCA4, and SMARCA2. Statistical analyses were performed to correlate biomarker losses with patho-clinical parameters. Categorical variables were compared between groups using Fisher's exact tests. Univariate and multivariable analyses were used to correlate biomarker changes with patient survival. Multivariable analyses were performed by constructing decision trees using the classification and regression trees (CART) methodology. IHC detected widespread ITH in ccRCC tumors. Statistical analysis of “truncal loss” (root loss) revealed additional correlations between biomarker losses and tumor stages beyond the traditional “loss in tumor (total)” measure. Losses of SMARCA4 or SMARCA2 were associated with significantly improved overall survival (OS); losses of PBRM1, ARID1A or SETD2 had the opposite effect. Thus “truncal loss” analysis revealed hidden links between protein losses and patient survival in ccRCC. PMID:28445125

  6. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients ), Presentation (Web-Server), Processing (Application-Server), Data (Database...organization in each period. This data will be collected to analyze. i) Analyses and Validation: We will do a statistics test in this data, Pareto ...notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will do a statistics test in this data, Pareto analyses and confirmation

  7. New instrument for measuring student beliefs about physics and learning physics: The Colorado Learning Attitudes about Science Survey

    NASA Astrophysics Data System (ADS)

    Adams, W. K.; Perkins, K. K.; Podolefsky, N. S.; Dubson, M.; Finkelstein, N. D.; Wieman, C. E.

    2006-06-01

    The Colorado Learning Attitudes about Science Survey (CLASS) is a new instrument designed to measure student beliefs about physics and about learning physics. This instrument extends previous work by probing additional aspects of student beliefs and by using wording suitable for students in a wide variety of physics courses. The CLASS has been validated using interviews, reliability studies, and extensive statistical analyses of responses from over 5000 students. In addition, a new methodology for determining useful and statistically robust categories of student beliefs has been developed. This paper serves as the foundation for an extensive study of how student beliefs impact and are impacted by their educational experiences. For example, this survey measures the following: that most teaching practices cause substantial drops in student scores; that a student’s likelihood of becoming a physics major correlates with their “Personal Interest” score; and that, for a majority of student populations, women’s scores in some categories, including “Personal Interest” and “Real World Connections,” are significantly different from men’s scores.

  8. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  9. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, health-related quality of life (HRQoL) end-points have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs, covering the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies assessed HRQoL repeatedly over time, for periods ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. Across the 33 studies, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in the statistical methods used for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false-positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
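The type I error adjustment the review finds missing is straightforward to apply. A minimal sketch of the Holm-Bonferroni step-down adjustment, one standard choice among several (the helper function is ours, not from the cited study):

```python
def holm_adjust(p_values):
    """Holm-Bonferroni step-down adjustment for multiple comparisons.

    Returns adjusted p-values in the original order, capped at 1 and
    forced to be monotone non-decreasing in the sorted order.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        val = min(1.0, (m - rank) * p_values[i])
        running_max = max(running_max, val)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

print([round(v, 3) for v in holm_adjust([0.01, 0.04, 0.03])])  # → [0.03, 0.06, 0.06]
```

Holm's procedure controls the family-wise error rate at the nominal level while being uniformly more powerful than plain Bonferroni, which is why it is a common default when sub-dimensions or repeated assessment times are tested jointly.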

  10. Report endorses data sharing

    NASA Astrophysics Data System (ADS)

    The potential benefits of sharing data so outweigh its costs that investigators should be required to include plans for sharing data as part of their grant proposals, according to recommendations issued recently by the Committee on National Statistics (CNSTAT) of the National Research Council (NRC).In their report Sharing Research Data, CNSTAT also recommended that “Journals should give more emphasis to reports of secondary analyses and to replications,” provided that the original collections of data receive full credit. In addition, “Journal editors should require authors to provide access to data during the peer review process.”

  11. Back to BaySICS: a user-friendly program for Bayesian Statistical Inference from Coalescent Simulations.

    PubMed

    Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love

    2014-01-01

    Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances, including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis makes it difficult to apply, so programming skills are frequently required. In addition, few programs are able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although it provides specific features that improve inference from heterochronous datasets, BaySICS also has several capabilities that make it a suitable tool for analysing contemporary genetic datasets. These include joint analysis of independent tables, a graphical interface and an implementation of Markov-chain Monte Carlo without likelihoods.
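
    BaySICS itself is a graphical program, but the ABC rejection principle at its core is compact. The toy sketch below replaces the coalescent simulator with an invented binomial stand-in; every number here is illustrative, not from the paper:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """Basic ABC rejection sampler: keep prior draws whose simulated
    summary statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

rng = random.Random(42)
obs = 12  # observed summary statistic, e.g. number of segregating sites

post = abc_rejection(
    observed=obs,
    # Invented stand-in for a coalescent simulator: Binomial(50, theta/50),
    # close to Poisson(theta) for theta well below 50.
    simulate=lambda theta: sum(rng.random() < theta / 50 for _ in range(50)),
    prior_sample=lambda: rng.uniform(0.0, 30.0),  # flat prior on theta
    distance=lambda sim, o: abs(sim - o),
    eps=1,
)
est = sum(post) / len(post)  # posterior-mean estimate, near obs
```

    A real analysis would simulate genealogies under the demographic model and compare multi-dimensional summary statistics, but the accept/reject logic is unchanged.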

  12. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures rather than single chemicals. Statistical designs such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single-chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. 
The predicted effective doses (ED20, ED50) were about half that predicted by additivity, and for brain ChE and motor activity, there was a threshold shift in the dose-response curves. For the brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.
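
    The additivity prediction underlying these comparisons follows the standard dose-addition (Loewe) formula for a fixed mixing ratio; the ED50 values below are hypothetical, not the study's fitted estimates:

```python
def additive_mixture_ed(proportions, component_eds):
    """Dose-addition (Loewe additivity) prediction of the mixture dose
    producing a given effect, for a fixed-ratio ray: each component
    contributes dose p_i * D toward its own effective dose."""
    assert abs(sum(proportions) - 1.0) < 1e-9
    return 1.0 / sum(p / ed for p, ed in zip(proportions, component_eds))

props = [0.30, 0.25, 0.20, 0.15, 0.10]       # fixed mixing ratio (hypothetical)
ed50s = [150.0, 200.0, 300.0, 400.0, 500.0]  # individual ED50s, mg/kg (hypothetical)
pred = additive_mixture_ed(props, ed50s)     # additivity-predicted mixture ED50
# An observed mixture ED50 well below `pred` indicates synergism, the
# greater-than-additive response reported at the low end of the ray.
```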

  13. Relationship Between Breastfeeding and Early Childhood Obesity: Results of a Prospective Longitudinal Study from Birth to 4 Years.

    PubMed

    Wallby, Thomas; Lagerberg, Dagmar; Magnusson, Margaretha

    To study a potential link between breastfeeding in infancy and obesity at age 4. A total of 30,508 infants born during 2002-2007 from the databases of the Preventive Child Health Services in two Swedish counties and from national registers were studied. The outcome variable was obesity at age 4. Analyses were conducted by logistic regression models using the methodology of generalized estimating equations. Analyses were adjusted for child sex and maternal anthropometric and sociodemographic variables. In unadjusted analyses, any breastfeeding up to 9 months was linked to successively decreasing odds ratios (ORs) for obesity at age 4 (ORs 0.78-0.33), however, not significantly for 1 week and 2 months of breastfeeding. In adjusted analyses, the same pattern remained statistically significant for breastfeeding for 4 (OR 0.51), 6 (OR 0.55), and 9 (OR 0.47) months. Child sex, maternal education, maternal body mass index, and maternal smoking additionally influenced child obesity. Breastfeeding duration for at least 4 months may contribute independently to a reduced risk for childhood obesity at 4 years.

  14. Statistical Analyses for Probabilistic Assessments of the Reactor Pressure Vessel Structural Integrity: Building a Master Curve on an Extract of the 'Euro' Fracture Toughness Dataset, Controlling Statistical Uncertainty for Both Mono-Temperature and Multi-Temperature Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick

    2006-07-01

    Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCCM and ASME Code lower-bound K{sub IC} based on the indexing parameter RT{sub NDT}. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K{sub JC} data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. 
    Therefore we focused exclusively on the French reactor vessel base metals of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T{sub 0}. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty, and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
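
    For mono-temperature datasets, the ASTM E1921 median toughness relation K_Jc(med) = 30 + 70 * exp(0.019 * (T - T0)) (in MPa*sqrt(m), T in deg C) can be inverted directly for T0. A minimal sketch with illustrative values, not EDF data:

```python
import math

def t0_from_median(test_temp_c, k_jc_median):
    """Invert the ASTM E1921 median toughness relation
    K_Jc(med) = 30 + 70 * exp(0.019 * (T - T0))   [MPa*sqrt(m), T in deg C]
    to estimate the reference temperature T0 from a mono-temperature dataset."""
    return test_temp_c - math.log((k_jc_median - 30.0) / 70.0) / 0.019

# Hypothetical example: median toughness 100 MPa*sqrt(m) measured at -60 C
# gives T0 = -60 C, since (100 - 30) / 70 = 1 and log(1) = 0.
t0 = t0_from_median(-60.0, 100.0)
```

    Estimating the statistical uncertainty in T0, the focus of the article, then amounts to propagating the scatter of the K_Jc data through this inversion.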

  15. Structural neurobiological correlates of Mayer-Salovey-Caruso Emotional Intelligence Test performance in early course schizophrenia.

    PubMed

    Wojtalik, Jessica A; Eack, Shaun M; Keshavan, Matcheri S

    2013-01-10

    The Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) is a key measure of social cognition in schizophrenia that has good psychometric properties and is recommended by the MATRICS committee. As a way to further investigate the validity of the MSCEIT, this study sought to examine the neurobiological correlates of MSCEIT performance in patients with early course schizophrenia. A total of 51 patients diagnosed with early course, stabilized schizophrenia or schizoaffective disorder completed structural magnetic resonance imaging (MRI) scans and the MSCEIT. Investigation of the associations between MSCEIT performance and gray matter morphology was examined by conducting voxel-based morphometry (VBM) analyses across hypothesized social-cognitive regions of interest using automated anatomical labeling in Statistical Parametric Mapping Software, version 5 (SPM5). All VBM analyses utilized general linear models examining gray matter density partitioned images, adjusting for demographic and illness-related confounds. VBM results were then followed up with confirmatory volumetric analyses. Patients with poorer overall and Facilitating, Understanding, and Managing Emotions subscale performances on the MSCEIT showed significantly reduced gray matter density in the left parahippocampal gyrus. Additionally, attenuated performance on the Facilitating and Managing Emotions subscales was significantly associated with reduced right posterior cingulate gray matter density. All associations observed between MSCEIT performance and gray matter density were supported with confirmatory gray matter volumetric analyses, with the exception of the association between the right posterior cingulate and the facilitation of emotions. These findings provide additional evidence for the MSCEIT as a valid social-cognitive measure by elucidating its correlates with neurobiological structures commonly implicated in emotion processing. 
They also provide biological support for the use of the MSCEIT in cognitive-enhancing clinical trials in schizophrenia.

  16. Mixed Approach Retrospective Analyses of Suicide and Suicidal Ideation for Brand Compared with Generic Central Nervous System Drugs.

    PubMed

    Cheng, Ning; Rahman, Md Motiur; Alatawi, Yasser; Qian, Jingjing; Peissig, Peggy L; Berg, Richard L; Page, C David; Hansen, Richard A

    2018-04-01

    Several different types of drugs acting on the central nervous system (CNS) have previously been associated with an increased risk of suicide and suicidal ideation (broadly referred to as suicide). However, a differential association between brand and generic CNS drugs and suicide has not been reported. This study compares suicide adverse event rates for brand versus generic CNS drugs using multiple sources of data. Selected examples of CNS drugs (sertraline, gabapentin, zolpidem, and methylphenidate) were evaluated via the US FDA Adverse Event Reporting System (FAERS) for a hypothesis-generating study, and then via administrative claims and electronic health record (EHR) data for a more rigorous retrospective cohort study. Disproportionality analyses with reporting odds ratios and 95% confidence intervals (CIs) were used in the FAERS analyses to quantify the association between each drug and reported suicide. For the cohort studies, Cox proportional hazards models were used, controlling for demographic and clinical characteristics as well as the background risk of suicide in the insured population. The FAERS analyses found significantly lower suicide reporting rates for brands compared with generics for all four studied products (Breslow-Day P < 0.05). In the claims- and EHR-based cohort study, the adjusted hazard ratio (HR) was statistically significant only for sertraline (HR 0.58; 95% CI 0.38-0.88). Suicide reporting rates were disproportionately higher for generic than for brand CNS drugs in FAERS, and the adjusted retrospective cohort analyses remained significant only for sertraline. However, even for sertraline, temporal confounding related to the close proximity of black box warnings and generic availability is possible. Additional analyses in larger data sources with additional drugs are needed.
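
    The disproportionality measure used in the FAERS stage, a reporting odds ratio with its 95% confidence interval, comes from a 2x2 table of report counts. A generic sketch with invented counts (not FAERS data):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio with 95% CI from a 2x2 table of report counts:
    a = target drug, suicide;  b = target drug, other events;
    c = comparators, suicide;  d = comparators, other events."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, (lo, hi)

# Invented counts for illustration only:
ror, ci = reporting_odds_ratio(40, 960, 25, 975)
# A confidence interval excluding 1 would signal disproportionate reporting.
```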

  17. Results of the Excreta Bioassay Quality Control Program for April 1, 2009 through March 31, 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonio, Cheryl L.

    2012-07-19

    A total of 58 urine samples and 10 fecal samples were submitted during the report period (April 1, 2009 through March 31, 2010) to General Engineering Laboratories, South Carolina by the Hanford Internal Dosimetry Program (IDP) to check the accuracy, precision, and detection levels of their analyses. Urine analyses for Sr, 238Pu, 239Pu, 241Am, 243Am, 235U, 238U, and elemental uranium and fecal analyses for 241Am, 238Pu and 239Pu were tested this year, as well as four tissue samples for 238Pu, 239Pu, 241Am and 241Pu. The number of QC urine samples submitted during the report period represented 1.3% of the total samples submitted. In addition to the samples provided by IDP, GEL was also required to conduct their own QC program, and submit the results of analyses to IDP. About 33% of the analyses processed by GEL during the third year of this contract were quality control samples. GEL tested the performance of 21 radioisotopes, all of which met or exceeded the specifications in the Statement of Work within statistical uncertainty (Table 4).

  18. Results of The Excreta Bioassay Quality Control Program For April 1, 2010 Through March 31, 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonio, Cheryl L.

    2012-07-19

    A total of 76 urine samples and 10 spiked fecal samples were submitted during the report period (April 1, 2010 through March 31, 2011) to GEL Laboratories, LLC in South Carolina by the Hanford Internal Dosimetry Program (IDP) to check the accuracy, precision, and detection levels of their analyses. Urine analyses for 14C, Sr, 238Pu, 239Pu, 241Am, 243Am, 235U, 238U, and 238U-mass and fecal analyses for 241Am, 238Pu and 239Pu were tested this year. The number of QC urine samples submitted during the report period represented 1.1% of the total samples submitted. In addition to the samples provided by IDP, GEL was also required to conduct their own QC program, and submit the results of analyses to IDP. About 31% of the analyses processed by GEL during the first year of contract 112512 were quality control samples. GEL tested the performance of 23 radioisotopes, all of which met or exceeded the specifications in the Statement of Work within statistical uncertainty, except for a slightly elevated relative bias for 243,244Cm (Table 4).

  19. Usefulness and limitations of various guinea-pig test methods in detecting human skin sensitizers-validation of guinea-pig tests for skin hypersensitivity.

    PubMed

    Marzulli, F; Maguire, H C

    1982-02-01

    Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
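
    The frequencies being analysed, agreement plus false positives and false negatives against the human reference tests, can be tabulated directly. A sketch over ten hypothetical compounds (invented outcomes, not the paper's data):

```python
def predictive_agreement(results):
    """results: list of (animal_positive, human_positive) pairs.
    Returns overall agreement plus false-positive and false-negative
    rates of the animal test against the human reference."""
    tp = sum(1 for a, h in results if a and h)
    tn = sum(1 for a, h in results if not a and not h)
    fp = sum(1 for a, h in results if a and not h)
    fn = sum(1 for a, h in results if not a and h)
    n = len(results)
    return {
        "agreement": (tp + tn) / n,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }

# Ten hypothetical compounds: (guinea-pig result, human result)
data = [(True, True)] * 5 + [(True, False)] + [(False, True)] * 2 + [(False, False)] * 2
stats = predictive_agreement(data)
```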

  20. An evaluation of the Goddard Space Flight Center Library

    NASA Technical Reports Server (NTRS)

    Herner, S.; Lancaster, F. W.; Wright, N.; Ockerman, L.; Shearer, B.; Greenspan, S.; Mccartney, J.; Vellucci, M.

    1979-01-01

    The character and degree of coincidence between the current and future missions, programs, and projects of the Goddard Space Flight Center and the current and future collection, services, and facilities of its library were determined from structured interviews and discussions with various classes of facility personnel. In addition to the tabulation and interpretation of the data from the structured interview survey, five types of statistical analyses were performed to corroborate (or contradict) the survey results and to produce useful information not readily attainable through survey material. Conclusions reached regarding compatibility between needs and holdings, services and buildings, library hours of operation, methods of early detection and anticipation of changing holdings requirements, and the impact of near future programs are presented along with a list of statistics needing collection, organization, and interpretation on a continuing or longitudinal basis.

  1. Integration of statistical and physiological analyses of adaptation of near-isogenic barley lines.

    PubMed

    Romagosa, I; Fox, P N; García Del Moral, L F; Ramos, J M; García Del Moral, B; Roca de Togores, F; Molina-Cano, J L

    1993-08-01

    Seven near-isogenic barley lines, differing for three independent mutant genes, were grown in 15 environments in Spain. Genotype x environment interaction (G x E) for grain yield was examined with the Additive Main Effects and Multiplicative Interaction (AMMI) model. The results of this statistical analysis of multilocation yield data were compared with a morpho-physiological characterization of the lines at two sites (Molina-Cano et al. 1990). The first two principal component axes from the AMMI analysis were strongly associated with the morpho-physiological characters. The independent but parallel discrimination among genotypes reflects genetic differences and highlights the power of the AMMI analysis as a tool to investigate G x E. Characters that appear to be positively associated with yield in the germplasm under study could be identified for some environments.
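
    The additive half of AMMI removes the grand mean and the genotype and environment main effects, leaving a residual matrix that holds only G x E interaction; AMMI then decomposes that matrix by singular value decomposition, which is omitted in this sketch. All yields below are invented:

```python
def gxe_residuals(yields):
    """Additive main-effects decomposition of a genotype x environment
    table: subtract grand mean, genotype effects and environment effects.
    The residual matrix carries only G x E interaction; full AMMI would
    then decompose it by singular value decomposition (omitted here)."""
    g, e = len(yields), len(yields[0])
    grand = sum(map(sum, yields)) / (g * e)
    g_eff = [sum(row) / e - grand for row in yields]
    e_eff = [sum(yields[i][j] for i in range(g)) / g - grand for j in range(e)]
    return [[yields[i][j] - grand - g_eff[i] - e_eff[j] for j in range(e)]
            for i in range(g)]

# Three lines x three environments, invented yields (t/ha):
table = [[4.0, 5.0, 6.0],
         [3.5, 4.5, 5.5],
         [4.5, 6.5, 5.5]]
res = gxe_residuals(table)
# Each row and column of `res` sums to ~0; nonzero cells are pure G x E.
```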

  2. Low-dose ionizing radiation increases the mortality risk of solid cancers in nuclear industry workers: A meta-analysis.

    PubMed

    Qu, Shu-Gen; Gao, Jin; Tang, Bo; Yu, Bo; Shen, Yue-Ping; Tu, Yu

    2018-05-01

    Low-dose ionizing radiation (LDIR) may increase the mortality of solid cancers in nuclear industry workers, but only a few individual cohort studies exist, and the available reports have low statistical power. The aim of the present study was to assess solid cancer mortality risk from LDIR in the nuclear industry using standardized mortality ratios (SMRs) and 95% confidence intervals. A systematic literature search of the PubMed and Embase databases identified 27 studies relevant to this meta-analysis. Statistical significance was found for total, solid and lung cancers, with meta-SMR values of 0.88, 0.80, and 0.89, respectively. There was evidence of stochastic effects of IR, but more definitive conclusions require additional analyses using standardized protocols to determine whether LDIR increases the risk of solid cancer-related mortality.

  3. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research requires not only advanced professional knowledge but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p<0.001), 50.9% (680/1,335) compared to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), 92.7% (945/1,019) compared to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p<0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), although some serious defects persisted. 
Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824
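
    The comparisons of proportions reported here are standard Pearson chi-square tests for a 2x2 table (this is a generic sketch, not the authors' code). Using the study-design error counts, 680/1,335 in 1998 versus 669/1,578 in 2008, recovers the reported statistic of 21.22:

```python
def chi_square_2x2(x1, n1, x2, n2):
    """Pearson chi-square statistic (1 df, no continuity correction)
    comparing two proportions x1/n1 and x2/n2."""
    observed = [[x1, n1 - x1], [x2, n2 - x2]]
    n = n1 + n2
    col_totals = [x1 + x2, n - x1 - x2]
    chi2 = 0.0
    for row, row_total in zip(observed, (n1, n2)):
        for cell, col_total in zip(row, col_totals):
            expected = row_total * col_total / n
            chi2 += (cell - expected) ** 2 / expected
    return chi2

# Study-design error/defect counts, 1998 vs 2008, from the abstract:
chi2 = chi_square_2x2(680, 1335, 669, 1578)  # ~21.2, matching the report
```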

  4. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment.

    PubMed

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P; Patterson, Nick; Price, Alkes L

    2014-10-15

    Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1-5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case-control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of [Formula: see text] association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. 
We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. A publicly available software package is available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu. Supplementary materials are available at Bioinformatics online.
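
    The core conditional-expectation step can be sketched for the smallest nontrivial case: one untyped SNP imputed from two typed SNPs. This toy version re-standardizes the imputed statistic to unit variance under the null and omits the paper's adjustment for the reference panel's finite sample size; all LD values are invented:

```python
import math

def impute_z(z_typed, r_typed, r_target):
    """Toy Gaussian imputation of a z-score at an untyped SNP from two
    typed SNPs: weights w = S_tt^{-1} s_t (2x2 LD matrix inverted by
    hand), with the result re-standardized to unit variance under the
    null. The finite-reference-panel adjustment is omitted."""
    r12 = r_typed            # LD between the two typed SNPs
    s1, s2 = r_target        # LD of the untyped SNP with each typed SNP
    det = 1.0 - r12 * r12
    w1 = (s1 - r12 * s2) / det
    w2 = (s2 - r12 * s1) / det
    variance = w1 * s1 + w2 * s2          # s_t' S_tt^{-1} s_t
    raw = w1 * z_typed[0] + w2 * z_typed[1]
    return raw / math.sqrt(variance)

# Invented LD pattern and observed z-scores at the typed SNPs:
z_imp = impute_z(z_typed=(3.0, 2.5), r_typed=0.5, r_target=(0.6, 0.6))
```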

  5. Tafamidis delays disease progression in patients with early stage transthyretin familial amyloid polyneuropathy: additional supportive analyses from the pivotal trial.

    PubMed

    Keohane, Denis; Schwartz, Jeffrey; Gundapaneni, Balarama; Stewart, Michelle; Amass, Leslie

    2017-03-01

    Tafamidis, a non-NSAID highly specific transthyretin stabilizer, delayed neurologic disease progression as measured by Neuropathy Impairment Score-Lower Limbs (NIS-LL) in an 18-month, double-blind, placebo-controlled randomized trial in 128 patients with early-stage transthyretin V30M familial amyloid polyneuropathy (ATTRV30M-FAP). The current post hoc analyses aimed to further evaluate the effects of tafamidis in delaying ATTRV30M-FAP progression in this trial. Pre-specified, repeated-measures analysis of change from baseline in NIS-LL in this trial (ClinicalTrials.gov NCT00409175) was repeated with addition of baseline as covariate and multiple imputation analysis for missing data by treatment group. Change in NIS-LL plus three small-fiber nerve tests (NIS-LL + Σ3) and NIS-LL plus seven nerve tests (NIS-LL + Σ7) were assessed without baseline as covariate. Treatment outcomes over the NIS-LL, Σ3, Σ7, modified body mass index and Norfolk Quality of Life-Diabetic Neuropathy Total Quality of Life Score were also examined using multivariate analysis techniques. Neuropathy progression based on NIS-LL change from baseline to Month 18 remained significantly reduced for tafamidis versus placebo in the baseline-adjusted and multiple imputation analyses. NIS-LL + Σ3 and NIS-LL + Σ7 captured significant treatment group differences. Multivariate analyses provided strong statistical evidence for a superior tafamidis treatment effect. These supportive analyses confirm that tafamidis delays neurologic progression in early-stage ATTRV30M-FAP. NCT00409175.

  6. Methylenetetrahydrofolate Reductase Gene Polymorphisms (C677T and A1298C) and Hemorrhagic Stroke in Moroccan Patients.

    PubMed

    Abidi, Omar; Haissam, Mohammed; Nahili, Halima; El Azhari, Abdessamad; Hilmani, Said; Barakat, Abdelhamid

    2018-07-01

    The number of deaths from hemorrhagic strokes is about twice the number of deaths from ischemic strokes. Genetic risk assessment could play an important role in preventive and therapeutic strategies. The present study aimed to evaluate whether MTHFR gene polymorphisms could increase the risk of cerebral hemorrhage in Moroccan patients. A total of 113 patients with hemorrhagic stroke and 323 healthy controls were included in this case-control study. The C677T (rs1801133) and A1298C (rs1801131) MTHFR gene polymorphisms were genotyped by the Polymerase Chain Reaction-Restriction Fragment Length Polymorphism (PCR-RFLP) method in all patients and controls. The genotype and allele frequencies were compared between groups using appropriate statistical analyses. Both groups, patients and controls, were in accordance with the Hardy-Weinberg equilibrium. For the C677T polymorphism, the frequencies of the CC, CT, and TT genotypes were 50.44% versus 46.13%, 39.82% versus 43.03%, and 9.73% versus 10.84% in controls versus patients, respectively, whereas for the A1298C polymorphism, the frequencies of the AA, AC, and CC genotypes were 56.64% versus 57.59%, 40.71% versus 37.15%, and 2.65% versus 5.26% in controls versus patients, respectively. No statistically significant difference was found between patient and control frequencies (P > .05) under the additive, recessive, or dominant models. Additional analyses including genotype combinations, allele frequencies, and hemorrhagic stroke patient subtypes did not show any statistically significant difference between controls and patients/patient subgroups. Our findings suggest no association between MTHFR gene polymorphisms and susceptibility to hemorrhagic stroke in Moroccan patients. Further investigations should be conducted to elucidate the roles of other gene variants in the pathogenesis of this condition.
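
    The Hardy-Weinberg check reported for both groups is a one-degree-of-freedom goodness-of-fit test against the genotype counts expected from the allele frequencies. A sketch with hypothetical counts, not the study's data:

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit statistic (1 df) against the genotype
    counts expected under Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical control genotype counts for a biallelic SNP:
chi2 = hwe_chi_square(160, 140, 23)
# A statistic below the 3.84 critical value (alpha = 0.05, 1 df) is
# consistent with Hardy-Weinberg equilibrium.
```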

  7. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. 
This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.

  8. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. 
This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964
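    The expansion estimator, its calibrated (ratio) counterpart, and a bootstrap standard error can be sketched in a few lines. This is a minimal illustration of the ideas, not the study's actual estimator (which accounts for the full complex survey design); the facility costs, dose counts, and totals below are invented for illustration.

```python
import random
import statistics

def estimate_total_cost(sample_costs, n_facilities):
    """Scale the sample mean cost up to all facilities (simple expansion estimator)."""
    return n_facilities * statistics.mean(sample_costs)

def calibrated_total(sample_costs, sample_doses, total_doses):
    """Ratio (calibration) estimator: exploit a known program-wide auxiliary
    total -- here, doses administered -- to adjust the cost estimate."""
    cost_per_dose = sum(sample_costs) / sum(sample_doses)
    return cost_per_dose * total_doses

def bootstrap_se(sample_costs, n_facilities, n_boot=2000, seed=1):
    """Standard error of the expansion estimator via the nonparametric bootstrap:
    resample facilities with replacement and take the spread of the estimates."""
    rng = random.Random(seed)
    k = len(sample_costs)
    totals = [
        estimate_total_cost([rng.choice(sample_costs) for _ in range(k)], n_facilities)
        for _ in range(n_boot)
    ]
    return statistics.stdev(totals)
```

    When the auxiliary total (doses) is strongly correlated with cost, the calibrated estimate typically has a smaller bootstrap standard error than the plain expansion estimate, which is the uncertainty reduction the abstract describes.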

  9. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  10. Understanding the drivers on medical workloads: an analysis of spectators at the Australian Football League.

    PubMed

    Zeitz, Kathryn; Haghighi, Pari Delir; Burstein, Frada; Williams, Jeffrey

    2013-06-01

    The present study was designed to further understand the psychosocial drivers of crowds impacting on the demand for healthcare. This involved analysing different spectator crowds for medical usage at mass gatherings; more specifically, whether spectators supporting different football teams (of the Australian Football League) generated different medical usage rates. In total, 317 games were analysed from 10 venues over 2 years. Data were analysed using ANOVA and Pearson correlation tests. Results: Spectators who supported different football teams generated statistically significant differences in patient presentation rates (PPR) (F(15, 618)=1.998, P=0.014). The present study confirmed previous findings that there is a positive correlation between crowd size and PPR at mass gatherings but found a negative correlation between density and PPR (r=-0.206, n=317, P<0.0005). The present study has attempted to scientifically explore psychosocial elements of crowd behaviour as a driver of demand for emergency medical care. In measuring demand for emergency medical services, there is a need to develop a more sophisticated understanding of a variety of drivers in addition to traditional metrics such as temperature, crowd size and other physical elements. In this study we saw that spectators who supported different football teams generated statistically significant differences in PPR. What is known about this topic? Understanding the drivers of emergency medical care is most important in the mass gathering setting. There has been minimal analysis of psychological 'crowd' variables. What does this paper add? This study explores the psychosocial impact of supporting a different team on the PPR of spectators at Australian Football League matches. The value of collecting and analysing these types of data sets is to support more balanced planning, better decision support and knowledge management, and more effective emergency medical demand management. 

What are the implications for practitioners? This information further expands the body of evidence being created to understand the drivers of emergency medical demand and usage. In addition, it supports the planning and management of emergency medical and health-related requirements by increasing our understanding of the effect of elements of 'crowd' that impact on medical usage and emergency healthcare.

  11. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497
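    The one-parameter (Rasch) model underlying the SRBCI analysis reduces to a single logistic curve per item. A minimal sketch, with made-up ability and difficulty values purely for illustration:

```python
import math

def rasch_p(theta, b):
    """Probability that a student with ability theta answers an item of
    difficulty b correctly under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected inventory score: the sum of per-item success probabilities."""
    return sum(rasch_p(theta, b) for b in difficulties)
```

    A student whose ability equals an item's difficulty has a 50% chance of answering it correctly; items spread widely in difficulty, as the abstract reports, discriminate across the whole novice-to-expert spectrum.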

  12. Analysis of Feature Intervisibility and Cumulative Visibility Using GIS, Bayesian and Spatial Statistics: A Study from the Mandara Mountains, Northern Cameroon

    PubMed Central

    Wright, David K.; MacEachern, Scott; Lee, Jaeyong

    2014-01-01

    The locations of diy-geδ-bay (DGB) sites in the Mandara Mountains, northern Cameroon are hypothesized to occur as a function of their ability to see and be seen from points on the surrounding landscape. A series of geostatistical, two-way and Bayesian logistic regression analyses were performed to test two hypotheses related to the intervisibility of the sites to one another and their visual prominence on the landscape. We determine that the intervisibility of the sites to one another is highly statistically significant when compared to 10 stratified-random permutations of DGB sites. Bayesian logistic regression additionally demonstrates that the visibility of the sites to points on the surrounding landscape is statistically significant. The location of sites appears to have also been selected on the basis of lower slope than random permutations of sites. Using statistical measures, many of which are not commonly employed in archaeological research, to evaluate aspects of visibility on the landscape, we conclude that the placement of DGB sites improved their conspicuousness for enhanced ritual, social cooperation and/or competition purposes. PMID:25383883
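    The comparison of observed site intervisibility against random permutations of site locations is a Monte Carlo permutation test. The sketch below uses a toy distance-based visibility rule and invented coordinates; the actual study used line-of-sight computations over terrain in a GIS, not a distance threshold.

```python
import random

def count_intervisible(sites, max_dist):
    """Toy visibility rule: two sites 'see' each other if within max_dist.
    (A real analysis would test line of sight over a digital elevation model.)"""
    n = 0
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            (x1, y1), (x2, y2) = sites[i], sites[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist:
                n += 1
    return n

def permutation_p_value(observed_sites, candidate_points, max_dist, n_perm=999, seed=7):
    """One-sided Monte Carlo p-value: the share of random site placements with
    at least as many intervisible pairs as the observed configuration."""
    rng = random.Random(seed)
    obs = count_intervisible(observed_sites, max_dist)
    k = len(observed_sites)
    ge = sum(
        count_intervisible(rng.sample(candidate_points, k), max_dist) >= obs
        for _ in range(n_perm)
    )
    return (ge + 1) / (n_perm + 1)
```

    A small p-value indicates the observed sites are more intervisible than chance placement would produce, which is the logic behind the abstract's significance claim.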

  13. Cognitive predictors of balance in Parkinson's disease.

    PubMed

    Fernandes, Ângela; Mendes, Andreia; Rocha, Nuno; Tavares, João Manuel R S

    2016-06-01

    Postural instability is one of the most incapacitating symptoms of Parkinson's disease (PD) and appears to be related to cognitive deficits. This study aims to determine the cognitive factors that can predict deficits in static and dynamic balance in individuals with PD. A sociodemographic questionnaire was used to characterize the 52 individuals with PD who took part. The Trail Making Test, Rule Shift Cards Test, and Digit Span Test assessed the executive functions. The static balance was assessed using a plantar pressure platform, and dynamic balance was based on the Timed Up and Go Test. The results were statistically analysed using SPSS Statistics software through linear regression analysis. The results show that a statistically significant model based on cognitive outcomes was able to explain the variance of motor variables. Also, the explanatory value of the model tended to increase with the addition of individual and clinical variables, although the resulting model was not statistically significant. The model explained 25-29% of the variability of the Timed Up and Go Test, while for the anteroposterior displacement it was 23-34%, and for the mediolateral displacement it was 24-39%. From the findings, we conclude that the cognitive performance, especially the executive functions, is a predictor of balance deficit in individuals with PD.
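    The core of such a regression analysis, for a single cognitive predictor, is ordinary least squares; the explained-variance figures quoted above are R² values. A minimal single-predictor sketch (the study itself used multivariable models in SPSS; the data here are invented):

```python
def ols_fit(x, y):
    """Least-squares line y ~ a + b*x for one predictor (e.g. an executive-
    function score predicting Timed Up and Go time)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    a = my - b * mx
    return a, b

def r_squared(x, y):
    """Share of outcome variance explained by the predictor (the 25-39%
    figures in the abstract are this quantity, from richer models)."""
    a, b = ols_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```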

  14. a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its versatility and its almost all-weather, day-and-night imaging capability. In view of the fact that the inherent statistical distribution of speckle in SAR images is not used to extract collapsed building information, this paper proposed a novel texture feature of statistical models of SAR images to extract the collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution from SAR images is used to reflect the uniformity of the target to extract the collapsed building. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but also can be applied to extract collapsed building information from single-, dual- or full-polarization SAR data. The RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyse the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for the data selection of collapsed building information extraction.

  15. Detecting most influencing courses on students grades using block PCA

    NASA Astrophysics Data System (ADS)

    Othman, Osama H.; Gebril, Rami Salah

    2014-12-01

    One of the modern solutions adopted in dealing with the problem of a large number of variables in statistical analyses is Block Principal Component Analysis (Block PCA). This modified technique can be used to reduce the vertical dimension (variables) of the data matrix Xn×p by selecting a smaller number of variables (say m) containing most of the statistical information. These selected variables can then be employed in further investigations and analyses. Block PCA is an adapted multistage technique of the original PCA. It involves the application of Cluster Analysis (CA) and variable selection through sub-principal component scores (PCs). The application of Block PCA in this paper is a modified version of the original work of Liu et al (2002). The main objective was to apply PCA to each group of variables (established using cluster analysis), instead of involving the entire large set of variables, which has proved unreliable. In this work, Block PCA is used to reduce the size of a huge data matrix ((n = 41) × (p = 251)) consisting of the Grade Point Averages (GPAs) of students in 251 courses (variables) in the faculty of science at Benghazi University. In other words, we are constructing a smaller analytical data matrix of the students' GPAs with fewer variables that retain most of the variation (statistical information) in the original database. By applying Block PCA, 12 courses were found to `absorb' most of the variation or influence in the original data matrix, and hence are worth keeping for future statistical exploration and analytical studies. In addition, the course Independent Study (Math.) was found to be the most influential course on students' GPA among the 12 selected courses.
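    The cluster-then-select idea can be sketched very simply: group strongly correlated variables, then keep one representative per group. The sketch below substitutes a greedy correlation clustering and a variance-based pick for the paper's actual cluster-analysis-plus-sub-PCA procedure, and the threshold and data are invented; it illustrates the dimension-reduction logic only.

```python
import statistics

def pearson(a, b):
    """Pearson correlation between two equal-length variable columns."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def select_block_representatives(columns, r_threshold=0.7):
    """Greedy stand-in for the cluster-then-PCA step of Block PCA:
    group variables whose correlation with a cluster seed exceeds the
    threshold, then keep the highest-variance member of each group."""
    clusters = []
    for name, col in columns.items():
        for cluster in clusters:
            if abs(pearson(col, columns[cluster[0]])) >= r_threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return [max(c, key=lambda n: statistics.variance(columns[n])) for c in clusters]
```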

  16. Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA.

    PubMed

    Festing, M F

    2001-01-01

    In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
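    For a completely randomised design, the analysis of variance mentioned above compares between-group to within-group variation. A minimal sketch of the F statistic (the invented data stand in for, say, replicate cultures under different treatments; a full analysis would also look up the p-value from the F distribution):

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a completely randomised design: between-group
    mean square divided by within-group mean square."""
    all_obs = [x for g in groups for x in g]
    grand = statistics.mean(all_obs)
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

    A randomised-block design extends this by subtracting a block (e.g. replicate-in-time) sum of squares from the within-group term, which is why blocking increases power when blocks differ systematically.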

  17. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  18. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non normal distributions violate the assumptions of analyses in which test statistics are...

  19. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  20. Remote sensing data acquisition, analysis and archival. Volume 1. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stringer, W.J.; Dean, K.G.; Groves, J.E.

    1993-03-25

    The project specialized in the acquisition and dissemination of satellite imagery and its utilization for case-specific and statistical analyses of offshore environmental conditions, particularly those involving sea ice. During the duration of this contract, 854 Landsat Multispectral Scanner and 2 Landsat Thematic Mapper scenes, 8,576 Advanced Very High Resolution Radiometer images, and 31,000 European, Earth Resources Satellite, Synthetic Aperture Radar images were archived. Direct assistance was provided to eight Minerals Management Service (MMS)-sponsored studies, including analyses of Port Moller circulation, Bowhead whale migration, distribution, population and behavioral studies, Beaufort Sea fisheries, oil spill trajectory model development, and Kasegaluk Lagoon environmental assessments. In addition, under this Cooperative Agreement several complete studies were undertaken based on analysis of satellite imagery. The topics included: Kasegaluk Lagoon transport, the effect of winter storms on arctic ice, the relationship between ice surface temperatures as measured by buoys and passive microwave imagery, unusual cloud forms following lead-openings, and analyses of Chukchi and Bering sea polynyas.


  1. Impact of extreme temperatures on daily mortality in Madrid (Spain) among the 45-64 age-group.

    PubMed

    Díaz, Julio; Linares, Cristina; Tobías, Aurelio

    2006-07-01

    This paper analyses the relationship between extreme temperatures and mortality among persons aged 45-64 years. Daily mortality in Madrid was analysed by sex and cause, from January 1986 to December 1997. Quantitative analyses were performed using generalised additive models, with other covariables, such as influenza, air pollution and seasonality, included as controls. Our results showed that impact on mortality was limited for temperatures ranging from the 5th to the 95th percentiles, and increased sharply thereafter. During the summer period, the effect of heat was detected solely among males in the target age group, with an attributable risk (AR) of 13.3% for circulatory causes. Similarly, NO(2) concentrations registered the main statistically significant associations in females, with an AR of 15% when circulatory causes were considered. During winter, the impact of cold was exclusively observed among females having an AR of 7.7%. The magnitude of the AR indicates that the impact of extreme temperature is by no means negligible.
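    The attributable risk (AR) percentages quoted above are commonly computed as the attributable fraction among the exposed, derived from the relative risk. A minimal sketch, assuming AR here denotes that standard quantity (the paper's generalised additive models are not reproduced):

```python
def attributable_risk_pct(relative_risk):
    """Attributable fraction among the exposed, as a percentage:
    AR% = 100 * (RR - 1) / RR. An RR of 1 implies no attributable deaths."""
    return 100.0 * (relative_risk - 1.0) / relative_risk
```

    For example, an AR of 13.3% corresponds to a relative risk of roughly 1.15: about one in eight of the deaths among the exposed would be attributed to the extreme-temperature exposure.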

  2. Common variants of the EPDR1 gene and the risk of Dupuytren’s disease.

    PubMed

    Dębniak, T; Żyluk, A; Puchalski, P; Serrano-Fernandez, P

    2013-10-01

    The object of this study was the investigation of 3 common single nucleotide polymorphism variants of the ependymin-related 1 gene (EPDR1) and their association with the occurrence of Dupuytren's disease. DNA samples were obtained from the peripheral blood of 508 consecutive patients. The control group comprised 515 healthy adults who were age-matched with the Dupuytren's patients. The 3 common variants were analysed using TaqMan® genotyping assays and sequencing. The differences in the frequencies of variants of single nucleotide polymorphisms in patients and the control group were statistically tested. Additionally, haplotype frequency and linkage disequilibrium were analysed for these variants. A statistically significant association was noted between the rs16879765_CT, rs16879765_TT and rs13240429_AA variants and Dupuytren's disease. Two haplotypes, rs2722280_C+rs13240429_A+rs16879765_C and rs2722280_C+rs13240429_G+rs16879765_T, were found to be statistically significantly associated with Dupuytren's disease. Moreover, we found that the rs13240429 and rs16879765 variants were in strong linkage disequilibrium, while rs2722280 was only in moderate linkage disequilibrium. No significant differences were found in the frequencies of the variants of the gene between the groups with a positive and negative familial history of Dupuytren's disease. In conclusion, the results of this study suggest that the EPDR1 gene can be added to a growing list of genes associated with Dupuytren's disease development. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Statistical Characterization of School Bus Drive Cycles Collected via Onboard Logging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, A.; Walkowicz, K.

    In an effort to characterize the dynamics typical of school bus operation, National Renewable Energy Laboratory (NREL) researchers set out to gather in-use duty cycle data from school bus fleets operating across the country. Employing a combination of Isaac Instruments GPS/CAN data loggers in conjunction with existing onboard telemetric systems resulted in the capture of operating information for more than 200 individual vehicles in three geographically unique domestic locations. In total, over 1,500 individual operational route shifts from Washington, New York, and Colorado were collected. Upon completing the collection of in-use field data using either NREL-installed data acquisition devices or existing onboard telemetry systems, large-scale duty-cycle statistical analyses were performed to examine underlying vehicle dynamics trends within the data and to explore vehicle operation variations between fleet locations. Based on the results of these analyses, high, low, and average vehicle dynamics requirements were determined, resulting in the selection of representative standard chassis dynamometer test cycles for each condition. In this paper, the methodology and accompanying results of the large-scale duty-cycle statistical analysis are presented, including graphical and tabular representations of a number of relationships between key duty-cycle metrics observed within the larger data set. In addition to presenting the results of this analysis, conclusions are drawn and presented regarding potential applications of advanced vehicle technology as it relates specifically to school buses.

  4. Do patients with brain metastases selected for whole brain radiotherapy have worse baseline quality of life as compared to those for radiosurgery or neurosurgery (with or without whole brain radiotherapy)?

    PubMed

    Chow, Ronald; Tsao, May; Pulenzas, Natalie; Zhang, Liying; Sahgal, Arjun; Cella, David; Soliman, Hany; Danjoux, Cyril; DeAngelis, Carlo; Vuong, Sherlyn; Chow, Edward

    2016-01-01

    The purpose was to examine the baseline characteristics, symptoms and quality of life (QOL) in patients who receive different treatments for brain metastases. Eligible patients were divided and analysed based on their treatment: whole brain radiotherapy (WBRT) alone versus stereotactic radiosurgery (SRS) or neurosurgery with or without WBRT. The Functional Assessment of Cancer Therapy-Brain (FACT-Br) items were grouped according to different domains for summary scores. The domains used for summary scores were physical, social/family, emotional, functional well-being (FWB) and additional concerns. A total of 120 patients were enrolled, with 37 treated with WBRT alone and 83 with SRS or neurosurgery with or without WBRT. Of the 50 baseline FACT-Br items, only five items (I feel ill; I get support from my friends; I worry about dying; I have difficulty expressing my thoughts, I am able to put my thoughts into action) were statistically worse in patients treated with WBRT alone (P<0.05). Patients who received SRS or surgery with or without WBRT had statistically (P<0.05) higher scores for the FWB domain, additional concerns domain, and FACT-G total scores, indicating better QOL. Patients selected for WBRT alone reported statistically different baseline QOL as compared to patients who were treated with SRS or neurosurgery (with or without WBRT).

  5. [A documentation procedure for community social psychiatry services--a pilot project in Bielefeld and Minden].

    PubMed

    Hellmeier, W; Genin, G; Klewe-Niemann, S

    1996-04-01

    The status of health reporting (on community levels) has improved considerably during recent years. It is being increasingly used as an instrument for planning, controlling and evaluating political processes. In addition to individual studies, the statistics within the departments of the health authorities are an important factor for meaningful health reporting on a local level. The IDIS (from Jan. 1st, 1995 LOGD) and the social psychiatric services of the Minden-Lübbecke district and the city of Bielefeld have developed a programme for automation-aided management of the statistics for social psychiatric services on a local level. Details on the personal situation and illnesses of the clients as well as on the activities of the services staff are recorded and analysed. Based on the WHO programme EPI-Info 6.01, the documentation programme SPD-STAT was developed. This programme is menu-driven and, in addition to the functions for statistical data input and retrieval of fixed table sets, also offers the possibility of processing data with the full functionality of the ANALYSIS module of EPI-Info. Thus interactive ad hoc evaluations for current questions are made possible. Using SPD-STAT in as many local regions in NRW as possible may be a big step forward for health reporting on local levels as well as for health reporting on a state level.

  6. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (random clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
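    The distinction between relative risk and odds ratio is easy to make concrete. Both start from the same 2×2 table of events (a, c) and non-events (b, d) in exposed and unexposed groups, but combine the cells differently; the counts below are invented for illustration:

```python
def relative_risk(a, b, c, d):
    """RR = risk in exposed / risk in unexposed = (a/(a+b)) / (c/(c+d)).
    Appropriate for cohort studies and randomised trials, where group
    sizes reflect the population at risk."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = ad/bc. Appropriate for case-control studies,
    where case and control counts are fixed by design and risks cannot
    be estimated directly."""
    return (a * d) / (b * c)
```

    For rare outcomes the two are close (as below), but for common outcomes the odds ratio overstates the relative risk, which is why the choice of measure matters.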

  7. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  8. Improved score statistics for meta-analysis in single-variant and gene-level association studies.

    PubMed

    Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo

    2018-06-01

    Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
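    For context, the standard combination that this paper improves on is the sample-size-weighted Z (Stouffer) method, sketched below with invented per-study Z scores. This is the baseline approach, not the authors' improved score statistics, and it is the one whose power degrades under unbalanced case-control ratios:

```python
import math

def stouffer_weighted_z(z_scores, sample_sizes):
    """Sample-size-weighted Z combination of per-study score statistics:
    Z = sum(sqrt(n_i) * z_i) / sqrt(sum(n_i))."""
    num = sum(math.sqrt(n) * z for z, n in zip(z_scores, sample_sizes))
    return num / math.sqrt(sum(sample_sizes))
```

    Weighting by sqrt(n) treats every sample as equally informative, which is exactly what fails when studies differ sharply in case-control balance.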

  9. A Quantitative Analysis of Latino Acculturation and Alcohol Use: Myth Versus Reality.

    PubMed

    Alvarez, Miriam J; Frietze, Gabriel; Ramos, Corin; Field, Craig; Zárate, Michael A

    2017-07-01

    Research on health among Latinos often focuses on acculturation processes and the associated stressors that influence drinking behavior. Given the common use of acculturation measures and the state of the knowledge on alcohol-related health among Latino populations, the current analyses tested the efficacy of acculturation measures to predict various indicators of alcohol consumption. Specifically, this quantitative review assessed the predictive utility of acculturation on alcohol consumption behaviors (frequency, volume, and quantity). Two main analyses were conducted: a p-curve analysis and a meta-analysis of the observed associations between acculturation and drinking behavior. Results demonstrated that current measures of acculturation are a statistically significant predictor of alcohol use (Z = -20.75, p < 0.0001). The meta-analysis included a cumulative sample size of 29,589 Latino participants across 31 studies. A random-effects model yielded a weighted average correlation of 0.16 (95% confidence interval = 0.12, 0.19). Additional subgroup analyses examined the effects of gender and of the different scales used to measure acculturation. Altogether, results demonstrated that acculturation is a useful predictor of alcohol use. In addition, the meta-analysis revealed that a small positive correlation exists between acculturation and alcohol use in Latinos, with a between-study variance of only 1.5% (τ² = 0.015). Our analyses reveal that the association between current measures of acculturation and alcohol use is relatively small. Copyright © 2017 by the Research Society on Alcoholism.
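The random-effects pooling reported in this abstract can be sketched in a few lines. The following is an illustrative DerSimonian-Laird computation on Fisher z-transformed correlations, not the authors' actual analysis code; function names are made up for the example.

```python
import math

def random_effects_pooled_r(rs, ns):
    """Pool study correlations with a DerSimonian-Laird random-effects
    model on Fisher z-transformed values (illustrative sketch)."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z
    vs = [1.0 / (n - 3) for n in ns]                      # within-study variance
    ws = [1.0 / v for v in vs]                            # fixed-effect weights
    z_fe = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(w * (z - z_fe) ** 2 for w, z in zip(ws, zs))
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)
    # random-effects weights, pooled estimate, back-transform to r
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re), tau2
```

With homogeneous inputs the between-study variance estimate collapses to zero and the pooled correlation equals the common value, which is a quick sanity check on the implementation.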

  10. Identification of T1D susceptibility genes within the MHC region by combining protein interaction networks and SNP genotyping data

    PubMed Central

    Brorsson, C.; Hansen, N. T.; Lage, K.; Bergholdt, R.; Brunak, S.; Pociot, F.

    2009-01-01

    Aim To develop novel methods for identifying new genes that contribute to the risk of developing type 1 diabetes within the Major Histocompatibility Complex (MHC) region on chromosome 6, independently of the known linkage disequilibrium (LD) between human leucocyte antigen (HLA)-DRB1, -DQA1, -DQB1 genes. Methods We have developed a novel method that combines single nucleotide polymorphism (SNP) genotyping data with protein–protein interaction (ppi) networks to identify disease-associated network modules enriched for proteins encoded from the MHC region. Approximately 2500 SNPs located in the 4 Mb MHC region were analysed in 1000 affected offspring trios generated by the Type 1 Diabetes Genetics Consortium (T1DGC). The most associated SNP in each gene was chosen and genes were mapped to ppi networks for identification of interaction partners. The association testing and resulting interacting protein modules were statistically evaluated using permutation. Results A total of 151 genes could be mapped to nodes within the protein interaction network and their interaction partners were identified. Five protein interaction modules reached statistical significance using this approach. The identified proteins are well known in the pathogenesis of T1D, but the modules also contain additional candidates that have been implicated in β-cell development and diabetic complications. Conclusions The extensive LD within the MHC region makes it important to develop new methods for analysing genotyping data for identification of additional risk genes for T1D. Combining genetic data with knowledge about functional pathways provides new insight into mechanisms underlying T1D. PMID:19143816

  11. Matrix metalloproteinases and educational attainment in refractive error: evidence of gene-environment interactions in the AREDS study

    PubMed Central

    Wojciechowski, Robert; Yee, Stephanie S.; Simpson, Claire L.; Bailey-Wilson, Joan E.; Stambolian, Dwight

    2012-01-01

    Purpose A previous study of Old Order Amish families has shown association of ocular refraction with markers proximal to matrix metalloproteinase (MMP) genes MMP1 and MMP10 and intragenic to MMP2. We conducted a candidate gene replication study of association between refraction and single nucleotide polymorphisms (SNPs) within these genomic regions. Design Candidate gene genetic association study. Participants 2,000 participants drawn from the Age Related Eye Disease Study (AREDS) were chosen for genotyping. After quality control filtering, 1912 individuals were available for analysis. Methods Microarray genotyping was performed using the HumanOmni 2.5 bead array. SNPs originally typed in the previous Amish association study were extracted for analysis. In addition, haplotype tagging SNPs were genotyped using TaqMan assays. Quantitative trait association analyses of mean spherical equivalent refraction (MSE) were performed on 30 markers using linear regression models and an additive genetic risk model, while adjusting for age, sex, education, and population substructure. Post-hoc analyses were performed after stratifying on a dichotomous education variable. Pointwise (P-emp) and multiple-test study-wise (P-multi) significance levels were calculated empirically through permutation. Main outcome measures MSE was used as a quantitative measure of ocular refraction. Results The mean age and ocular refraction were 68 years (SD=4.7) and +0.55 D (SD=2.14), respectively. Pointwise statistical significance was obtained for rs1939008 (P-emp=0.0326). No SNP attained statistical significance after correcting for multiple testing. In stratified analyses, multiple SNPs reached pointwise significance in the lower-education group: 2 of these were statistically significant after multiple testing correction. The two highest-ranking SNPs in Amish families (rs1939008 and rs9928731) showed pointwise P-emp<0.01 in the lower-education stratum of AREDS participants. 
Conclusions We show suggestive evidence of replication of an association signal for ocular refraction to a marker between MMP1 and MMP10. We also provide evidence of a gene-environment interaction between previously reported markers and education on refractive error. Variants in MMP1-MMP10 and MMP2 regions appear to affect population variation in ocular refraction in environmental conditions less favorable for myopia development. PMID:23098370
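The empirical pointwise (P-emp) and study-wise (P-multi) significance levels mentioned in this record are typically obtained with a max-statistic permutation scheme. A minimal sketch, assuming larger statistics are more extreme; the `null_draw` callback and all names here are hypothetical:

```python
import random

def perm_pvalues(stats_obs, null_draw, n_perm=2000, seed=1):
    """Empirical pointwise and family-wise (max-statistic) p-values by
    permutation. null_draw(rng) must return one permuted statistic per
    marker under the null (illustrative sketch)."""
    rng = random.Random(seed)
    m = len(stats_obs)
    count_pt = [0] * m   # permuted stat at marker j >= observed at j
    count_fw = [0] * m   # max permuted stat anywhere >= observed at j
    for _ in range(n_perm):
        perm = null_draw(rng)
        pmax = max(perm)
        for j, s in enumerate(stats_obs):
            if perm[j] >= s:
                count_pt[j] += 1
            if pmax >= s:
                count_fw[j] += 1
    # add-one correction keeps p-values strictly positive
    p_pt = [(c + 1) / (n_perm + 1) for c in count_pt]
    p_fw = [(c + 1) / (n_perm + 1) for c in count_fw]
    return p_pt, p_fw
```

Because the family-wise count uses the maximum over all markers, P-multi is never smaller than P-emp for the same marker, matching the pointwise-versus-study-wise distinction in the abstract.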

  12. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies between two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression model and a random coefficient model were selected for the CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting reported renal function decline rates with aging, because their estimation is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
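The CS-versus-LT contrast in this record can be illustrated with a toy example. Averaging per-subject OLS slopes, as below, is only a crude two-stage stand-in for the random coefficient (mixed) model the study actually used; the data and names are invented for illustration.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def cross_sectional_slope(subjects):
    """CS analysis: regress using one (age, value) observation per subject."""
    xs = [obs[0][0] for obs in subjects.values()]
    ys = [obs[0][1] for obs in subjects.values()]
    return ols_slope(xs, ys)

def longitudinal_slope(subjects):
    """Crude LT stand-in: average the per-subject OLS slopes
    (a two-stage approximation to a random coefficient model)."""
    slopes = [ols_slope([a for a, _ in obs], [v for _, v in obs])
              for obs in subjects.values()]
    return sum(slopes) / len(slopes)
```

When subjects with higher baseline ages also start from lower values, the cross-sectional slope mixes between-subject differences into the rate, while the longitudinal estimate recovers the within-subject decline, the same direction of discrepancy the study reports.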

  13. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. 
Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  14. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. 
IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  15. Analysis methodology and development of a statistical tool for biodistribution data from internal contamination with actinides.

    PubMed

    Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne

    2017-03-01

    The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and a better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to understand better the influence of contamination characteristics on actinide biokinetics. 
Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.

  16. Statistical Parametric Mapping to Identify Differences between Consensus-Based Joint Patterns during Gait in Children with Cerebral Palsy.

    PubMed

    Nieuwenhuys, Angela; Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne

    2017-01-01

    Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with 'no or minor gait deviations' (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with 'no or minor gait deviations' differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. 
Based on these findings, suggestions to improve pattern definitions were made.
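The pointwise core of statistical parametric mapping is a test statistic computed at every node of the gait cycle. A minimal two-sample t sketch is below; full SPM additionally controls the family-wise error across the cycle (e.g. via random field theory), which this sketch omits, and all names are illustrative.

```python
import math

def pointwise_t(group_a, group_b):
    """Welch-style two-sample t statistic at each time point of a set of
    equal-length waveforms (the pointwise core of SPM; sketch only)."""
    n_points = len(group_a[0])
    out = []
    for t in range(n_points):
        xa = [w[t] for w in group_a]
        xb = [w[t] for w in group_b]
        na, nb = len(xa), len(xb)
        ma, mb = sum(xa) / na, sum(xb) / nb
        va = sum((x - ma) ** 2 for x in xa) / (na - 1)  # sample variances
        vb = sum((x - mb) ** 2 for x in xb) / (nb - 1)
        out.append((ma - mb) / math.sqrt(va / na + vb / nb))
    return out
```

Thresholding the resulting curve then highlights the regions of the gait cycle where the two groups' kinematic waveforms differ, which is how the locations of difference mentioned in the abstract are identified.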

  17. Statistical Parametric Mapping to Identify Differences between Consensus-Based Joint Patterns during Gait in Children with Cerebral Palsy

    PubMed Central

    Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne

    2017-01-01

    Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with ‘no or minor gait deviations’ (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with ‘no or minor gait deviations’ differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. 
Based on these findings, suggestions to improve pattern definitions were made. PMID:28081229

  18. Maintaining disorder: the micropolitics of drugs policy in Iran

    PubMed Central

    2018-01-01

    Abstract This article analyses the ways in which the state ‘treats’ addiction among precarious drug (ab)users in Iran. While most Muslim-majority as well as some Western states have been reluctant to adopt harm reduction measures, the Islamic Republic of Iran has done so on a nationwide scale and through a sophisticated system of welfare intervention. Additionally, it has introduced devices of management of ‘addiction’ (the ‘camps’) that defy statist modes of punishment and private violence. What legal and ethical framework has this new situation engendered? And what does this new situation tell us about the governmentality of the state? Through a combination of historical analysis and ethnographic fieldwork, the article analyses the paradigm of government of the Iranian state with regard to disorder as embodied by the lives of poor drug (ab)users. PMID:29456274

  19. Tempo-spatial analysis of Fennoscandian intraplate seismicity

    NASA Astrophysics Data System (ADS)

    Roberts, Roland; Lund, Björn

    2017-04-01

    Coupled spatial-temporal patterns of the occurrence of earthquakes in Fennoscandia are analysed using non-parametric methods. The occurrence of larger events is unambiguously and very strongly temporally clustered, with major implications for the assessment of seismic hazard in areas such as Fennoscandia. In addition, there is a clear pattern of geographical migration of activity. Data from the Swedish National Seismic Network and a collated international catalogue are analysed. Results show consistent patterns on different spatial and temporal scales. We are currently investigating these patterns in order to assess the statistical significance of the tempo-spatial patterns, and to what extent they may be consistent with stress transfer mechanisms such as Coulomb stress transfer and pore fluid migration. Indications are that some further mechanism is necessary in order to explain the data, perhaps related to post-glacial uplift, which is up to 1 cm/year.

  20. Writing Quality in Chinese Children: Speed and Fluency Matter

    PubMed Central

    Yan, Cathy Ming Wai; McBride-Chang, Catherine; Wagner, Richard K.; Zhang, Juan; Wong, Anita M. Y.; Shu, Hua

    2015-01-01

    There were two goals of the present study. The first was to create a scoring scheme by which 9-year-old Chinese children’s writing compositions could be rated to form a total score for writing quality. The second was to examine cognitive correlates of writing quality at age 9 from measures administered at ages 6–9. Age 9 writing compositions were scored using a 7-element rubric; following confirmatory factor analyses, 5 of these elements were retained to represent overall writing quality for subsequent analyses. Measures of vocabulary knowledge, Chinese word dictation, phonological awareness, speed of processing, speeded naming, and handwriting fluency at ages 6–9 were all significantly associated with the obtained overall writing quality measure even when the statistical effect of age was removed. With vocabulary knowledge, dictation skill, age, gender, and phonological awareness included in a regression equation, 35% of the variance in age 9 writing quality was explained. With the variables of speed of processing, speeded naming, and handwriting fluency additionally included as a block, 12% additional variance in the equation was explained. In addition to gender, overall unique correlates of writing quality were dictation, speed of processing, and handwriting fluency, underscoring the importance of both general automaticity and specific writing fluency for writing quality development in children. PMID:25750486

  1. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
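The skewness-minimizing selection of a Box-Cox exponent described above can be sketched as a simple grid search. This illustrates the idea only; it is not the authors' ArcGIS Python script, and it assumes strictly positive input values (as slope gradients are).

```python
import math

def skewness(xs):
    """Moment-based skewness coefficient."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

def boxcox(xs, lam):
    """Box-Cox transform; lam = 0 is the log limit. Requires x > 0."""
    return [math.log(x) if lam == 0 else (x ** lam - 1) / lam for x in xs]

def best_boxcox_lambda(xs, lams=None):
    """Grid-search the exponent that minimizes |skewness|
    (illustrative; the paper automates an equivalent selection)."""
    if lams is None:
        lams = [i / 20 for i in range(-20, 21)]  # lambda in [-1, 1]
    return min(lams, key=lambda l: abs(skewness(boxcox(xs, l))))
```

For lognormal-like data the search lands on the log transform (lambda = 0), which is the behavior one would expect before handing the transformed variable to correlation, regression, or principal component analyses.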

  2. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

    The aim of the study was to check the quality of a computer-assisted sperm analysis (CASA) system in comparison to the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. P-values below 0.05 were considered statistically significant. Statistically significant differences between CASA and manual measurements were found for all of the investigated sperm parameters except non-progressive motility. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments of the same probe, either in the samples analyzed manually or with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for a wider application in clinical practice.

  3. Full in-vitro analyses of new-generation bulk fill dental composites cured by halogen light.

    PubMed

    Tekin, Tuçe Hazal; Kantürk Figen, Aysel; Yılmaz Atalı, Pınar; Coşkuner Filiz, Bilge; Pişkin, Mehmet Burçin

    2017-08-01

    The objective of this study was to investigate the full in-vitro analyses of new-generation bulk-fill dental composites cured by halogen light (HLG). Four composites of two types were studied: Surefill SDR (SDR) and Xtra Base (XB) as bulk-fill flowable materials; QuixFill (QF) and XtraFill (XF) as packable bulk-fill materials. Samples were prepared for each analysis and test by applying the same procedure, but with different diameters and thicknesses appropriate to the analysis and test requirements. Thermal properties were determined by thermogravimetric analysis (TG/DTG) and differential scanning calorimetry (DSC) analysis; the Vickers microhardness (VHN) was measured after 1, 7, 15 and 30 days of storage in water. The degree of conversion values for the materials (DC, %) were immediately measured using near-infrared spectroscopy (FT-IR). The surface morphology of the composites was investigated by scanning electron microscopy (SEM) and atomic-force microscopy (AFM) analyses. The sorption and solubility measurements were also performed after 1, 7, 15 and 30 days of storage in water. In addition, the data were statistically analyzed using one-way analysis of variance and both the Newman-Keuls and Tukey multiple comparison tests. The statistical significance level was established at p<0.05. According to the ISO 4049 standards, all the tested materials showed acceptable water sorption and solubility, and a halogen light source was an option to polymerize bulk-fill, resin-based dental composites. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  5. SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, A; Ahmad, M; Chen, Z

    2014-06-01

    Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It can not only be flexible with any linear or non-linear regression but can also provide information on the minimal number of sampling points, critical sampling distributions, and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique, leave-one-out (LOO) cross validation, is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) relative to the other cohorts, and a bootstrap fitting process follows to seek any possibilities of using perturbations for further improvement. After that, outliers are reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback yields the final fit. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of the fitting functions (polynomial and rational) was evaluated. Results: (1) LOOP demonstrated sensitive outlier recognition: leaving an outlier out correlated with exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other “robust fits”, e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function performs much better than the polynomial. Even with 5 data points including one outlier, LOOP with a rational function restored values to more than 95% of their reference values, while the polynomial fitting failed completely under the same conditions. 
Conclusion: LOOP can cooperate with any fitting routine, functioning as a “robust fit”. In addition, it can serve as a benchmark for film-dose calibration fitting performance.
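The leave-one-out core of such an outlier-recognition routine can be sketched in a few lines. This is an illustrative reconstruction with synthetic data and a simple polynomial model, not the authors' LOOP implementation (which adds perturbed bootstrap fitting and t-test confirmation): each point is left out in turn, the model is refit, and the point whose removal yields an exceptionally better goodness-of-fit is flagged as the outlier.

```python
import numpy as np

def loo_outlier_scores(x, y, deg=2):
    """For each point, refit without it and record the RMSE of the
    remaining points; an unusually low leave-one-out RMSE flags an
    outlier (its removal makes the fit exceptionally better)."""
    n = len(x)
    rmses = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        coef = np.polyfit(x[mask], y[mask], deg)
        resid = y[mask] - np.polyval(coef, x[mask])
        rmses[i] = np.sqrt(np.mean(resid ** 2))
    return rmses

# Synthetic calibration curve over 0-800 cGy with one corrupted reading.
x = np.linspace(0, 800, 9)
y = 0.001 * x + 1e-6 * x ** 2
y[4] += 0.5                       # inject an outlier
scores = loo_outlier_scores(x, y)
outlier = int(np.argmin(scores))  # leaving out the outlier gives the best fit
```

In a full LOOP-style routine this scan would be followed by a bootstrap refit and a statistical confirmation step before the point is discarded.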

  6. Statistical Analysis of Individual Participant Data Meta-Analyses: A Comparison of Methods and Recommendations for Practice

    PubMed Central

    Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.

    2012-01-01

    Background: Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis, whereby IPD within trials generate summary measures that are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches, which combine all individual participant data in a single meta-analysis, have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings: We included data from 24 randomised controlled trials evaluating antiplatelet agents for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate the overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using antiplatelets (relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of woman benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions: For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across-study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. 
Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232
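The "second stage" of a two-stage IPD meta-analysis — pooling per-trial effect estimates by inverse-variance weighting — can be sketched as follows. This is a minimal fixed-effect sketch with hypothetical 2x2 trial counts, not the paper's data or model (which also explored random effects and interactions):

```python
import math

def pooled_rr(trials):
    """Second stage of a two-stage meta-analysis: fixed-effect,
    inverse-variance pooling of per-trial log relative risks.

    Each trial is (events_treat, n_treat, events_ctrl, n_ctrl)."""
    num = den = 0.0
    for a, n1, c, n0 in trials:
        log_rr = math.log((a / n1) / (c / n0))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n0   # variance of log RR
        w = 1.0 / var                           # inverse-variance weight
        num += w * log_rr
        den += w
    return math.exp(num / den)

# Hypothetical per-trial counts (illustrative only).
trials = [(10, 100, 20, 100), (15, 120, 25, 115)]
rr = pooled_rr(trials)
```

A one-stage approach would instead fit a single (typically mixed-effects) model to all participant-level records at once, which is where the extra flexibility and the extra statistical-support cost both come from.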

  7. Investigation of serum biomarkers in primary gout patients using iTRAQ-based screening.

    PubMed

    Ying, Ying; Chen, Yong; Zhang, Shun; Huang, Haiyan; Zou, Rouxin; Li, Xiaoke; Chu, Zanbo; Huang, Xianqian; Peng, Yong; Gan, Minzhi; Geng, Baoqing; Zhu, Mengya; Ying, Yinyan; Huang, Zuoan

    2018-03-21

    Primary gout is a major disease that affects human health; however, its pathogenesis is not well known. The purpose of this study was to identify biomarkers to explore the underlying mechanisms of primary gout. We used the isobaric tags for relative and absolute quantitation (iTRAQ) technique combined with liquid chromatography-tandem mass spectrometry to screen differentially expressed proteins between gout patients and controls. We also identified proteins potentially involved in gout pathogenesis by analysing biological processes, cellular components, molecular functions, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and protein-protein interactions. We further verified some samples using enzyme-linked immunosorbent assay (ELISA). Statistical analyses were carried out using SPSS v. 20.0 and ROC (receiver operating characteristic) curve analyses were carried out using Medcalc software. Two-sided p-values <0.05 were deemed to be statistically significant for all analyses. We identified 95 differentially expressed proteins (50 up-regulated and 45 down-regulated), and selected nine proteins (α-enolase (ENOA), glyceraldehyde-3-phosphate dehydrogenase (G3P), complement component C9 (CO9), profilin-1 (PROF1), lipopolysaccharide-binding protein (LBP), tubulin beta-4A chain (TBB4A), phosphoglycerate kinase (PGK1), glucose-6-phosphate isomerase (G6PI), and transketolase (TKT)) for verification. This showed that the level of TBB4A was significantly higher in primary gout than in controls (p=0.023). iTRAQ technology was useful in the selection of differentially expressed proteins from proteomes, and provides a strong theoretical basis for the study of biomarkers and mechanisms in primary gout. In addition, TBB4A protein may be associated with primary gout.
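For a single candidate marker, the ROC curve analysis mentioned above reduces to the rank-based (Mann-Whitney) form of the AUC: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with synthetic scores (not the study's data):

```python
def roc_auc(cases, controls):
    """Rank-based AUC, equivalent to the Mann-Whitney U statistic
    divided by the number of case-control pairs; ties count as 1/2."""
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Synthetic marker levels for illustration only.
auc = roc_auc(cases=[3, 4, 5], controls=[1, 2, 3])  # 8.5 of 9 pairs
```

An AUC near 0.5 means the marker does not discriminate; tools such as Medcalc add confidence intervals and optimal-cutoff selection on top of this statistic.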

  8. Assessing the effects of habitat patches ensuring propagule supply and different costs inclusion in marine spatial planning through multivariate analyses.

    PubMed

    Appolloni, L; Sandulli, R; Vetrano, G; Russo, G F

    2018-05-15

    Marine Protected Areas are considered key tools for conservation of coastal ecosystems. However, many reserves are characterized by several problems, mainly related to inadequate zonings that often do not protect high-biodiversity and propagule-supply areas while precluding, at the same time, economically important zones for local interests. The Gulf of Naples is here employed as a study area to assess the effects of including different conservation features and costs in the reserve design process. In particular, eight scenarios are developed using graph theory to identify propagule source patches, with fishing and exploitation activities as costs-in-use for the local population. Scenarios elaborated by MARXAN, software commonly used for marine conservation planning, are compared using multivariate analyses (MDS, PERMANOVA and PERMDISP) in order to assess which input data have the greatest effects on protected-area selection. MARXAN is heuristic software able to give a number of different correct results, all of them near to the best solution. Its outputs show that the most important areas to be protected, in order to ensure long-term habitat life and adequate propagule supply, are mainly located around the Gulf islands. In addition, the statistical analyses allowed us to show that different choices of conservation features lead to statistically different scenarios. The presence of propagule-supply patches forces MARXAN to select almost the same areas to protect, decreasing the differences among MARXAN results and, thus, the choices for reserve-area selection. The multivariate analyses applied here to marine spatial planning proved very helpful, allowing us to identify (i) how different scenario input data affect MARXAN and (ii) what features have to be taken into account in study areas characterized by peculiar biological and economic interests.

  9. Crop identification technology assessment for remote sensing (CITARS). Volume 6: Data processing at the laboratory for applications of remote sensing

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.

    1975-01-01

    The results of classifications and experiments for the crop identification technology assessment for remote sensing are summarized. Using two analysis procedures, 15 data sets were classified. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. Additionally, 20 data sets were classified using training statistics from another segment or date. The classification and proportion estimation results of the local and nonlocal classifications are reported. Data also describe several other experiments performed to provide additional understanding of the results of the crop identification technology assessment for remote sensing. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, spectral discriminability of corn, soybeans, and "other", and analyses of aircraft multispectral data.

  10. Polymorphisms in the FGF2 gene and risk of serous ovarian cancer: results from the Ovarian Cancer Association Consortium

    PubMed Central

    Johnatty, Sharon E.; Beesley, Jonathan; Chen, Xiaoqing; Spurdle, Amanda B.; deFazio, Anna; Webb, Penelope M; Goode, Ellen L.; Rider, David N.; Vierkant, Robert A.; Anderson, Stephanie; Wu, Anna H.; Pike, Malcolm; Van Den Berg, David; Moysich, Kirsten; Ness, Roberta; Doherty, Jennifer; Rossing, Mary-Anne; Pearce, Celeste Leigh; Chenevix-Trench, Georgia

    2009-01-01

    Fibroblast growth factor (FGF)-2 (basic) is a potent angiogenic molecule involved in tumour progression, and is one of several growth factors with a central role in ovarian carcinogenesis. We hypothesised that common single nucleotide polymorphisms (SNPs) in the FGF2 gene may alter angiogenic potential and thereby susceptibility to ovarian cancer. We analysed 25 FGF2 tagSNPs using five independent study populations from the United States and Australia. Analysis was restricted to non-Hispanic White women with serous ovarian carcinoma (1269 cases and 2829 controls). There were no statistically significant associations between any FGF2 SNPs and ovarian cancer risk. There were two nominally statistically significant associations between heterozygosity for two FGF2 SNPs (rs308379 and rs308447; p<0.05) and serous ovarian cancer risk in the combined dataset, but rare homozygous estimates did not achieve statistical significance, nor were they consistent with the log additive model of inheritance. Overall, genetic variation in FGF2 does not appear to play a role in susceptibility to ovarian cancer. PMID:19456219

  11. Occupational dose reduction in cardiac catheterisation laboratory: a randomised trial using a shield drape placed on the patient.

    PubMed

    Ordiales, J M; Nogales, J M; Vano, E; López-Mínguez, J R; Alvarez, F J; Ramos, J; Martínez, G; Sánchez, R M

    2017-04-25

    The aim of this study was to evaluate the occupational radiation dose in interventional cardiology by using a shielding drape on the patient. A random study with and without the protective material was conducted. The following control parameters were registered: demographic data, number of stents, contrast media volume, fluoroscopy time, number of cine images, kerma-area product and cumulative air kerma. Occupational dose data were obtained by electronic active dosemeters. No statistically significant differences in the analysed control parameters were registered. The median dose value received by the interventional cardiologist was 50% lower in the group with a shielding drape with a statistically significant p-value <0.001. In addition, the median value of the maximum scatter radiation dose was 31% lower in this group with a statistically significant p-value <0.001. This study showed that a shielding drape is a useful tool for reducing the occupational radiation dose in a cardiac catheterisation laboratory.

  12. A statistical and experimental approach for assessing the preservation of plant lipids in soil

    NASA Astrophysics Data System (ADS)

    Mueller, K. E.; Eissenstat, D. M.; Oleksyn, J.; Freeman, K. H.

    2011-12-01

    Plant-derived lipids contribute to stable soil organic matter, but further interpretations of their abundance in soils are limited because the factors that control lipid preservation are poorly understood. Using data from a long-term field experiment and simple statistical models, we provide novel constraints on several predictors of the concentration of hydrolyzable lipids in forest mineral soils. Focal lipids included common monomers of cutin, suberin, and plant waxes present in tree leaves and roots. Soil lipid concentrations were most strongly influenced by the concentrations of lipids in leaves and roots of the overlying trees, but were also affected by the type of lipid (e.g. alcohols vs. acids), lipid chain length, and whether lipids originated in leaves or roots. Collectively, these factors explained ~80% of the variation in soil lipid concentrations beneath 11 different tree species. In order to use soil lipid analyses to test and improve conceptual models of soil organic matter stabilization, additional studies that provide experimental and quantitative (i.e. statistical) constraints on plant lipid preservation are needed.

  13. A preliminary study of the statistical analyses and sampling strategies associated with the integration of remote sensing capabilities into the current agricultural crop forecasting system

    NASA Technical Reports Server (NTRS)

    Sand, F.; Christie, R.

    1975-01-01

    Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.

  14. If You Build (and Moderate) It, They Will Come: The Smokefree Women Facebook Page

    PubMed Central

    2013-01-01

    This analysis explores the impact of modifying the Smokefree Women Facebook social media strategy, from primarily promoting resources to encouraging participation in communications about smoking cessation by posting user-generated content. Analyses were performed using data from the Smokefree Women Facebook page to assess the impact of the revised strategy on reach and engagement. Fan engagement increased 430%, and a strong and statistically significant correlation (P < .05) between the frequency of moderator posts and community engagement was observed. The reach of the page also increased by 420%. Our findings indicate that the strategy shift had a statistically significant and positive effect on the frequency of interactions on the Facebook page, providing an example of an approach that may prove useful for reaching and engaging users in online communities. Additional research is needed to assess the association between engagement in virtual communities and health behavior outcomes. PMID:24395993

  15. Earth system feedback statistically extracted from the Indian Ocean deep-sea sediments recording Eocene hyperthermals.

    PubMed

    Yasukawa, Kazutaka; Nakamura, Kentaro; Fujinaga, Koichiro; Ikehara, Minoru; Kato, Yasuhiro

    2017-09-12

    Multiple transient global warming events occurred during the early Palaeogene. Although these events, called hyperthermals, have been reported from around the globe, geologic records for the Indian Ocean are limited. In addition, the recovery processes from relatively modest hyperthermals are less constrained than those from the severest and well-studied hothouse called the Palaeocene-Eocene Thermal Maximum. In this study, we constructed a new and high-resolution geochemical dataset of deep-sea sediments clearly recording multiple Eocene hyperthermals in the Indian Ocean. We then statistically analysed the high-dimensional data matrix and extracted independent components corresponding to the biogeochemical responses to the hyperthermals. The productivity feedback commonly controls and efficiently sequesters the excess carbon in the recovery phases of the hyperthermals via an enhanced biological pump, regardless of the magnitude of the events. Meanwhile, this negative feedback is independent of nannoplankton assemblage changes generally recognised in relatively large environmental perturbations.

  16. Changes in fall-related mortality in older adults in Quebec, 1981-2009.

    PubMed

    Gagné, M; Robitaille, Y; Jean, S; Perron, P-A

    2013-09-01

    Our purpose was to evaluate changes in fall-related mortality in adults aged 65 years and over in Quebec and to propose a case definition based on all the causes entered on Return of Death forms. The analysis covers deaths between 1981 and 2009 recorded in the Quebec vital statistics data. While the number of fall-related deaths increased between 1981 and 2009, the adjusted fall-related mortality rate remained relatively stable. Since the early 2000s, this stability has masked opposing trends. The mortality rate associated with certified falls (W00-W19) has increased while the rate for presumed falls (exposure to an unspecified factor causing a fracture) has decreased. For fall surveillance, analyses using indicators from the vital statistics data should include both certified falls and presumed falls. In addition, a possible shift in the coding of fall-related deaths toward secondary causes should be taken into account.

  17. If you build (and moderate) it, they will come: the Smokefree Women Facebook page.

    PubMed

    Post, Samantha D; Taylor, Shani C; Sanders, Amy E; Goldfarb, Jeffrey M; Hunt, Yvonne M; Augustson, Erik M

    2013-12-01

    This analysis explores the impact of modifying the Smokefree Women Facebook social media strategy, from primarily promoting resources to encouraging participation in communications about smoking cessation by posting user-generated content. Analyses were performed using data from the Smokefree Women Facebook page to assess the impact of the revised strategy on reach and engagement. Fan engagement increased 430%, and a strong and statistically significant correlation (P < .05) between the frequency of moderator posts and community engagement was observed. The reach of the page also increased by 420%. Our findings indicate that the strategy shift had a statistically significant and positive effect on the frequency of interactions on the Facebook page, providing an example of an approach that may prove useful for reaching and engaging users in online communities. Additional research is needed to assess the association between engagement in virtual communities and health behavior outcomes.

  18. Low-dose ionizing radiation increases the mortality risk of solid cancers in nuclear industry workers: A meta-analysis

    PubMed Central

    Qu, Shu-Gen; Gao, Jin; Tang, Bo; Yu, Bo; Shen, Yue-Ping; Tu, Yu

    2018-01-01

    Low-dose ionizing radiation (LDIR) may increase the mortality of solid cancers in nuclear industry workers, but only few individual cohort studies exist, and the available reports have low statistical power. The aim of the present study was to focus on solid cancer mortality risk from LDIR in the nuclear industry using standard mortality ratios (SMRs) and 95% confidence intervals. A systematic literature search through the PubMed and Embase databases identified 27 studies relevant to this meta-analysis. There was statistical significance for total, solid and lung cancers, with meta-SMR values of 0.88, 0.80, and 0.89, respectively. There was evidence of stochastic effects by IR, but more definitive conclusions require additional analyses using standardized protocols to determine whether LDIR increases the risk of solid cancer-related mortality. PMID:29725540
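Pooling standardized mortality ratios across cohorts, as in a meta-SMR analysis, is typically done on the log scale with inverse-variance weights. A minimal fixed-effect sketch; the 1/observed variance approximation for log SMR and the example counts are assumptions for illustration, not the paper's protocol:

```python
import math

def pooled_smr(studies):
    """Fixed-effect pooling of standardized mortality ratios on the
    log scale. Each study is (observed_deaths, expected_deaths);
    var(log SMR) is approximated by 1/observed, so the weight is
    simply the observed count."""
    num = den = 0.0
    for observed, expected in studies:
        log_smr = math.log(observed / expected)
        w = observed              # inverse of the 1/observed variance
        num += w * log_smr
        den += w
    return math.exp(num / den)

# Two hypothetical cohorts with SMRs of 0.80 and 0.90.
smr = pooled_smr([(80, 100), (90, 100)])
```

A pooled SMR below 1 (as the meta-SMRs of 0.88, 0.80, and 0.89 above) indicates fewer deaths than expected from reference rates, which is why the authors call for standardized protocols before attributing excess risk to LDIR.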

  19. The Influence of Cognitive Reserve on Recovery from Traumatic Brain Injury.

    PubMed

    Donders, Jacobus; Stout, Jacob

    2018-04-12

    We sought to determine the degree to which cognitive reserve, as assessed by the Test of Premorbid Functioning in combination with demographic variables, could act as a buffer against the effect of traumatic brain injury (TBI) on cognitive test performance. We performed a retrospective analysis of a cohort of 121 persons with TBI who completed the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) within 1-12 months after injury. Regression analyses indicated that cognitive reserve was a statistically significant predictor of all postinjury WAIS-IV factor index scores, after controlling for various premorbid and comorbid confounding variables. Only for Processing Speed did injury severity make an additional statistically significant contribution to the prediction model. Cognitive reserve has a protective effect with regard to the impact of TBI on cognitive test performance, but this effect is imperfect and does not completely negate the effect of injury severity.

  20. regioneR: an R/Bioconductor package for the association analysis of genomic regions based on permutation tests.

    PubMed

    Gel, Bernat; Díez-Villanueva, Anna; Serra, Eduard; Buschbeck, Marcus; Peinado, Miguel A; Malinverni, Roberto

    2016-01-15

    Statistically assessing the relation between a set of genomic regions and other genomic features is a common challenging task in genomic and epigenomic analyses. Randomization-based approaches implicitly take into account the complexity of the genome without the need of assuming an underlying statistical model. regioneR is an R package that implements a permutation test framework specifically designed to work with genomic regions. In addition to the predefined randomization and evaluation strategies, regioneR is fully customizable, allowing the use of custom strategies to adapt it to specific questions. Finally, it also implements a novel function to evaluate the local specificity of the detected association. regioneR is an R package released under the Artistic-2.0 License. The source code and documents are freely available through Bioconductor (http://www.bioconductor.org/packages/regioneR). rmalinverni@carrerasresearch.org.

  1. ENSO related variability in the Southern Hemisphere, 1948-2000

    NASA Astrophysics Data System (ADS)

    Ribera, Pedro; Mann, Michael E.

    2003-01-01

    The spatiotemporal evolution of Southern Hemisphere climate variability is diagnosed based on the use of the NCEP reanalysis (1948-2000) dataset. Using the MTM-SVD analysis method, significant narrowband variability is isolated from the multivariate dataset. It is found that the ENSO signal exhibits statistically significant behavior at quasiquadrennial (3-6 yr) timescales for the full time period. A significant quasibiennial (2-3 yr) timescale emerges only for the latter half of the period. Analyses of the spatial evolution of the two reconstructed signals shed additional light on linkages between low- and high-latitude Southern Hemisphere climate anomalies.

  2. KERNELHR: A program for estimating animal home ranges

    USGS Publications Warehouse

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
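The fixed-kernel density estimate at the heart of kernel home-range methods can be sketched in a few lines. This is an illustrative 2-D Gaussian-kernel version with made-up relocation points; KERNELHR itself is a compiled program that additionally supports adaptive kernels, bandwidth selection, and contouring:

```python
import math

def kde2d(points, x, y, h):
    """Fixed-kernel (Gaussian) utilization-distribution estimate at
    (x, y) from animal relocation points, with bandwidth h."""
    total = 0.0
    for px, py in points:
        d2 = ((x - px) ** 2 + (y - py) ** 2) / h ** 2
        total += math.exp(-0.5 * d2)
    # Normalize so the density integrates to 1 over the plane.
    return total / (len(points) * 2 * math.pi * h ** 2)

# Hypothetical relocations clustered around the unit square.
points = [(0, 0), (1, 0), (0, 1), (1, 1)]
d_center = kde2d(points, 0.5, 0.5, h=1.0)  # inside the cluster
d_far = kde2d(points, 10.0, 10.0, h=1.0)   # far from all fixes
```

Home-range area then follows by evaluating the density on a grid and finding the contour enclosing, say, 95% of the utilization distribution.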

  3. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  4. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  5. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspects of the statistical analyses used in Dommenget and Latif, which raise concerns about some of the reported climate modes, are demonstrated. Adopting simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown, and the limitations of varimax and regression analyses in capturing the climate mode are then discussed.

  6. Regional regression equations for the estimation of selected monthly low-flow duration and frequency statistics at ungaged sites on streams in New Jersey

    USGS Publications Warehouse

    Watson, Kara M.; McHugh, Amy R.

    2014-01-01

    Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. 
The current period is representative of the increased development of the last 20 years (1989–2008). The two different land- and water-use conditions were used as surrogates for development to determine whether there have been changes in low-flow statistics as a result of changes in development over time. The State was divided into two low-flow regression regions, the Coastal Plain and the non-coastal region, in order to improve the accuracy of the regression equations. The left-censored parametric survival regression method was used for the analyses to account for streamgages and partial-record stations that had zero flow values for some of the statistics. The average standard error of estimate for the 348 regression equations ranged from 16 to 340 percent. These regression equations and basin characteristics are presented in the U.S. Geological Survey (USGS) StreamStats Web-based geographic information system application. This tool allows users to click on an ungaged site on a stream in New Jersey and get the estimated flow-duration and low-flow frequency statistics. Additionally, the user can click on a streamgage or partial-record station and get the “at-site” streamflow statistics. The low-flow characteristics of a stream ultimately affect the use of the stream by humans. Specific information on the low-flow characteristics of streams is essential to water managers who deal with problems related to municipal and industrial water supply, fish and wildlife conservation, and dilution of wastewater.

  7. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.

    PubMed

    Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly

    2015-10-06

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  8. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.

    2015-12-01

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  9. Immunochip Analyses of Epistasis in Rheumatoid Arthritis Confirm Multiple Interactions within MHC and Suggest Novel Non-MHC Epistatic Signals.

    PubMed

    Wei, Wen-Hua; Loh, Chia-Yin; Worthington, Jane; Eyre, Stephen

    2016-05-01

    Studying statistical gene-gene interactions (epistasis) has been limited by the statistical and computational difficulty of performing such analyses in sample sizes large enough to provide sufficient power. Three large Immunochip datasets from cohort samples recruited in the United Kingdom, United States, and Sweden with European ancestry were used to examine epistasis in rheumatoid arthritis (RA). A full pairwise search was conducted in the UK cohort using a high-throughput tool and the resultant significant epistatic signals were tested for replication in the US and Swedish cohorts. A forward selection approach was applied to remove redundant signals, while conditioning on the preidentified additive effects. We detected abundant genome-wide significant (p < 1.0e-13) epistatic signals, all within the MHC region. These signals were reduced substantially, but a proportion remained significant (p < 1.0e-03) in conditional tests. We identified 11 independent epistatic interactions across the entire MHC, each explaining on average 0.12% of the phenotypic variance, nearly all replicated in both replication cohorts. We also identified non-MHC epistatic interactions between RA susceptibility loci LOC100506023 and IRF5 with Immunochip-wide significance (p < 1.1e-08) and between two neighboring single-nucleotide polymorphisms near PTPN22 that were in low linkage disequilibrium, with an independent interaction (p < 1.0e-05). Both non-MHC epistatic interactions were statistically replicated, with a similar interaction pattern, in the US cohort only. There are multiple but relatively weak interactions independent of the additive effects in RA, and a larger sample size is required to confidently assign additional non-MHC epistasis.

  10. graph-GPA: A graphical model for prioritizing GWAS results and investigating pleiotropic architecture.

    PubMed

    Chung, Dongjun; Kim, Hang J; Zhao, Hongyu

    2017-02-01

    Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, which have provided clinical and medical benefits to patients through novel biomarkers and therapeutic targets. However, identification of risk variants associated with complex diseases remains challenging as they are often affected by many genetic variants with small or moderate effects. There has been accumulating evidence suggesting that different complex traits share a common risk basis, namely pleiotropy. Recently, several statistical methods have been developed to improve statistical power to identify risk variants for complex traits through a joint analysis of multiple GWAS datasets by leveraging pleiotropy. While these methods were shown to improve statistical power for association mapping compared to separate analyses, they are still limited in the number of phenotypes that can be integrated. In order to address this challenge, in this paper, we propose a novel statistical framework, graph-GPA, to integrate a large number of GWAS datasets for multiple phenotypes using a hidden Markov random field approach. Application of graph-GPA to a joint analysis of GWAS datasets for 12 phenotypes shows that graph-GPA improves statistical power to identify risk variants compared to statistical methods based on a smaller number of GWAS datasets. In addition, graph-GPA also promotes better understanding of genetic mechanisms shared among phenotypes, which can potentially be useful for the development of improved diagnosis and therapeutics. The R implementation of graph-GPA is currently available at https://dongjunchung.github.io/GGPA/.

  11. Meta-analysis inside and outside particle physics: two traditions that should converge?

    PubMed

    Baker, Rose D; Jackson, Dan

    2013-06-01

    The use of meta-analysis in medicine and epidemiology really took off in the 1970s. However, in high-energy physics, the Particle Data Group has been carrying out meta-analyses of measurements of particle masses and other properties since 1957. Curiously, there has been virtually no interaction between those working inside and outside particle physics. In this paper, we use statistical models to study two major differences in practice. The first is the usefulness of systematic errors, which physicists are now beginning to quote in addition to statistical errors. The second is whether it is better to treat heterogeneity by scaling up errors as do the Particle Data Group or by adding a random effect as does the rest of the community. Besides fitting models, we derive and use an exact test of the error-scaling hypothesis. We also discuss the other methodological differences between the two streams of meta-analysis. Our conclusion is that systematic errors are not currently very useful and that the conventional random effects model, as routinely used in meta-analysis, has a useful role to play in particle physics. The moral we draw for statisticians is that we should be more willing to explore 'grassroots' areas of statistical application, so that good statistical practice can flow both from and back to the statistical mainstream. Copyright © 2012 John Wiley & Sons, Ltd.
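    The two heterogeneity treatments contrasted above can be sketched in a few lines of Python; the effect estimates and variances below are invented toy numbers, not Particle Data Group measurements, and the scaling rule is a simplified reading of the PDG practice:

```python
import math

def fixed_effect(estimates, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

def cochran_q(estimates, variances):
    """Cochran's Q: weighted squared deviations from the fixed-effect pool."""
    pooled, _ = fixed_effect(estimates, variances)
    return sum((e - pooled) ** 2 / v for e, v in zip(estimates, variances))

def scale_errors(estimates, variances):
    """PDG-style treatment: inflate all errors by sqrt(Q / (k - 1)) when Q > k - 1."""
    k = len(estimates)
    scale = max(1.0, math.sqrt(cochran_q(estimates, variances) / (k - 1)))
    pooled, var = fixed_effect(estimates, variances)
    return pooled, var * scale ** 2

def dersimonian_laird(estimates, variances):
    """Random-effects treatment: add a between-study variance tau^2 to each study."""
    k = len(estimates)
    q = cochran_q(estimates, variances)
    weights = [1.0 / v for v in variances]
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - (k - 1)) / c)
    return fixed_effect(estimates, [v + tau2 for v in variances])
```

    With equal study variances the two treatments happen to give the same pooled variance; once study precisions differ they typically diverge, which is part of what makes the comparison in the paper non-trivial.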

  12. Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology⋆

    PubMed Central

    Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao

    2009-01-01

    Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation). PMID:20233650

  13. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified that geographical area and involvement of a statistician were predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A software platform for statistical evaluation of patient respiratory patterns in radiation therapy.

    PubMed

    Dunn, Leon; Kenny, John

    2017-10-01

    The aim of this work was to design and evaluate a software tool for analysis of a patient's respiration, with the goal of optimizing the effectiveness of motion management techniques during radiotherapy imaging and treatment. A software tool that analyses patient respiratory data files (.vxp files) created by the Varian Real-Time Position Management (RPM) system was developed. The software, called RespAnalysis, was created in MATLAB and provides four modules, one each for determining respiration characteristics, providing breathing coaching (biofeedback training), comparing pre- and post-training characteristics and performing a fraction-by-fraction assessment. The modules analyse respiratory traces to determine signal characteristics and specifically use a Sample Entropy algorithm as the key means to quantify breathing irregularity. Simulated respiratory signals, as well as 91 patient RPM traces, were analysed with RespAnalysis to test the viability of using Sample Entropy for predicting breathing regularity. Retrospective assessment of patient data demonstrated that the Sample Entropy metric was a predictor of periodic irregularity in respiration data; however, it was found to be insensitive to amplitude variation. Additional waveform statistics assessing the distribution of signal amplitudes over time, coupled with the Sample Entropy method, were found to be useful in assessing breathing regularity. The RespAnalysis software tool presented in this work uses the Sample Entropy method to analyse patient respiratory data recorded for motion management purposes in radiation therapy. This is applicable during treatment simulation and during subsequent treatment fractions, providing a way to quantify breathing irregularity, as well as to assess the need for breathing coaching. It was demonstrated that the Sample Entropy metric was correlated with the irregularity of the patient's respiratory motion in terms of periodicity, whilst other metrics, such as percentage deviation of inhale/exhale peak positions, provided insight into respiratory amplitude regularity. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
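    For readers curious what a Sample Entropy computation looks like, here is a minimal pure-Python sketch. It is a simplified variant, not the RespAnalysis/MATLAB implementation; the template-counting convention, the parameters m and r, and the test signals are all illustrative:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A counts pairs of length m+1.
    Self-matches are excluded; lower values indicate a more regular signal."""
    n = len(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # no matches: maximally irregular at this tolerance
    return -math.log(a / b)
```

    A strictly periodic trace scores near zero, while an irregular (here, chaotic) trace scores higher, which is the sense in which the metric flags periodic irregularity but not amplitude drift.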

  15. Performing statistical analyses on quantitative data in Taverna workflows: an example using R and maxdBrowse to identify differentially-expressed genes from microarray data.

    PubMed

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-08-07

    There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data by combining the use of scripts written in the R programming language with tools exposed as services in workflows. When these workflows are shared with colleagues and the wider scientific community, they provide an approach for other scientists wanting to use tools such as R without having to learn the corresponding programming language to analyse their own data.

  16. Performing statistical analyses on quantitative data in Taverna workflows: An example using R and maxdBrowse to identify differentially-expressed genes from microarray data

    PubMed Central

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-01-01

    Background There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Results Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Conclusion Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data by combining the use of scripts written in the R programming language with tools exposed as services in workflows. When these workflows are shared with colleagues and the wider scientific community, they provide an approach for other scientists wanting to use tools such as R without having to learn the corresponding programming language to analyse their own data. PMID:18687127

  17. Effect of exercise on depression in university students: a meta-analysis of randomized controlled trials.

    PubMed

    Yan, Shi; Jin, YinZhe; Oh, YongSeok; Choi, YoungJun

    2016-06-01

    The aim of this study was to assess the effect of exercise on depression in university students. A systematic literature search was conducted in PubMed, EMBASE and the Cochrane Library from their inception through December 10, 2014 to identify relevant articles. The heterogeneity across studies was examined by Cochran's Q statistic and the I2 statistic. Standardized mean difference (SMD) and 95% confidence interval (CI) were pooled to evaluate the effect of exercise on depression. Then, sensitivity and subgroup analyses were performed. In addition, publication bias was assessed by drawing a funnel plot. A total of 352 participants (154 cases and 182 controls) from eight included trials were analysed. Our pooled result showed a significant alleviation of depression after exercise (SMD=-0.50, 95% CI: -0.97 to -0.03, P=0.04) with significant heterogeneity (P=0.003, I2=67%). Sensitivity analyses showed that the pooled result may be unstable. Subgroup analysis indicated that sample size may be a source of heterogeneity. Moreover, no publication bias was observed in this study. Exercise may be an effective therapy for treating depression in university students. However, further clinical studies with strict design and large samples focused on this specific population are warranted.
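    Cochran's Q and the I2 statistic named above reduce to a short calculation; the effect estimates and variances below are invented for illustration, not the trial data from this meta-analysis:

```python
def q_and_i2(effects, variances):
    """Cochran's Q statistic and Higgins' I^2 heterogeneity percentage
    for a set of study effect estimates with known within-study variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

    An I2 around 67%, as reported above, indicates that roughly two-thirds of the observed variability reflects between-study heterogeneity rather than sampling error.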

  18. A critical evaluation of ecological indices for the comparative analysis of microbial communities based on molecular datasets.

    PubMed

    Lucas, Rico; Groeneveld, Jürgen; Harms, Hauke; Johst, Karin; Frank, Karin; Kleinsteuber, Sabine

    2017-01-01

    In times of global change and intensified resource exploitation, advanced knowledge of ecophysiological processes in natural and engineered systems driven by complex microbial communities is crucial for both safeguarding environmental processes and optimising rational control of biotechnological processes. To gain such knowledge, high-throughput molecular techniques are routinely employed to investigate microbial community composition and dynamics within a wide range of natural or engineered environments. However, for molecular dataset analyses no consensus about a generally applicable alpha diversity concept and no appropriate benchmarking of corresponding statistical indices exist yet. To overcome this, we listed criteria for the appropriateness of an index for such analyses and systematically scrutinised commonly employed ecological indices describing diversity, evenness and richness based on artificial and real molecular datasets. We identified appropriate indices warranting interstudy comparability and intuitive interpretability. The unified diversity concept based on 'effective numbers of types' provides the mathematical framework for describing community composition. Additionally, the Bray-Curtis dissimilarity as a beta-diversity index was found to reflect compositional changes. The employed statistical procedure is presented comprising commented R-scripts and example datasets for user-friendly trial application. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
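    The 'effective numbers of types' (Hill numbers) and the Bray-Curtis dissimilarity discussed above can be sketched as follows; the abundance vectors are toy data, not the article's molecular datasets:

```python
import math

def hill_number(counts, q=1):
    """Effective number of types of order q (the unified diversity concept).
    q=0 gives richness, q=1 gives exp(Shannon entropy), q=2 the inverse
    Simpson index; larger q discounts rare types more heavily."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    if q == 1:  # limit case: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors
    (0 = identical composition, 1 = no shared abundance)."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1.0 - 2.0 * shared / (sum(a) + sum(b))
```

    For a perfectly even community every order q returns the same effective number, while a skewed community scores far fewer effective types at q=2 than its raw richness, which is why reporting the order used matters for interstudy comparability.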

  19. Experimental design of an interlaboratory study for trace metal analysis of liquid fluids. [for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. A.

    1983-01-01

    The accurate determination of trace metals in fuels is an important requirement in much of the research into and development of alternative fuels for aerospace applications. Because certain metals have detrimental effects on fuel performance and fuel systems at part-per-million, and in some cases part-per-billion, levels, improved accuracy is required in determining these low-concentration elements. Accurate analyses are also required to ensure interchangeability of analysis results between vendor, researcher, and end user for purposes of quality control. Previous interlaboratory studies have demonstrated the inability of different laboratories to agree on the results of metal analysis, particularly at low concentration levels, yet good precision is typically reported within a laboratory. An interlaboratory study was designed to gain statistical information about the sources of variation in the reported concentrations. Five participant laboratories were used on a fee basis and were not informed of the purpose of the analyses. The effects of laboratory, analytical technique, concentration level, and ashing additive were studied in four fuel types for 20 elements of interest. The prescribed sample preparation schemes (variations of dry ashing) were used by all of the laboratories. The analytical data were statistically evaluated using a computer program for the analysis of variance technique.

  20. MOLSIM: A modular molecular simulation software

    PubMed Central

    Reščič, Jurij

    2015-01-01

    The modular software MOLSIM for all‐atom molecular and coarse‐grained simulations is presented with focus on the underlying concepts used. The software possesses four unique features: (1) it is an integrated software for molecular dynamic, Monte Carlo, and Brownian dynamics simulations; (2) simulated objects are constructed in a hierarchical fashion representing atoms, rigid molecules and colloids, flexible chains, hierarchical polymers, and cross‐linked networks; (3) long‐range interactions involving charges, dipoles and/or anisotropic dipole polarizabilities are handled either with the standard Ewald sum, the smooth particle mesh Ewald sum, or the reaction‐field technique; (4) statistical uncertainties are provided for all calculated observables. In addition, MOLSIM supports various statistical ensembles, and several types of simulation cells and boundary conditions are available. Intermolecular interactions comprise tabulated pairwise potentials for speed and uniformity and many‐body interactions involve anisotropic polarizabilities. Intramolecular interactions include bond, angle, and crosslink potentials. A very large set of analyses of static and dynamic properties is provided. The capability of MOLSIM can be extended by user‐providing routines controlling, for example, start conditions, intermolecular potentials, and analyses. An extensive set of case studies in the field of soft matter is presented covering colloids, polymers, and crosslinked networks. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25994597

  1. Statistical assessment of changes in extreme maximum temperatures over Saudi Arabia, 1985-2014

    NASA Astrophysics Data System (ADS)

    Raggad, Bechir

    2018-05-01

    In this study, two statistical approaches were adopted in the analysis of observed maximum temperature data collected from fifteen stations over Saudi Arabia during the period 1985-2014. In the first step, the behavior of extreme temperatures was analyzed and their changes were quantified with respect to the Expert Team on Climate Change Detection and Indices (ETCCDI) indices. The results showed a general warming trend over most stations, in maximum temperature-related indices, during the period of analysis. In the second step, stationary and non-stationary extreme-value analyses were conducted for the temperature data. The results revealed that the non-stationary model with an increasing linear trend in its location parameter outperforms the other models for two-thirds of the stations. Additionally, the 10-, 50-, and 100-year return levels were found to change considerably with time, so that for most stations a given maximum temperature comes to correspond to a different T-year return period. This analysis shows the importance of taking into account the change over time in the estimation of return levels and therefore justifies the use of the non-stationary generalized extreme value distribution model to describe most of the data. Furthermore, these findings are in line with the significant warming trends found in the climate indices analyses.
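    The return-level calculation underlying these statements can be sketched as below; the GEV parameters and the linear location trend are hypothetical values, not fitted results for the Saudi stations:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level for a GEV(mu, sigma, xi) distribution:
    the level exceeded on average once every T years."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:  # Gumbel limit as the shape parameter vanishes
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

def nonstationary_return_level(mu0, mu1, sigma, xi, T, t):
    """Return level when the location parameter drifts linearly in time:
    mu(t) = mu0 + mu1 * t, as in the preferred model above."""
    return gev_return_level(mu0 + mu1 * t, sigma, xi, T)
```

    Under a positive location trend the same temperature threshold is reached at ever-shorter return periods as t grows, which is the practical sense in which stationary return-level estimates become misleading.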

  2. Parental intention to support video game play by children with autism spectrum disorder: an application of the theory of planned behavior.

    PubMed

    Finke, Erinn H; Hickerson, Benjamin; McLaughlin, Eileen

    2015-04-01

    The purpose of this study was to determine parental attitudes regarding engagement with video games by their children with autism spectrum disorder (ASD) and whether attitudes vary based on ASD symptom severity. Online survey methodology was used to gather information from parents of children with ASD between the ages of 8 and 12 years. The finalized data set included 152 cases. Descriptive statistics and frequency analyses were used to examine participant demographics and video game play. Descriptive and inferential statistics were used to evaluate questions on the theory of planned behavior. Regression analyses determined the predictive ability of the theory of planned behavior constructs, and t tests provided additional descriptive information about between-group differences. Children with ASD play video games. There are no significant differences in the time, intensity, or types of games played based on severity of ASD symptoms (mild vs. moderate). Parents of children with ASD had positive attitudes about video game play. Parents of children with ASD appear to support video game play. On average, parents indicated video game play was positive for their children with ASD, particularly if they believed the games were having a positive impact on their child's development.

  3. Field Synopsis and Re-analysis of Systematic Meta-analyses of Genetic Association Studies in Multiple Sclerosis: a Bayesian Approach.

    PubMed

    Park, Jae Hyon; Kim, Joo Hi; Jo, Kye Eun; Na, Se Whan; Eisenhut, Michael; Kronbichler, Andreas; Lee, Keum Hwa; Shin, Jae Il

    2018-07-01

    To provide an up-to-date summary of multiple sclerosis susceptibility gene variants and to assess their noteworthiness in hopes of finding true associations, we investigated the results of 44 meta-analyses on gene variants and multiple sclerosis published through December 2016. Out of 70 statistically significant genotype associations, roughly a fifth (21%) of the comparisons showed a noteworthy false-positive rate probability (FPRP) at a statistical power to detect an OR of 1.5 and at a prior probability of 10^-6 assumed for a random single-nucleotide polymorphism. These associations (IRF8/rs17445836, STAT3/rs744166, HLA/rs4959093, HLA/rs2647046, HLA/rs7382297, HLA/rs17421624, HLA/rs2517646, HLA/rs9261491, HLA/rs2857439, HLA/rs16896944, HLA/rs3132671, HLA/rs2857435, HLA/rs9261471, HLA/rs2523393, HLA-DRB1/rs3135388, RGS1/rs2760524, PTGER4/rs9292777) also showed a noteworthy Bayesian false discovery probability (BFDP), and one additional association (CD24 rs8734/rs52812045) was also noteworthy via BFDP computation. Herein, we have identified several noteworthy biomarkers of multiple sclerosis susceptibility. We hope these data are used to study multiple sclerosis genetics and inform future screening programs.
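    The FPRP referred to above (in the sense of Wacholder et al.) is a one-line Bayes computation; the p-values, power, and prior below are illustrative inputs, not values taken from the 44 meta-analyses:

```python
def fprp(p_value, power, prior):
    """False-positive report probability: the probability that a reported
    association is a false positive, given the observed p-value, the power
    to detect the assumed effect size, and the prior probability that the
    tested variant is truly associated."""
    numerator = p_value * (1.0 - prior)
    return numerator / (numerator + power * prior)
```

    At a stringent prior such as 10^-6, only very small p-values keep FPRP below a noteworthiness threshold like 0.2, which is why most nominally significant associations fail the criterion.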

  4. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    PubMed

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage for both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, using opencast coal mine occupational accident records for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used in this study for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. Social facilities, area of surface installations, workshops and opencast mining areas are the areas with the highest probability for accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on reported accidents that occurred in 2012 at the ELI in Soma, and it correctly estimated the probability of accidents with lost workdays in 70% of cases.
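    As an illustration of the kind of model described (the study itself used SPSS on its own accident records), a bare-bones logistic regression fitted by gradient descent on an invented binary predictor might look like this:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression; w[0] is the intercept.
    X is a list of feature vectors, y a list of 0/1 outcomes
    (here: whether an accident caused more than 3 lost workdays)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict_proba(w, xi):
    """Predicted probability of the severe (>3 lost workdays) outcome."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
```

    The hypothetical single feature here might encode, say, whether the accident involved manual handling; in the actual study the model used the seven recorded categories as predictors.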

  5. Performance of statistical process control methods for regional surgical site infection surveillance: a 10-year multicentre pilot study.

    PubMed

    Baker, Arthur W; Haridy, Salah; Salem, Joseph; Ilieş, Iulian; Ergai, Awatef O; Samareh, Aven; Andrianas, Nicholas; Benneyan, James C; Sexton, Daniel J; Anderson, Deverick J

    2017-11-24

    Traditional strategies for surveillance of surgical site infections (SSI) have multiple limitations, including delayed and incomplete outbreak detection. Statistical process control (SPC) methods address these deficiencies by combining longitudinal analysis with graphical presentation of data. We performed a pilot study within a large network of community hospitals to evaluate performance of SPC methods for detecting SSI outbreaks. We applied conventional Shewhart and exponentially weighted moving average (EWMA) SPC charts to 10 previously investigated SSI outbreaks that occurred from 2003 to 2013. We compared the results of SPC surveillance to the results of traditional SSI surveillance methods. Then, we analysed the performance of modified SPC charts constructed with different outbreak detection rules, EWMA smoothing factors and baseline SSI rate calculations. Conventional Shewhart and EWMA SPC charts both detected 8 of the 10 SSI outbreaks analysed, in each case prior to the date of traditional detection. Among detected outbreaks, conventional Shewhart chart detection occurred a median of 12 months prior to outbreak onset and 22 months prior to traditional detection. Conventional EWMA chart detection occurred a median of 7 months prior to outbreak onset and 14 months prior to traditional detection. Modified Shewhart and EWMA charts additionally detected several outbreaks earlier than conventional SPC charts. Shewhart and SPC charts had low false-positive rates when used to analyse separate control hospital SSI data. Our findings illustrate the potential usefulness and feasibility of real-time SPC surveillance of SSI to rapidly identify outbreaks and improve patient safety. Further study is needed to optimise SPC chart selection and calculation, statistical outbreak detection rules and the process for reacting to signals of potential outbreaks. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. 
All rights reserved. No commercial use is permitted unless otherwise expressly granted.
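
    The EWMA chart referenced above can be illustrated with a minimal sketch. This is not the study's implementation; the function name, smoothing factor (lam = 0.2) and 3-sigma control limit are textbook defaults assumed here for illustration.

```python
# Minimal EWMA control-chart sketch for periodic SSI rates (illustrative only).
def ewma_signals(rates, baseline_mean, baseline_sd, lam=0.2, L=3.0):
    """Return indices of periods where the EWMA statistic crosses the
    upper control limit, i.e. potential outbreak signals."""
    signals = []
    z = baseline_mean  # the EWMA is initialised at the baseline rate
    for t, x in enumerate(rates):
        z = lam * x + (1 - lam) * z
        # time-varying variance of the EWMA statistic after t + 1 observations
        var = baseline_sd ** 2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (t + 1)))
        ucl = baseline_mean + L * var ** 0.5
        if z > ucl:
            signals.append(t)
    return signals
```

    A Shewhart chart is the special case that compares each raw point against baseline_mean ± 3·baseline_sd; the EWMA's memory (lam < 1) makes it more sensitive to small, sustained shifts.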

  6. Osseointegration of titanium implants functionalised with phosphoserine-tethered poly(epsilon-lysine) dendrons: a comparative study with traditional surface treatments in sheep.

    PubMed

    Stübinger, Stefan; Nuss, Katja; Bürki, Alexander; Mosch, Isabelle; Sidler, Michèle; Meikle, Steve T; von Rechenberg, Brigitte; Santin, Matteo

    2015-02-01

    The aim of this study was to analyse the osseointegrative potential of phosphoserine-tethered dendrons when applied as surface functionalisation molecules on titanium implants in a sheep model after 2 and 8 weeks of implantation. Uncoated and dendron-coated implants were implanted in six sheep. Sandblasted and etched (SE) or porous additive manufactured (AM) implants with and without additional dendron functionalisation (SE-PSD; AM-PSD) were placed in the pelvic bone. Three implants per group were examined histologically and six implants were tested biomechanically. After 2 and 8 weeks the bone-to-implant contact (BIC) total values of SE implants (43.7±12.2; 53.3±9.0%) and SE-PSD (46.7±4.5; 61.7±4.9%) as well as AM implants (20.49±5.1; 43.9±9.7%) and AM-PSD implants (19.7±3.5; 48.3±15.6%) showed no statistically significant differences. For SE-PSD and AM-PSD a separate analysis of only the cancellous BIC demonstrated a statistically significant difference after 2 and 8 weeks. Biomechanical findings proved the overall increased stability of the porous implants after 8 weeks. Overall, the great effect of implant macro design on osseointegration was further supported by additional phosphoserine-tethered dendrons for SE and AM implants.

  7. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
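
    As a rough illustration of how the nonoverlap statistics compared above are computed, the simplest form of Tau (the basis of Tau-U, without its baseline-trend correction) scores every baseline/treatment pair; the function name and data are hypothetical.

```python
def tau_nonoverlap(baseline, treatment):
    """Tau (A vs. B nonoverlap): compare every baseline point with every
    treatment point; ties count as zero. Ranges from -1 to 1."""
    pos = sum(1 for a in baseline for b in treatment if b > a)
    neg = sum(1 for a in baseline for b in treatment if b < a)
    return (pos - neg) / (len(baseline) * len(treatment))
```

    A value of 1.0 means complete nonoverlap (every treatment point exceeds every baseline point); values near 0 indicate no improvement.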

  8. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). 
Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944
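
    A sketch of the mean logarithmic score used above for model ranking (negative orientation, so lower is better); the probabilities below stand in for leave-one-out predictive probabilities of the observed counts and are purely illustrative.

```python
import math

def mean_log_score(pred_probs):
    """Mean logarithmic score: the average negative log predictive
    probability a model assigned to the observations (lower is better)."""
    return -sum(math.log(p) for p in pred_probs) / len(pred_probs)

# Hypothetical leave-one-out predictive probabilities for two competing GLMMs:
model_a = [0.30, 0.25, 0.40, 0.35]  # puts more mass on the observed counts
model_b = [0.10, 0.05, 0.20, 0.15]
```

    Here model_a would be ranked above model_b because its mean logarithmic score is smaller.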

  9. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). 
The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.

  10. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  11. An evaluation of the periapical status of teeth with necrotic pulps using periapical radiography and cone-beam computed tomography.

    PubMed

    Abella, F; Patel, S; Durán-Sindreu, F; Mercadé, M; Bueno, R; Roig, M

    2014-04-01

    To evaluate the presence or absence of periapical (PA) radiolucencies on individual roots of teeth with necrotic pulps, as assessed with digital PA radiographs and cone-beam computed tomography (CBCT). Digital PA radiographs and CBCT scans were taken from 161 endodontically untreated teeth (from 155 patients) diagnosed with non-vital pulps (pulp necrosis with normal PA tissue, symptomatic apical periodontitis, asymptomatic apical periodontitis, acute apical abscess and chronic apical abscess). Images were assessed by two calibrated endodontists to analyse the radiographic PA status of the teeth. A consensus was reached in the event of any disagreement. The data were analysed using McNemar's test, and significance was set at P ≤ 0.05. Three hundred and forty paired images of roots were assessed with both digital PA radiographs and CBCT images. Fifteen additional roots were identified with CBCT. PA radiolucencies were present in 132 (38.8%) roots when assessed with PA radiographs, and in 196 (57.6%) roots when assessed with CBCT. This difference was statistically significant (P < 0.05). In teeth diagnosed with pulp necrosis, symptomatic apical periodontitis or acute apical abscess, CBCT images revealed a statistically larger number of PA radiolucencies than did PA radiographs (P < 0.05). No statistical differences were observed between PA radiographs and CBCT in teeth classified with asymptomatic apical periodontitis (P = 0.31) or chronic apical abscess (P = 1). Compared with PA radiographs, CBCT revealed a higher prevalence of PA radiolucencies in endodontically untreated teeth with non-vital pulps. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
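
    The paired comparison above relies on McNemar's test, which uses only the discordant roots (positive on one imaging modality but not the other). A sketch of the exact binomial form, taking the two discordant cell counts b and c as inputs; the abstract does not state whether the exact or the chi-square version was used.

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact (binomial) two-sided McNemar p-value from the discordant
    cell counts b and c of a paired 2x2 table."""
    n = b + c
    k = min(b, c)
    # two-sided: double the one-sided binomial tail under p = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)
```

    With b = c the two modalities disagree symmetrically and the p-value is 1.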

  12. Lungworm Infections in German Dairy Cattle Herds — Seroprevalence and GIS-Supported Risk Factor Analysis

    PubMed Central

    Schunn, Anne-Marie; Conraths, Franz J.; Staubach, Christoph; Fröhlich, Andreas; Forbes, Andrew; Strube, Christina

    2013-01-01

    In November 2008, a total of 19,910 bulk tank milk (BTM) samples were obtained from dairy farms from all over Germany, corresponding to about 20% of all German dairy herds, and analysed for antibodies against the bovine lungworm Dictyocaulus viviparus by use of the recombinant MSP-ELISA. A total number of 3,397 (17.1%; n = 19,910) BTM samples tested seropositive. The prevalences in individual German federal states varied between 0.0% and 31.2% positive herds. A geospatial map was drawn to show the distribution of seropositive and seronegative herds per postal code area. ELISA results were further analysed for associations with land-use and climate data. Bivariate statistical analysis was used to identify potential spatial risk factors for dictyocaulosis. Statistically significant positive associations were found between lungworm seropositive herds and the proportion of water bodies and grassed area per postal code area. Variables that showed a statistically significant association with a positive BTM test were included in a logistic regression model, which was further refined by controlled stepwise selection of variables. The low Pseudo R2 values (0.08 for the full model and 0.06 for the final model) and further evaluation of the model by ROC analysis indicate that additional, unrecorded factors (e.g. management factors) or random effects may substantially contribute to lungworm infections in dairy cows. Veterinarians should include lungworms in the differential diagnosis of respiratory disease in dairy cattle, particularly those at pasture. Monitoring of herds through BTM screening for antibodies can help farmers and veterinarians plan and implement appropriate control measures. PMID:24040243
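
    One common definition of the Pseudo R² reported above is McFadden's, which compares the fitted model's log-likelihood with that of an intercept-only (prevalence) model; the abstract does not say which variant was used, and the function name and data below are hypothetical.

```python
import math

def mcfadden_pseudo_r2(y, p_model):
    """McFadden's pseudo-R^2: 1 - LL(model)/LL(null), where the null model
    predicts the overall prevalence for every herd."""
    prev = sum(y) / len(y)
    ll_null = sum(math.log(prev if yi else 1 - prev) for yi in y)
    ll_model = sum(math.log(pi if yi else 1 - pi) for yi, pi in zip(y, p_model))
    return 1 - ll_model / ll_null
```

    Values near 0, as in the study's models, mean the predictors explain little beyond the baseline prevalence.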

  13. Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations

    PubMed Central

    Gray, Alastair

    2017-01-01

    Increasing numbers of economic evaluations are conducted alongside randomised controlled trials. Such studies include factorial trials, which randomise patients to different levels of two or more factors and can therefore evaluate the effect of multiple treatments alone and in combination. Factorial trials can provide increased statistical power or assess interactions between treatments, but raise additional challenges for trial‐based economic evaluations: interactions may occur more commonly for costs and quality‐adjusted life‐years (QALYs) than for clinical endpoints; economic endpoints raise challenges for transformation and regression analysis; and both factors must be considered simultaneously to assess which treatment combination represents best value for money. This article aims to examine issues associated with factorial trials that include assessment of costs and/or cost‐effectiveness, describe the methods that can be used to analyse such studies and make recommendations for health economists, statisticians and trialists. A hypothetical worked example is used to illustrate the challenges and demonstrate ways in which economic evaluations of factorial trials may be conducted, and how these methods affect the results and conclusions. Ignoring interactions introduces bias that could result in adopting a treatment that does not make best use of healthcare resources, while considering all interactions avoids bias but reduces statistical power. We also introduce the concept of the opportunity cost of ignoring interactions as a measure of the bias introduced by not taking account of all interactions. We conclude by offering recommendations for planning, analysing and reporting economic evaluations based on factorial trials, taking increased analysis costs into account. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28470760
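
    The interaction logic discussed above can be made concrete for a 2×2 factorial trial: given the four cell means of costs or QALYs, the interaction is the extra effect of one factor in the presence of the other. A minimal sketch with a hypothetical parameterisation:

```python
def factorial_effects(m00, m10, m01, m11):
    """Cell means of a 2x2 factorial trial: m00 = neither factor,
    m10 = A only, m01 = B only, m11 = both. Returns the two average
    main effects and the interaction (extra effect of A when B is given)."""
    a_main = ((m10 + m11) - (m00 + m01)) / 2
    b_main = ((m01 + m11) - (m00 + m10)) / 2
    interaction = (m11 - m10) - (m01 - m00)
    return a_main, b_main, interaction
```

    An "at the margins" analysis uses only the main effects; when the interaction is non-zero, costs and QALYs of the treatment combination are misestimated, which is the bias the authors quantify as the opportunity cost of ignoring interactions.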

  14. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.

  15. Asymmetric correlation matrices: an analysis of financial data

    NASA Astrophysics Data System (ADS)

    Livan, G.; Rebecchi, L.

    2012-06-01

    We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between the two markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
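
    A minimal sketch of the time-lagged correlation entries from which such a non-symmetric matrix is built: correlating series x at time t with series y at time t + lag. The function and data are illustrative; the paper works with full cross-market matrices and their complex eigenvalue spectra.

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag] (lag >= 0); one
    entry of a non-symmetric cross-correlation matrix between two markets."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5
```

    Because lagged_corr(x, y, lag) generally differs from lagged_corr(y, x, lag), the resulting matrix is non-symmetric and its eigenvalues can be complex.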

  16. The Dissociative Subtype of PTSD Scale (DSPS): Initial Evaluation in a National Sample of Trauma-Exposed Veterans

    PubMed Central

    Wolf, Erika J.; Mitchell, Karen S.; Sadeh, Naomi; Hein, Christina; Fuhrman, Isaac; Pietrzak, Robert H.; Miller, Mark W.

    2015-01-01

    The fifth edition of the Diagnostic and Statistical Manual (DSM-5) includes a dissociative subtype of posttraumatic stress disorder (PTSD), but no existing measures specifically assess it. This paper describes the initial evaluation of a 15-item self-report measure of the subtype called the Dissociative Subtype of PTSD Scale (DSPS) in an on-line survey of 697 trauma-exposed military veterans representative of the US veteran population. Exploratory factor analyses of the lifetime DSPS items supported the intended structure of the measure consisting of three factors reflecting derealization/depersonalization, loss of awareness, and psychogenic amnesia. Consistent with prior research, latent profile analyses assigned 8.3% of the sample to a highly dissociative class distinguished by pronounced symptoms of derealization and depersonalization. Overall, results provide initial psychometric support for the lifetime DSPS scales; additional research in clinical and community samples is needed to further validate the measure. PMID:26603115

  17. The Dissociative Subtype of PTSD Scale: Initial Evaluation in a National Sample of Trauma-Exposed Veterans.

    PubMed

    Wolf, Erika J; Mitchell, Karen S; Sadeh, Naomi; Hein, Christina; Fuhrman, Isaac; Pietrzak, Robert H; Miller, Mark W

    2017-06-01

    The fifth edition of the Diagnostic and Statistical Manual includes a dissociative subtype of posttraumatic stress disorder, but no existing measures specifically assess it. This article describes the initial evaluation of a 15-item self-report measure of the subtype called the Dissociative Subtype of Posttraumatic Stress Disorder Scale (DSPS) in an online survey of 697 trauma-exposed military veterans representative of the U.S. veteran population. Exploratory factor analyses of the lifetime DSPS items supported the intended structure of the measure consisting of three factors reflecting derealization/depersonalization, loss of awareness, and psychogenic amnesia. Consistent with prior research, latent profile analyses assigned 8.3% of the sample to a highly dissociative class distinguished by pronounced symptoms of derealization and depersonalization. Overall, results provide initial psychometric support for the lifetime DSPS scales; additional research in clinical and community samples is needed to further validate the measure.

  18. National Trends in Trace Metals Concentrations in Ambient Particulate Matter

    NASA Astrophysics Data System (ADS)

    McCarthy, M. C.; Hafner, H. R.; Charrier, J. G.

    2007-12-01

    Ambient measurements of trace metals identified as hazardous air pollutants (HAPs, air toxics) collected in the United States from 1990 to 2006 were analyzed for long-term trends. Trace metals analyzed include lead, manganese, arsenic, chromium, nickel, cadmium, and selenium. Visual and statistical analyses were used to identify and quantify temporal variations in air toxics at national and regional levels. Trend periods were required to be at least five years. Lead particles decreased in concentration at most monitoring sites, but trends in other metals were not consistent over time or spatially. In addition, routine ambient monitoring methods had method detection limits (MDLs) too high to adequately measure concentrations for trends analysis. Differences between measurement methods at urban and rural sites also confound trends analyses. Improvements in MDLs, and a better understanding of comparability between networks, are needed to better quantify trends in trace metal concentrations in the future.

  19. Analysis of complex environment effect on near-field emission

    NASA Astrophysics Data System (ADS)

    Ravelo, B.; Lalléchère, S.; Bonnet, P.; Paladian, F.

    2014-10-01

    This article deals with uncertainty analyses of the electromagnetic compatibility (EMC) emission of radiofrequency circuits, based on the near-field/near-field (NF/NF) transform combined with a stochastic approach. Using 2D data corresponding to the electromagnetic (EM) field (X = E or H) scanned in an observation plane placed at height z0 above the circuit under test (CUT), the X field map was extracted. Uncertainty was then assessed via the statistical moments of the X component. In addition, a stochastic collocation method was considered, and the calculations were applied to the planar EM near field radiated by two CUTs: a Wilkinson power divider and a microstrip line operating at GHz frequencies. After implementation in Matlab, the mean and standard deviation were computed. The study illustrates how variations in environmental parameters may affect EM fields. The NF uncertainty methodology can be applied to the effects of any physical parameter in a complex environment and is useful as a printed circuit board (PCB) design guideline.

  20. Arthroscopy for treating temporomandibular joint disorders.

    PubMed

    Currie, Roger

    2011-01-01

    The Cochrane Oral Health Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Medline, Embase, Lilacs, Allied and Complementary Medicine Database (AMED) and CINAHL databases were searched. In addition, the reference lists of the included articles were checked and 14 journals hand searched. Randomised controlled clinical trials (RCTs) of arthroscopy for treating TMDs were included. There were no restrictions regarding the language or date of publication. Two review authors independently extracted data, and three review authors independently assessed the risk of bias of included trials. The authors of the selected articles were contacted for additional information. Pooling of trials was only attempted if at least two trials of comparable protocols, with the same conditions and similar outcome measurements, were available. Statistical analysis was performed in accordance with the Cochrane Collaboration guidelines. Seven RCTs (n = 349) met the inclusion criteria. All the studies were at either high or unclear risk of bias. Pain was evaluated after six months in two studies. No statistically significant differences were found between the arthroscopy and nonsurgical groups (standardised mean difference (SMD) = 0.004; 95% confidence interval (CI) -0.46 to 0.55, P = 0.81). Two studies analysed pain 12 months after surgery (arthroscopy and arthrocentesis) in 81 patients. No statistically significant differences were found (mean difference (MD) = 0.10; 95% CI -1.46 to 1.66, P = 0.90).
    Three studies analysed the same outcome in patients who had undergone arthroscopic surgery or open surgery, and a statistically significant difference was found after 12 months (SMD = 0.45; 95% CI 0.01 to 0.89, P = 0.05) in favour of open surgery. Two studies compared six clinical outcomes 12 months after arthroscopy and open surgery (interincisal opening over 35 mm; maximum protrusion over 5 mm; click; crepitation; tenderness on palpation in the TMJ; and tenderness in the jaw muscles). The outcome measures did not present statistically significant differences (odds ratio (OR) = 1.00; 95% CI 0.45 to 2.21, P = 1.00). Two studies compared the maximum interincisal opening after 12 months of postsurgical follow-up. A statistically significant difference in favour of the arthroscopy group was observed (MD = 5.28; 95% CI 3.46 to 7.10, P < 0.0001). Two studies compared mandibular function after 12 months of follow-up, with 40 patients evaluated; the outcome measure was mandibular functionality (MFIQ). The difference was not statistically significant (MD = 1.58; 95% CI -0.78 to 3.94, P = 0.19). Both arthroscopy and nonsurgical treatments reduced pain after six months. When compared with arthroscopy, open surgery was more effective at reducing pain after 12 months. Nevertheless, there were no differences in mandibular functionality or in other outcomes in clinical evaluations. Arthroscopy led to greater improvement in maximum interincisal opening after 12 months than arthrocentesis; however, there was no difference in pain.
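
    The pooled SMD and MD figures above come from inverse-variance meta-analysis; a minimal fixed-effect sketch (the review may have used random-effects pooling, so treat this as illustrative only, with hypothetical inputs):

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its 95% CI,
    as used when combining SMDs or MDs across trials (sketch only)."""
    weights = [1 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)
```

    Trials with smaller variances (typically larger samples) receive proportionally larger weights in the pooled estimate.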

  1. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  2. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses

    PubMed Central

    Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295
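
    The trial-based resampling procedure described above can be sketched as a percentile bootstrap over whole trials; the function is a simplified illustration, not the authors' method, and `stat` stands in for any network statistic (an edge weight, a topology measure, etc.).

```python
import random

def bootstrap_ci(trials, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a network statistic, resampling whole
    trials with replacement to respect the trial structure of the data."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(trials) for _ in trials]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

    Resampling trials, rather than individual time points, preserves the within-trial temporal dependence that network statistics rely on.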

  3. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.

    PubMed

    Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S; Guenther, Frank H; Kramer, Mark A

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty, both in the functional network edges and the corresponding aggregate measures of network topology, are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here, appropriate for static and dynamic network inference and different statistical measures of coupling, permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.

  4. Functional constraints on tooth morphology in carnivorous mammals

    PubMed Central

    2012-01-01

    Background The range of potential morphologies resulting from evolution is limited by complex interacting processes, ranging from development to function. Quantifying these interactions is important for understanding adaptation and convergent evolution. Using three-dimensional reconstructions of carnivoran and dasyuromorph tooth rows, we compared statistical models of the relationship between tooth row shape and the opposing tooth row, a static feature, as well as measures of mandibular motion during chewing (occlusion), which are kinetic features. This is a new approach to quantifying functional integration because we use measures of movement and displacement, such as the amount the mandible translates laterally during occlusion, as opposed to conventional morphological measures, such as mandible length and geometric landmarks. By sampling two distantly related groups of ecologically similar mammals, we study carnivorous mammals in general rather than a specific group of mammals. Results Statistical model comparisons demonstrate that the best performing models always include some measure of mandibular motion, indicating that functional and statistical models of tooth shape as purely a function of the opposing tooth row are too simple and that increased model complexity provides a better understanding of tooth form. The predictors of the best performing models always included the opposing tooth row shape and a relative linear measure of mandibular motion. Conclusions Our results provide quantitative support of long-standing hypotheses of tooth row shape as being influenced by mandibular motion in addition to the opposing tooth row. Additionally, this study illustrates the utility and necessity of including kinetic features in analyses of morphological integration. PMID:22899809

  5. GENE-Counter: A Computational Pipeline for the Analysis of RNA-Seq Data for Gene Expression Differences

    PubMed Central

    Di, Yanming; Schafer, Daniel W.; Wilhelm, Larry J.; Fox, Samuel E.; Sullivan, Christopher M.; Curzon, Aron D.; Carrington, James C.; Mockler, Todd C.; Chang, Jeff H.

    2011-01-01

    GENE-counter is a complete Perl-based computational pipeline for analyzing RNA-Sequencing (RNA-Seq) data for differential gene expression. In addition to its use in studying transcriptomes of eukaryotic model organisms, GENE-counter is applicable for prokaryotes and non-model organisms without an available genome reference sequence. For alignments, GENE-counter is configured for CASHX, Bowtie, and BWA, but an end user can use any Sequence Alignment/Map (SAM)-compliant program of preference. To analyze data for differential gene expression, GENE-counter can be run with any one of three statistics packages that are based on variations of the negative binomial distribution. The default method is a new and simple statistical test we developed based on an over-parameterized version of the negative binomial distribution. GENE-counter also includes three different methods for assessing differentially expressed features for enriched gene ontology (GO) terms. Results are transparent and data are systematically stored in a MySQL relational database to facilitate additional analyses as well as quality assessment. We used next generation sequencing to generate a small-scale RNA-Seq dataset derived from the heavily studied defense response of Arabidopsis thaliana and used GENE-counter to process the data. Collectively, the support from analysis of microarrays as well as the observed and substantial overlap in results from each of the three statistics packages demonstrates that GENE-counter is well suited for handling the unique characteristics of small sample sizes and high variability in gene counts. PMID:21998647
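
As a rough illustration of why negative-binomial-family tests are used for such count data, the following sketch estimates an NB dispersion by the method of moments (Var = mu + phi*mu^2) and runs a Wald-style two-group comparison. This is a simplified stand-in, not GENE-counter's actual statistical test, and the counts are hypothetical:

```python
import math

def nb_moments(counts):
    # Method-of-moments fit of a negative binomial: mean mu and
    # dispersion phi, parameterized so that Var = mu + phi * mu^2.
    n = len(counts)
    mu = sum(counts) / n
    var = sum((c - mu) ** 2 for c in counts) / (n - 1)
    phi = max(0.0, (var - mu) / (mu ** 2))  # phi = 0 reduces to Poisson
    return mu, var, phi

def wald_two_group(a, b):
    # Wald-style z statistic comparing group means, using the
    # moment-estimated variances; two-sided normal p-value via erfc.
    mu_a, var_a, _ = nb_moments(a)
    mu_b, var_b, _ = nb_moments(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    z = (mu_a - mu_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical read counts for one gene across biological replicates.
control = [50, 61, 47, 55, 52]
treated = [112, 98, 120, 105, 131]
mu_c, var_c, phi_c = nb_moments(control)
z, p = wald_two_group(treated, control)
print(round(mu_c, 1), round(phi_c, 4), round(z, 2))
```

Real RNA-Seq tests share information across genes when estimating dispersion; this per-gene sketch only shows the shape of the calculation.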

  6. Exploratory Visual Analysis of Statistical Results from Microarray Experiments Comparing High and Low Grade Glioma

    PubMed Central

    Reif, David M.; Israel, Mark A.; Moore, Jason H.

    2007-01-01

    The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org. PMID:19390666

  7. Healthcare expenditures and patient satisfaction: cost and quality from the consumer's perspective in the US.

    PubMed

    Fu, Alex Z; Wang, Nan

    2008-05-01

    Both cost and quality of healthcare are major concerns in the United States. Using patient satisfaction as a quality indicator, we seek to identify the relationship between healthcare cost and quality from the perspective of the community-dwelling population in the United States. We examined a nationally representative sample of 13,980 adults (age ≥ 18 years) in the 2003 Medical Expenditure Panel Survey (MEPS). Given the idiosyncrasies of the cost data distribution, a recently developed extended estimating equation (EEE) model was employed to identify the relationship between patient satisfaction and healthcare expenditure, after controlling for individual demographic covariates, co-morbidity profile, and functional and activity limitations. In addition, a series of sensitivity analyses were conducted to verify the identified relationship. All statistics were adjusted using the proper sampling weight from the MEPS data. Average annual healthcare expenditures for 2003 ranged between $3923 and $6073 when grouped by patient satisfaction ratings, with a mean value of $4779 for all individuals who rated perceived satisfaction with their healthcare. We found no statistically significant relationship between patient satisfaction and total healthcare expenditure (p = 0.60), nor was a non-monotonic relationship identified. All sensitivity analysis results revealed a lack of relationship between patient satisfaction and healthcare expenditures. Patient satisfaction might not reflect the quality of healthcare from an objective clinical standpoint. The identified cost-satisfaction relationship may not be extrapolated to other quality indicators. Due to the cross-sectional study design, no causal relationship could be inferred between patient satisfaction and healthcare expenditure. Our study adds to the literature on healthcare cost and quality by suggesting that the improvement of patient satisfaction may not require additional healthcare spending.

  8. Using venlafaxine to treat behavioral disorders in patients with autism spectrum disorder.

    PubMed

    Carminati, Giuliana Galli; Gerber, Fabienne; Darbellay, Barbara; Kosel, Markus Mathaus; Deriaz, Nicolas; Chabert, Jocelyne; Fathi, Marc; Bertschy, Gilles; Ferrero, François; Carminati, Federico

    2016-02-04

    To test the efficacy of venlafaxine at a dose of 18.75 mg/day on the reduction of behavioral problems such as irritability and hyperactivity/noncompliance in patients with intellectual disabilities and autism spectrum disorder (ASD). Our secondary hypothesis was that the usual doses of zuclopenthixol and/or clonazepam would decrease in the venlafaxine-treated group. In a randomized double-blind study, we compared six patients who received venlafaxine along with their usual treatment (zuclopenthixol and/or clonazepam) with seven patients who received placebo plus usual care. Irritability, hyperactivity/noncompliance, and overall clinical improvement were measured after 2 and 8 weeks, using validated clinical scales. Univariate analyses showed that the symptom of irritability improved in the entire sample (p = 0.023 after 2 weeks, p = 0.061 at study endpoint), although no difference was observed between the venlafaxine and placebo groups. No significant decrease in hyperactivity/noncompliance was observed during the study. At the end of the study, global improvement was observed in 33% of participants treated with venlafaxine and in 71% of participants in the placebo group (p = 0.29). The study found that lower cumulative doses of clonazepam and zuclopenthixol were required in the venlafaxine group. Multivariate analyses (principal component analyses) with at least three combinations of variables showed that the two populations could be clearly separated (p < 0.05). Moreover, in all cases, the venlafaxine population had lower values for the Aberrant Behavior Checklist (ABC), Behavior Problems Inventory (BPI), and levels of urea with respect to the placebo group. In one case, a reduction in the dosage of clonazepam was also suggested. For an additional set of variables (ABC factor 2, BPI frequency of aggressive behaviors, hematic ammonia at Day 28, and zuclopenthixol and clonazepam intake), the separation between the two samples was statistically significant, as was Bartlett's test, but the Kaiser–Meyer–Olkin Measure of Sampling Adequacy was below the accepted threshold. This set of variables showed a reduction in the cumulative intake of both zuclopenthixol and clonazepam. Despite the small sample sizes, this study documented a statistically significant effect of venlafaxine. Moreover, we showed that lower doses of zuclopenthixol and clonazepam were needed in the venlafaxine group, although this difference was not statistically significant. This was confirmed by multivariate analyses, where this difference reached statistical significance when using a combination of variables involving zuclopenthixol. Larger-scale studies are recommended to better investigate the effectiveness of venlafaxine treatment in patients with intellectual disabilities and ASD.

  9. On the nexus of air pollution and health expenditures: new empirical evidence.

    PubMed

    Blázquez-Fernández, Carla; Cantarero-Prieto, David; Pascual-Sáez, Marta

    2018-05-21

    To analyse the impact of per capita income and environmental air quality variables on health expenditure determinants. In this study, we analyse the relationship between air pollution and health expenditure in 29 OECD countries over the period 1995-2014. In addition, we test whether our findings differ between countries with higher or lower incomes. The econometric results show that per capita income has a positive effect on health expenditure, but is not as statistically significant as expected when lag-time is incorporated. In addition, an anchorage effect is observed, which implies that about 80%-90% of previous expenditure explains current expenditure. Our empirical results are quite consistent between groups and when compared with the full sample. Nevertheless, there appear to be some differences when broken down by financing scheme (total, public, and private). Overall, our findings could be used to clarify the appropriate health expenditure level or to obtain better environmental quality and social well-being. That is, empirical support is provided on how health management and policy makers should include more considerations for the use of cleaner fuels in developed countries. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  10. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) are in small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined if rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than true differential effect among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated into the design stage to minimize erroneous conclusions.
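
The core caution here, that chance alone produces "significant" subgroup splits in small trials, is easy to demonstrate by simulation. This toy sketch is not the authors' meta-database procedure: the outcome distribution, carrier fraction, and sample sizes are invented. It draws trials with no true ApoE4 effect and counts how often a carrier vs. non-carrier comparison reaches p < 0.05:

```python
import math
import random

def welch_z(a, b):
    # Welch-style z statistic for two independent samples
    # (normal approximation; a t reference would be stricter).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def subgroup_false_positive_rate(n_per_arm, carrier_frac=0.4,
                                 n_trials=2000, alpha=0.05, seed=7):
    # Simulate change scores with NO true carrier effect, then run the
    # post hoc carrier vs. non-carrier test within one arm of each trial.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        carriers, noncarr = [], []
        for _ in range(n_per_arm):
            group = carriers if rng.random() < carrier_frac else noncarr
            group.append(rng.gauss(3.0, 6.0))  # same distribution for both
        if len(carriers) < 2 or len(noncarr) < 2:
            continue
        z = welch_z(carriers, noncarr)
        if math.erfc(abs(z) / math.sqrt(2)) < alpha:
            hits += 1
    return hits / n_trials

small = subgroup_false_positive_rate(n_per_arm=25)
large = subgroup_false_positive_rate(n_per_arm=200)
print(small, large)  # both near the nominal 5%, despite zero true effect
```

Even without the between-study heterogeneity the paper documents, roughly one post hoc split in twenty is "significant" by chance; heterogeneity only inflates this further.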

  11. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
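
For readers unfamiliar with the fixed-effect vs. random-effects choice discussed in the statement, a minimal sketch of both pooling models follows. The log odds ratios and variances are hypothetical, and DerSimonian-Laird is used for the between-study variance, one common estimator among several:

```python
import math

def pool_fixed(effects, variances):
    # Inverse-variance fixed-effect pooling of study effect sizes
    # (e.g. log odds ratios), returning the pooled estimate and its SE.
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

def pool_random(effects, variances):
    # DerSimonian-Laird random-effects pooling: estimate the
    # between-study variance tau^2 from Cochran's Q, then re-weight.
    w = [1.0 / v for v in variances]
    fixed, _ = pool_fixed(effects, variances)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return est, math.sqrt(1.0 / sum(w_star)), tau2

# Log odds ratios and their variances from five hypothetical studies.
log_or = [0.8, -0.1, 0.5, 0.05, 0.4]
var = [0.04, 0.09, 0.05, 0.12, 0.07]
fe, fe_se = pool_fixed(log_or, var)
re, re_se, tau2 = pool_random(log_or, var)
print(round(fe, 3), round(re, 3), round(tau2, 4))
```

With heterogeneous effects like these, tau^2 > 0 and the random-effects estimate moves toward the unweighted mean with a wider interval, which is why the statement ties the model choice to clinical and methodological similarity rather than to a heterogeneity test.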

  12. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific design of the computing procedure. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution that satisfies the S3ONC admits an FPTAS.
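
As context for record 12, the MCP mentioned in result (iii) has a simple closed form. A minimal sketch of the penalty and its scalar thresholding rule in the orthonormal-design case follows; these are standard textbook formulas, not code from the paper:

```python
def mcp_penalty(t, lam, gamma):
    # Minimax concave penalty (MCP): quadratic taper up to gamma*lam,
    # constant beyond, so large coefficients are left unpenalized.
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def mcp_threshold(z, lam, gamma):
    # Scalar MCP thresholding rule (orthonormal design, gamma > 1):
    # soft-threshold then rescale inside the taper, identity outside.
    if abs(z) <= gamma * lam:
        shrunk = max(0.0, abs(z) - lam)
        return (1.0 if z >= 0 else -1.0) * shrunk / (1.0 - 1.0 / gamma)
    return z

lam, gamma = 1.0, 3.0
print(mcp_threshold(0.5, lam, gamma))  # below lam: set to 0
print(mcp_threshold(2.0, lam, gamma))  # inside the taper: shrunk
print(mcp_threshold(5.0, lam, gamma))  # beyond gamma*lam: unbiased
```

The unbiasedness beyond gamma*lam is what distinguishes folded concave penalties from the Lasso, whose soft-threshold shrinks every coefficient.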

  13. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

    To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed with the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analyses) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies), and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  14. Rasch analysis for psychometric improvement of science attitude rating scales

    NASA Astrophysics Data System (ADS)

    Oon, Pey-Tee; Fan, Xitao

    2017-04-01

    Students' attitude towards science (SAS) is often a subject of investigation in science education research. Rating-scale surveys are commonly used in the study of SAS. The present study illustrates how Rasch analysis can be used to provide psychometric information about SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an existing dataset of the Trends in International Mathematics and Science Study (TIMSS 2011). Data from all eighth-grade participants from Hong Kong and Singapore (N = 9942) were retrieved for analysis. Additional insights from Rasch analysis that are not commonly available from conventional test and item analyses were discussed, such as measurement invariance of SAS, unidimensionality of the SAS construct, optimum utilization of SAS rating categories, and item difficulty hierarchy in the SAS scale. Recommendations on how TIMSS items measuring SAS can be better designed were provided. The study also highlights the importance of using Rasch estimates for the statistical parametric tests (e.g. ANOVA, t-test) that are common in science education research for group comparisons.
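
A property exploited by Rasch analysis in studies like this one, item-difficulty invariance across ability levels, can be shown in a few lines. The difficulties below are illustrative values, not TIMSS estimates:

```python
import math

def rasch_p(theta, b):
    # Dichotomous Rasch model: probability that a person with ability
    # theta endorses an item with difficulty b (both on the logit scale).
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    return math.log(p / (1.0 - p))

# Invariance: for any two items, the difference in log-odds of
# endorsement equals b2 - b1 regardless of the person's ability.
b1, b2 = -0.5, 1.2
for theta in (-2.0, 0.0, 2.0):
    gap = logit(rasch_p(theta, b1)) - logit(rasch_p(theta, b2))
    print(round(gap, 6))  # always b2 - b1 = 1.7
```

This is why Rasch item calibrations, unlike raw proportions correct, can be compared across samples such as the Hong Kong and Singapore cohorts.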

  15. Changing response of the North Atlantic/European winter climate to the 11 year solar cycle

    NASA Astrophysics Data System (ADS)

    Ma, Hedi; Chen, Haishan; Gray, Lesley; Zhou, Liming; Li, Xing; Wang, Ruili; Zhu, Siguang

    2018-03-01

    Recent studies have presented conflicting results regarding the 11 year solar cycle (SC) influences on winter climate over the North Atlantic/European region. Analyses of only the most recent decades suggest a synchronized North Atlantic Oscillation (NAO)-like response pattern to the SC. Analyses of long-term climate data sets dating back to the late 19th century, however, suggest a mean sea level pressure (mslp) response that lags the SC by 2-4 years in the southern node of the NAO (i.e. the Azores region). To understand the conflicting nature and cause of these time dependencies in the SC surface response, the present study employs a lead/lag multi-linear regression technique with a sliding window of 44 years over the period 1751-2016. Results confirm previous analyses, in which the average response for the whole time period features a statistically significant 2-4 year lagged mslp response centered over the Azores region. Overall, the lagged nature of the Azores mslp response is generally consistent in time. Stronger and statistically significant SC signals tend to appear in the periods when the SC forcing amplitudes are relatively larger. Individual month analysis indicates that the consistent lagged response in the December-January-February average arises primarily from the early winter months (i.e. December and January), which has been associated with ocean feedback processes that involve reinforcement by anomalies from the previous winter. Additional analysis suggests that the synchronous NAO-like response in recent decades arises primarily from late winter (February), possibly reflecting a result of strong internal noise.

  16. Manual therapy compared with physical therapy in patients with non-specific neck pain: a randomized controlled trial.

    PubMed

    Groeneweg, Ruud; van Assen, Luite; Kropman, Hans; Leopold, Huco; Mulder, Jan; Smits-Engelsman, Bouwien C M; Ostelo, Raymond W J G; Oostendorp, Rob A B; van Tulder, Maurits W

    2017-01-01

    Manual therapy according to the School of Manual Therapy Utrecht (MTU) is a specific type of passive manual joint mobilization. MTU has not yet been systematically compared to other manual therapies and physical therapy. In this study the effectiveness of MTU is compared to physical therapy, particularly active exercise therapy (PT), in patients with non-specific neck pain. Patients with neck pain, aged between 18 and 70 years, were included in a pragmatic randomized controlled trial with a one-year follow-up. Primary outcome measures were global perceived effect and functioning (Neck Disability Index); the secondary outcome was pain intensity (Numeric Rating Scale for Pain). Outcomes were measured at 3, 7, 13, 26 and 52 weeks. Multilevel analyses (intention-to-treat) were the primary analyses for overall between-group differences. In addition to the primary and secondary outcomes, the number of treatment sessions in the MTU and PT groups was analyzed. Data were collected from September 2008 to February 2011. A total of 181 patients were included. Multilevel analyses showed no statistically significant overall differences at one year between the MTU and PT groups on any of the primary and secondary outcomes. The MTU group required significantly fewer treatment sessions than the PT group (3.1 vs. 5.9 after 7 weeks; 6.1 vs. 10.0 after 52 weeks, respectively). Patients with neck pain improved in both groups, without statistically significant or clinically relevant differences between the MTU and PT groups during one-year follow-up. ClinicalTrials.gov Identifier: NCT00713843.

  17. Back to "once a caesarean: always a caesarean"? A trend analysis in Switzerland.

    PubMed

    Christmann-Schmid, Corina; Raio, Luigi; Scheibner, Katrin; Müller, Martin; Surbek, Daniel

    2016-11-01

    Caesarean sections (CS) have significantly increased worldwide, and a previous CS is nowadays an important and increasingly reported indication to perform a repeat CS. There is a paucity of information in Switzerland on the incidence of repeat CS after a previous CS and its relationship to rates of vaginal birth after CS (VBAC). The aim of this study was to analyse the actual trend in VBAC in Switzerland. We performed a retrospective cohort study to analyse the proportion of VBAC among all pregnant women with previous caesarean sections who gave birth during two time periods (group 1: 1998/1999 vs. group 2: 2004/2005) in our tertiary care referral hospital and in the annual statistics of Swiss Women's Hospitals (ASF-Statistics). In addition, the proportion of labour inductions after a previous caesarean, and their success, was analysed. In both cohorts studied, we found a significant decrease in vaginal births (p < 0.05) and a significant increase in primary elective repeat caesarean sections (p < 0.05) from the first to the second time period, while there was a decrease in secondary repeat caesarean sections. The prevalence of labour induction did not decrease. Our study shows that vaginal birth after a prior caesarean section has decreased over time in Switzerland. There was no significant change in labour induction during the study period. While this trend might reflect an increasing demand for safety in pregnancy and childbirth, it concomitantly increases maternal risks in further pregnancies, and women need to be appropriately informed about the long-term risks.

  18. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298

  19. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
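
The ROC and diagnostic-likelihood-ratio machinery demonstrated in records 18 and 19 reduces to a few formulas. A minimal sketch with invented checklist scores (not the CBCL data analyzed in the paper):

```python
def roc_auc(scores_pos, scores_neg):
    # Rank-based AUC: probability that a random case scores above a
    # random non-case, ties counting 1/2 (the Mann-Whitney statistic).
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def dlr(scores_pos, scores_neg, cutoff):
    # Diagnostic likelihood ratios at a cutoff, calling "score >= cutoff"
    # a case: DLR+ = sens / (1 - spec), DLR- = (1 - sens) / spec.
    sens = sum(s >= cutoff for s in scores_pos) / len(scores_pos)
    spec = sum(s < cutoff for s in scores_neg) / len(scores_neg)
    return sens / (1 - spec), (1 - sens) / spec

# Hypothetical checklist raw scores for mood-disorder vs. other referrals.
cases = [34, 28, 25, 31, 22, 19, 30, 27]
noncases = [12, 9, 18, 21, 7, 15, 11, 24, 14, 10]
auc = roc_auc(cases, noncases)
hi_lr, lo_lr = dlr(cases, noncases, cutoff=20)
print(round(auc, 3), round(hi_lr, 2), round(lo_lr, 2))
```

A DLR+ near 4-5, as here, shifts the post-test odds of a case upward about four- to five-fold; the papers' thresholds (<0.3 below a score of 8, 7.4 above 30) are read the same way.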

  20. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
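
The paper's warning, that statistical mediation can appear where theoretical mediation is implausible, can be reproduced with simulated cross-sectional data. In this sketch (invented coefficients; the product-of-coefficients method with Frisch-Waugh residualization for the b path), reversing the roles of predictor and mediator still yields a nonzero "indirect effect":

```python
import random

def slope(x, y):
    # OLS slope of y regressed on a single predictor x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residualize(y, x):
    # Residuals of y after removing its linear dependence on x.
    b = slope(x, y)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def indirect_effect(x, m, y):
    # Product of coefficients: a = slope of M on X; b = slope of Y on M
    # controlling for X, obtained by residualizing both on X.
    a = slope(x, m)
    b = slope(residualize(m, x), residualize(y, x))
    return a * b

rng = random.Random(42)
n = 500
x = [rng.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + 0.2 * xi + rng.gauss(0, 1) for xi, mi in zip(x, m)]

forward = indirect_effect(x, m, y)   # the ordering the data were built from
reversed_ = indirect_effect(m, x, y) # a causally implausible ordering
print(round(forward, 3), round(reversed_, 3))
# Both products are nonzero: the atemporal associations alone cannot
# adjudicate between the two orderings.
```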

  1. Cancer Statistics Animator

    Cancer.gov

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. It provides access to incidence, mortality, and survival statistics. Users select the type of statistic, variables, and format, and can then extract the statistics in a delimited format for further analyses.

  2. Ataxia Telangiectasia–Mutated Gene Polymorphisms and Acute Normal Tissue Injuries in Cancer Patients After Radiation Therapy: A Systematic Review and Meta-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Lihua; Cui, Jingkun; Tang, Fengjiao

    Purpose: Studies of the association between ataxia telangiectasia–mutated (ATM) gene polymorphisms and acute radiation injuries are often small in sample size, and the results are inconsistent. We conducted the first meta-analysis to provide a systematic review of published findings. Methods and Materials: Publications were identified by searching PubMed up to April 25, 2014. Primary meta-analysis was performed for all acute radiation injuries, and subgroup meta-analyses were based on clinical endpoint. The influence of sample size and radiation injury incidence on genetic effects was estimated in sensitivity analyses. Power calculations were also conducted. Results: The meta-analysis was conducted on the ATM polymorphism rs1801516, including 5 studies with 1588 participants. For all studies, the cut-off for differentiating cases from controls was grade 2 acute radiation injuries. The primary meta-analysis showed a significant association with overall acute radiation injuries (allelic model: odds ratio = 1.33, 95% confidence interval: 1.04-1.71). Subgroup analyses detected an association between the rs1801516 polymorphism and a significant increase in urinary and lower gastrointestinal injuries and an increase in skin injury that was not statistically significant. There was no between-study heterogeneity in any meta-analyses. In the sensitivity analyses, small studies did not show larger effects than large studies. In addition, studies with a high incidence of acute radiation injuries showed larger effects than studies with a low incidence. Power calculations revealed that the statistical power of the primary meta-analysis was borderline, whereas there was adequate power for the subgroup analysis of studies with a high incidence of acute radiation injuries. Conclusions: Our meta-analysis showed a consistency of the results from the overall and subgroup analyses. We also showed that the genetic effect of the rs1801516 polymorphism on acute radiation injuries was dependent on the incidence of the injury. These results support the evidence of an association between the rs1801516 polymorphism and acute radiation injuries, encouraging further research on this topic.
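
    The inverse-variance pooling behind an allelic-model odds ratio like the one reported above can be sketched as follows. The per-study odds ratios and confidence intervals below are hypothetical placeholders, not the five studies pooled in this meta-analysis, and a fixed-effect model is assumed for simplicity:

```python
import numpy as np

def pooled_odds_ratio(odds_ratios, ci_lowers, ci_uppers):
    """Fixed-effect inverse-variance pooling on the log-OR scale.
    Standard errors are recovered from the 95% confidence limits."""
    log_or = np.log(odds_ratios)
    se = (np.log(ci_uppers) - np.log(ci_lowers)) / (2 * 1.96)
    w = 1.0 / se**2                          # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)  # weighted mean of log-ORs
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se))

# Hypothetical per-study estimates, for illustration only
pooled_or, lo, hi = pooled_odds_ratio([1.2, 1.5, 1.1],
                                      [0.8, 0.9, 0.7],
                                      [1.8, 2.5, 1.7])
```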

  3. Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview

    PubMed Central

    Golder, Su; Loke, Yoon K.; Bland, Martin

    2011-01-01

    Background There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies. Methods and Findings Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93–1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance. 
    Conclusions Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types. PMID:21559325
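
    The ratio-of-odds-ratios comparison used in this overview can be sketched directly. The inputs below are hypothetical pooled estimates with their log-scale standard errors, not values from the included meta-analyses, and the two designs are assumed to be independent:

```python
import math

def ratio_of_odds_ratios(or_rct, se_log_rct, or_obs, se_log_obs):
    """Ratio of odds ratios (RCT-based vs observational) with a 95% CI.
    Assumes the two pooled log-OR estimates are independent, so their
    variances add on the log scale."""
    log_ror = math.log(or_rct) - math.log(or_obs)
    se = math.sqrt(se_log_rct**2 + se_log_obs**2)
    return (math.exp(log_ror),
            math.exp(log_ror - 1.96 * se),
            math.exp(log_ror + 1.96 * se))

# Hypothetical pooled estimates, for illustration only
ror, lo, hi = ratio_of_odds_ratios(1.10, 0.08, 1.05, 0.06)
```

    A ratio near 1 with a CI overlapping 1, as in the overview's pooled estimate of 1.03 (0.93–1.15), indicates no systematic discrepancy between the two study designs.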

  4. Color and Morphology of Lava Flows on Io

    NASA Astrophysics Data System (ADS)

    Piatek, Jennifer L.; McElfresh, Sarah B. Z.; Byrnes, Jeffrey M.; Hale, Amy Snyder; Crown, David A.

    2000-12-01

    Analyses of color and morphologic changes in Voyager images of lava flows on Io were conducted to extend previous flow studies to additional volcanoes in preparation for comparison to Galileo data. Blue and orange filter images of Atar, Daedalus, and Ra Paterae were examined to identify systematic downflow decreases in blue/orange reflectivity suggested in earlier studies as diagnostic of color changes in cooled sulfur flows. Analyses of the color and morphology of 21 lava flows were conducted at these volcanoes, with additional morphologic analysis of lava flows at Agni, Masaaw, Mbali, Shoshu, and Talos Paterae. A total of 66 lava flows of up to 245 km in length were mapped to identify morphologic changes consistent with the rheologic changes expected to occur in sulfur flows. Although downflow color changes are observed, the trends are not consistent, even at the same edifice. Individual flows exhibit a statistically significant increase in blue/orange ratio, decrease in blue/orange ratio, or a lack of progressive downflow color variation. Color changes have similar magnitudes downflow and across flow, and the color ranges observed are similar from volcano to volcano, suggesting that similar processes are controlling color ratios at these edifices. In addition, using flow widening and branching as an indicator of the low viscosity exhibited by sulfur cooling from high temperatures, these flows do not exhibit morphologic changes consistent with the systematic behavior expected from the simple progressive cooling of sulfur.

  5. 3D-Digital soil property mapping by geoadditive models

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas

    2016-04-01

    In many digital soil mapping (DSM) applications, soil properties must be predicted not only for a single but for multiple soil depth intervals. In the GlobalSoilMap project, as an example, predictions are computed for the 0-5 cm, 5-15 cm, 15-30 cm, 30-60 cm, 60-100 cm, and 100-200 cm depth intervals (Arrouays et al., 2014). Legacy soil data are often used for DSM. It is common in such datasets that soil properties were measured for soil horizons or for layers at varying soil depths and with non-constant thickness (support). This poses problems for DSM: One strategy is to harmonize the soil data to a common depth prior to the analyses (e.g. Bishop et al., 1999) and to conduct the statistical analyses for each depth interval independently. The disadvantage of this approach is that the predictions for different depths are computed independently from each other, so that the predicted depth profiles may be unrealistic. Furthermore, the error induced by the harmonization to a common depth is ignored in this approach (Orton et al., 2016). A better strategy is therefore to process all soil data jointly, without prior harmonization, by a 3D-analysis that takes soil depth and geographical position explicitly into account. Usually, the non-constant support of the data is then ignored, but Orton et al. (2016) recently presented a geostatistical approach that accounts for the non-constant support of soil data and relies on restricted maximum likelihood (REML) estimation of a linear geostatistical model with a separable, heteroscedastic, zonal anisotropic auto-covariance function and area-to-point kriging (Kyriakidis, 2004). Although this model is theoretically coherent and elegant, estimating its many parameters by REML and selecting covariates for the spatial mean function is a formidable task. A simpler approach might be to use geoadditive models (geoAMs; Kammann and Wand, 2003; Wand, 2003) for 3D-analyses of soil data.
    Geoadditive models (geoAMs) extend the scope of the linear model with spatially correlated errors to account for nonlinear effects of covariates by fitting componentwise smooth, nonlinear functions to the covariates (additive terms). In the geoAM framework, REML estimation of model parameters and computation of best linear unbiased predictions (BLUP) build on the fact that both geostatistical and additive models can be parametrized as linear mixed models (Wand, 2003). For 3D-DSM analysis of soil data, it is natural to model depth profiles of soil properties by additive terms of soil depth. Including interactions between these additive terms and covariates of the spatial mean function makes it possible to model spatially varying depth profiles. Furthermore, with a suitable choice of the basis functions of the additive term (e.g. polynomial regression splines), the non-constant support of the soil data can be taken into account. Finally, boosting (Bühlmann and Hothorn, 2007) can be used for selecting covariates for the spatial mean function. The presentation will detail the geoAM approach and present an example of geoAM for 3D-analysis of legacy soil data. Arrouays, D., McBratney, A. B., Minasny, B., Hempel, J. W., Heuvelink, G. B. M., MacMillan, R. A., Hartemink, A. E., Lagacherie, P., and McKenzie, N. J. (2014). The GlobalSoilMap project specifications. In GlobalSoilMap: Basis of the global spatial soil information system, pages 9-12. CRC Press. Bishop, T., McBratney, A., and Laslett, G. (1999). Modelling soil attribute depth functions with equal-area quadratic smoothing splines. Geoderma, 91(1-2), 27-45. Bühlmann, P. and Hothorn, T. (2007). Boosting algorithms: Regularization, prediction and model fitting. Statistical Science, 22(4), 477-505. Kammann, E. E. and Wand, M. P. (2003). Geoadditive models. Journal of the Royal Statistical Society, Series C: Applied Statistics, 52(1), 1-18. Kyriakidis, P. (2004). A geostatistical framework for area-to-point spatial interpolation. Geographical Analysis, 36(3), 259-289. Orton, T., Pringle, M., and Bishop, T. (2016). A one-step approach for modelling and mapping soil properties based on profile data sampled over varying depth intervals. Geoderma, 262, 174-186. Wand, M. P. (2003). Smoothing and mixed models. Computational Statistics, 18(2), 223-249.
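
    The additive depth term at the core of the geoAM idea can be illustrated with a minimal penalized-spline sketch in the spirit of Wand (2003). The truncated-line basis, knot placement, penalty value, and the organic-carbon profile below are all hypothetical choices for illustration, not the model or data of the presentation; a full geoAM would add spatial terms and fit the penalty by REML:

```python
import numpy as np

def spline_basis(depth, knots):
    """Truncated-line basis for a penalized spline in depth:
    [1, d, (d - k1)_+, (d - k2)_+, ...]."""
    d = np.asarray(depth, dtype=float)
    cols = [np.ones_like(d), d] + [np.maximum(d - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_depth_profile(depth, y, knots, penalty=1.0):
    """Ridge-penalized least squares; only the knot coefficients are
    penalized, mimicking the mixed-model representation in which the
    smooth part of the additive term is treated as a random effect."""
    X = spline_basis(depth, knots)
    P = np.diag([0.0, 0.0] + [penalty] * len(knots))
    beta = np.linalg.solve(X.T @ X + P, X.T @ np.asarray(y, dtype=float))
    return lambda d_new: spline_basis(d_new, knots) @ beta

# Hypothetical organic-carbon measurements declining with depth (cm)
depth = np.array([2.5, 10.0, 22.0, 45.0, 80.0, 150.0])
oc = np.array([3.1, 2.4, 1.6, 1.0, 0.6, 0.4])
profile = fit_depth_profile(depth, oc, knots=[15.0, 60.0], penalty=0.1)
pred = profile(np.array([5.0, 30.0, 100.0]))  # predictions at new depths
```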

  6. Dissecting the genetics of complex traits using summary association statistics.

    PubMed

    Pasaniuc, Bogdan; Price, Alkes L

    2017-02-01

    During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.

  7. Statistical innovations in diagnostic device evaluation.

    PubMed

    Yu, Tinghui; Li, Qin; Gray, Gerry; Yue, Lilly Q

    2016-01-01

    Due to rapid technological development, innovations in diagnostic devices are proceeding at an extremely fast pace. Accordingly, the need to adopt innovative statistical methods has emerged in the evaluation of diagnostic devices. Statisticians in the Center for Devices and Radiological Health at the Food and Drug Administration have provided leadership in implementing statistical innovations. The innovations discussed in this article include: the adoption of bootstrap and jackknife methods, the implementation of appropriate multiple reader multiple case study designs, the application of robustness analyses for missing data, and the development of study designs and data analyses for companion diagnostics.
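
    The bootstrap and jackknife methods mentioned in this record can be sketched generically. The paired reading differences below are hypothetical, and the percentile bootstrap shown is only one of several interval constructions:

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample with
    replacement, recompute the statistic, and take the tail quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

def jackknife_se(data, stat):
    """Leave-one-out jackknife standard error for a statistic."""
    data = np.asarray(data, dtype=float)
    n = data.size
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean())**2))

# Hypothetical paired differences between two diagnostic readings
diffs = np.array([0.3, -0.1, 0.5, 0.2, 0.4, 0.0, 0.6, 0.1])
lo, hi = bootstrap_ci(diffs, np.mean)
se = jackknife_se(diffs, np.mean)
```

    For the sample mean, the jackknife standard error coincides exactly with the usual formula s/sqrt(n), which makes it a convenient sanity check before applying either method to more complex statistics such as ROC areas.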

  8. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group

    PubMed Central

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T.; Pereira, Carol; Rosenkranz, Susan L.; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu (Jeanne); Wang, Rui; Lok, Judith

    2017-01-01

    Abstract The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. PMID:28350899

  9. Across the Great Divide: The Effects of Technology in Secondary Biology Classrooms

    NASA Astrophysics Data System (ADS)

    Worley, Johnny Howard, II

    This study investigates the relationship between technology use and student achievement in public high schools across North Carolina. The purpose of this study was to determine whether a digital divide (differences in technology utilization based on student demographics of race/ethnicity, gender, socioeconomic status, and municipality) exists among schools and whether those differences relate to student achievement in high school biology classrooms. The study uses North Carolina end-of-course (EOC) data for biology to analyze student demographic data and assessment results from the 2010-2011 school year from the North Carolina Department of Public Instruction. The data analyses use descriptive and factorial univariate statistics to determine the existence of digital divides and their effects on biology achievement. Analysis of these data described patterns of technology use to determine whether potential variances resulted in a digital divide. Specific technology uses were identified in the data, and their impact on biology achievement scores within various demographic groups was then examined. Research findings revealed statistically significant variations of use within different population groups. Although statistically significant, these associations were minimal at best, based on the effect size scale established by Cohen (1988). Additional factorial univariate analyses were employed to determine potential relationships between technology use and student achievement. The data revealed that technology use did not influence the variation of student achievement scale scores as much as race/ethnicity and socioeconomic status. White students outperformed Hispanic students by an average of three scale score points and Black students by an average of six scale score points.
Technology use alone averaged less than a one point difference in mean scale scores, and only when interacting with race, gender, and/or SES did the mean difference increase. However, this increase within the context of the biology scale score range was negligible. This study contributes to the existing body of research on the effects of technology use on student achievement and its influence within various student demographic groups and municipalities. The study also provides additional research information for effective technology utilization, implementation, and instruction in educational environments.

  10. Synergistic effect of interaction between perceived health and social activity on depressive symptoms in the middle-aged and elderly: a population-based longitudinal study.

    PubMed

    Chun, Sung-Youn; Han, Kyu-Tae; Lee, Seo Yoon; Kim, Chan Ok; Park, Eun-Cheol

    2015-03-13

    To examine the synergistic effect of interaction between perceived health and social activity on depressive symptoms. We investigated whether the interaction between perceived health and social activity has a synergistic effect on depressive symptoms in the middle-aged and elderly using data from 6590 respondents aged 45 and older in the Korean Longitudinal Study on Aging (KLoSA), 2006-2012. A generalised linear mixed-effects model was used to investigate the association in a longitudinal data form. Depressive symptoms were measured using the Center for Epidemiological Studies Depression 10 Scale (CES-D10). Perceived health and level of social activity were categorical variables with three values. Participation in six social activities was assessed. Interactions between perceived health status and social activity were statistically significant for almost all social activity/perceived health combinations. Addition of the interaction term significantly decreased CES-D10 scores, confirming the synergistic effect of the interaction between perceived health status and social activity ('normal×moderate', β=-0.1826; 'poor×moderate', β=-0.5739; 'poor×active', β=-0.8935). In addition, we performed stratified analyses by region: urban or rural. In urban respondents, the additional effect of the interaction term decreased CES-D10 scores and all social activity/perceived health combinations were statistically significant ('normal×moderate', β=-0.2578; 'normal×active', β=-0.3945; 'poor×moderate', β=-0.5739; 'poor×active', β=-0.8935). In rural respondents, only one social activity/perceived health combination was statistically significant, and the additional effect of the interaction term showed no consistent trend on CES-D10 scores. The interaction between perceived health and social activity has a synergistic effect on depressive symptoms; the additional effect of the interaction term significantly decreased CES-D10 scores in our models. 

  11. An Observational Study on Quantifying the Distance of Supercell-Boundary Interaction in the Great Plains

    NASA Astrophysics Data System (ADS)

    Magee, Kathleen Marie

    Several case studies and numerical simulations have confirmed that baroclinic boundaries provide enhanced horizontal and vertical vorticity, wind shear, helicity, and moisture that induce stronger updrafts, higher reflectivity, and stronger low-level rotation in supercells. However, the distance at which a boundary will provide such enhancement is less well-defined. Previous studies have identified distances ranging from 5 km to 200 km, and only focused on tornadogenesis rather than all forms of severe weather. To better aid short-term forecasts, the actual distances at which supercells produce severe weather in proximity to a boundary needs to be assessed. In this study, the distance between a large number of supercells and nearby surface boundaries (including warm fronts, stationary fronts, and outflow boundaries) is measured throughout the lifetime of each storm; the distance at which associated reports of large hail, strong winds, and tornadoes is also collected. Data is gathered from SPC storm reports, WPC surface analyses, ASOS archives, and NCDC radar archives. The Integrated Data Viewer (IDV) from Unidata is used to analyze the distance of the supercell from the boundary every twenty minutes. These distances are then used to interpolate the distance at which severe and significantly severe reports occurred; reports are grouped by severe type (tornado, hail, or wind), then by boundary type (warm front, stationary front, or outflow). Using these distributions, the range from the boundary at which each type of severe weather is produced is identified per boundary type to assist forecasters. Overall, the range at which tornadoes are more likely to be produced is generally closer to the boundary than hail and wind reports for all three boundary types, likely owing to the enhanced helicity present near a boundary. 
Statistical analyses are used to assess the sensitivity of report distributions to report type, boundary type, distance from the boundary, and boundary strength. Surprisingly, the distances of different report types near the same boundary did not test as statistically different, except for warm fronts; however, when comparing the distances of the same report type associated with different boundaries, statistically significant differences were found between the boundaries near which each report type is maximally produced. Temperature gradients are not as statistically different between boundaries, but the temperature gradient within the same boundary is statistically significant for the occurrence of tornadoes vs. hail. Additional analyses were conducted to complement the distributions of severe report distances. Previous research has demonstrated that the angle at which supercells interact with boundaries impacts tornado production; in the present dataset, supercells interacting at an angle less than 45° are not only more likely to produce tornadoes, but also more likely to produce significantly severe tornadoes. Additionally, other studies have suggested that the temperature gradient of a boundary is of minimal importance with regards to tornado production. Interestingly, the tornadoes in this study were associated with stronger temperature gradients than for hail or wind production, thus implying that a greater amount of baroclinically-generated horizontal vorticity is required for tornadogenesis, likely due to the enhanced low-level rotation.

  12. ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)

    EPA Science Inventory

    The availability of geographically indexed health and population data, with advances in computing, geographical information systems and statistical methodology, have opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...

  13. Impact of Medicare Advantage Prescription Drug Plan Star Ratings on Enrollment before and after Implementation of Quality-Related Bonus Payments in 2012

    PubMed Central

    Li, Pengxiang; Doshi, Jalpa A.

    2016-01-01

    Objective Since 2007, the Centers for Medicare and Medicaid Services have published 5-star quality rating measures to aid consumers in choosing Medicare Advantage Prescription Drug Plans (MAPDs). We examined the impact of these star ratings on Medicare Advantage Prescription Drug (MAPD) enrollment before and after 2012, when star ratings became tied to bonus payments for MAPDs that could be used to improve plan benefits and/or reduce premiums in the subsequent year. Methods A longitudinal design and multivariable hybrid models were used to assess whether star ratings had a direct impact on concurrent year MAPD contract enrollment (by influencing beneficiary choice) and/or an indirect impact on subsequent year MAPD contract enrollment (because ratings were linked to bonus payments). The main analysis was based on contract-year level data from 2009–2015. We compared effects of star ratings in the pre-bonus payment period (2009–2011) and post-bonus payment period (2012–2015). Extensive sensitivity analyses varied the analytic techniques, unit of analysis, and sample inclusion criteria. Similar analyses were conducted separately using stand-alone PDP contract-year data; since PDPs were not eligible for bonus payments, they served as an external comparison group. Results The main analysis included 3,866 MAPD contract-years. A change of star rating had no statistically significant effect on concurrent year enrollment in any of the pre-, post-, or pre-post combined periods. On the other hand, a star rating increase was associated with a statistically significant increase in the subsequent year enrollment (a 1-star increase associated with +11,337 enrollees, p<0.001) in the post-bonus payment period but had a very small and statistically non-significant effect on subsequent year enrollment in the pre-bonus payment period. Further, the difference in effects on subsequent year enrollment was statistically significant between the pre- and post-periods (p = 0.011). 
Sensitivity analyses indicated that the findings were robust. No statistically significant effect of star ratings was found on concurrent or subsequent year enrollment in the pre- or post-period in the external comparison group of stand-alone PDP contracts. Conclusion Star ratings had no direct impact on concurrent year MAPD enrollment before or after the introduction of bonus payments tied to star ratings. However, after the introduction of these bonus payments, MAPD star ratings had a significant indirect impact of increasing subsequent year enrollment, likely via the reinvestment of bonuses to provide lower premiums and/or additional member benefits in the following year. PMID:27149092

  14. Impact of Medicare Advantage Prescription Drug Plan Star Ratings on Enrollment before and after Implementation of Quality-Related Bonus Payments in 2012.

    PubMed

    Li, Pengxiang; Doshi, Jalpa A

    2016-01-01

    Since 2007, the Centers for Medicare and Medicaid Services have published 5-star quality rating measures to aid consumers in choosing Medicare Advantage Prescription Drug Plans (MAPDs). We examined the impact of these star ratings on Medicare Advantage Prescription Drug (MAPD) enrollment before and after 2012, when star ratings became tied to bonus payments for MAPDs that could be used to improve plan benefits and/or reduce premiums in the subsequent year. A longitudinal design and multivariable hybrid models were used to assess whether star ratings had a direct impact on concurrent year MAPD contract enrollment (by influencing beneficiary choice) and/or an indirect impact on subsequent year MAPD contract enrollment (because ratings were linked to bonus payments). The main analysis was based on contract-year level data from 2009-2015. We compared effects of star ratings in the pre-bonus payment period (2009-2011) and post-bonus payment period (2012-2015). Extensive sensitivity analyses varied the analytic techniques, unit of analysis, and sample inclusion criteria. Similar analyses were conducted separately using stand-alone PDP contract-year data; since PDPs were not eligible for bonus payments, they served as an external comparison group. The main analysis included 3,866 MAPD contract-years. A change of star rating had no statistically significant effect on concurrent year enrollment in any of the pre-, post-, or pre-post combined periods. On the other hand, star rating increase was associated with a statistically significant increase in the subsequent year enrollment (a 1-star increase associated with +11,337 enrollees, p<0.001) in the post-bonus payment period but had a very small and statistically non-significant effect on subsequent year enrollment in the pre-bonus payment period. Further, the difference in effects on subsequent year enrollment was statistically significant between the pre- and post-periods (p = 0.011). 
Sensitivity analyses indicated that the findings were robust. No statistically significant effect of star ratings was found on concurrent or subsequent year enrollment in the pre- or post-period in the external comparison group of stand-alone PDP contracts. Star ratings had no direct impact on concurrent year MAPD enrollment before or after the introduction of bonus payments tied to star ratings. However, after the introduction of these bonus payments, MAPD star ratings had a significant indirect impact of increasing subsequent year enrollment, likely via the reinvestment of bonuses to provide lower premiums and/or additional member benefits in the following year.

  15. Trait humor and longevity: do comics have the last laugh?

    PubMed

    Rotton, J

    1992-01-01

    Four sets of biographical data were analyzed in order to test the hypothesis that the ability to generate humor is associated with longevity. Although steps were taken to ensure that tests had high levels of statistical power, analyses provided very little support for the idea that individuals with a well-developed sense of humor live longer than serious writers and other entertainers. In addition, a subsidiary analysis revealed that those in the business of entertaining others died at an earlier age than those in other lines of endeavor. These findings suggest that researchers should turn their attention from trait humor to the effects of humorous material.

  16. Statistics for the Relative Detectability of Chemicals in Weak Gaseous Plumes in LWIR Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.

    2008-10-30

    The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.

  17. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  18. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  19. Statistical studies of selected trace elements with reference to geology and genesis of the Carlin gold deposit, Nevada

    USGS Publications Warehouse

    Harris, Michael; Radtke, Arthur S.

    1976-01-01

    Linear regression and discriminant analyses techniques were applied to gold, mercury, arsenic, antimony, barium, copper, molybdenum, lead, zinc, boron, tellurium, selenium, and tungsten analyses from drill holes into unoxidized gold ore at the Carlin gold mine near Carlin, Nev. The statistical treatments employed were used to judge proposed hypotheses on the origin and geochemical paragenesis of this disseminated gold deposit.

  20. The Effects of Using a Wiki on Student Engagement and Learning of Report Writing Skills in a University Statistics Course

    ERIC Educational Resources Information Center

    Neumann, David L.; Hood, Michelle

    2009-01-01

    A wiki was used as part of a blended learning approach to promote collaborative learning among students in a first year university statistics class. One group of students analysed a data set and communicated the results by jointly writing a practice report using a wiki. A second group analysed the same data but communicated the results in a…

  1. Extreme between-study homogeneity in meta-analyses could offer useful insights.

    PubMed

    Ioannidis, John P A; Trikalinos, Thomas A; Zintzaras, Elias

    2006-10-01

    Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We examined whether it is also important to probe whether there is extreme between-study homogeneity. We used heterogeneity tests with left-sided statistical significance for inference and developed a Monte Carlo simulation test for testing extreme homogeneity in risk ratios across studies, using the empiric distribution of the summary risk ratio and heterogeneity statistic. A left-sided P=0.01 threshold was set for claiming extreme homogeneity to minimize type I error. Among 11,803 meta-analyses with binary contrasts from the Cochrane Library, 143 (1.21%) had left-sided P-value <0.01 for the asymptotic Q statistic and 1,004 (8.50%) had left-sided P-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies in the meta-analyses. We identified examples where extreme between-study homogeneity (left-sided P-value <0.01) could result from various possibilities beyond chance. These included inappropriate statistical inference (asymptotic vs. Monte Carlo), use of a specific effect metric, correlated data or stratification using strong predictors of outcome, and biases and potential fraud. Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.
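The left-sided use of the heterogeneity statistic described above can be sketched in a few lines. This is an illustrative helper (the function name and inputs are assumptions, not the authors' Monte Carlo procedure): Cochran's Q is computed from per-study log risk ratios and variances, and the *left* tail of the chi-squared distribution is taken, so that implausibly small Q (extreme homogeneity) yields a small p-value.

```python
import numpy as np
from scipy.stats import chi2

def left_sided_q_test(log_rr, var):
    """Cochran's Q heterogeneity statistic with a LEFT-sided p-value:
    extreme between-study homogeneity gives a Q far below its expectation
    under chance, hence a small left-tail p.  Inputs are per-study log
    risk ratios and their variances (illustrative names)."""
    w = 1.0 / np.asarray(var, dtype=float)   # inverse-variance weights
    y = np.asarray(log_rr, dtype=float)
    y_bar = np.sum(w * y) / np.sum(w)        # fixed-effect summary estimate
    q = np.sum(w * (y - y_bar) ** 2)         # heterogeneity statistic Q
    df = len(y) - 1
    return q, chi2.cdf(q, df)                # left tail: cdf, not 1 - cdf
```

Identical study estimates give Q = 0 and a left-sided p of 0 (homogeneity more extreme than chance would produce), while scattered estimates push the left-sided p towards 1; the paper's P = 0.01 threshold guards against over-calling such extremes.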

  2. Metal and physico-chemical variations at a hydroelectric reservoir analyzed by Multivariate Analyses and Artificial Neural Networks: environmental management and policy/decision-making tools.

    PubMed

    Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M

    2013-01-01

    This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of elements Ca, K, Mg and Na, and the second by Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which indicates a possibility of their input into the reservoir system by domestic sewage and agricultural run-offs. These statistical applications, thus, are also relevant in cases of environmental management and policy decision-making processes, to identify which factors should be further studied and/or modified to recover degraded or contaminated water bodies. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  4. From sexless to sexy: Why it is time for human genetics to consider and report analyses of sex.

    PubMed

    Powers, Matthew S; Smith, Phillip H; McKee, Sherry A; Ehringer, Marissa A

    2017-01-01

    Science has come a long way with regard to the consideration of sex differences in clinical and preclinical research, but one field remains behind the curve: human statistical genetics. The goal of this commentary is to raise awareness and discussion about how to best consider and evaluate possible sex effects in the context of large-scale human genetic studies. Over the course of this commentary, we reinforce the importance of interpreting genetic results in the context of biological sex, establish evidence that sex differences are not being considered in human statistical genetics, and discuss how best to conduct and report such analyses. Our recommendation is to run stratified analyses by sex no matter the sample size or the result and report the findings. Summary statistics from stratified analyses are helpful for meta-analyses, and patterns of sex-dependent associations may be hidden in a combined dataset. In the age of declining sequencing costs, large consortia efforts, and a number of useful control samples, it is now time for the field of human genetics to appropriately include sex in the design, analysis, and reporting of results.

  5. Transfusion Indication Threshold Reduction (TITRe2) randomized controlled trial in cardiac surgery: statistical analysis plan.

    PubMed

    Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A

    2015-02-22

    The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. ISRCTN70923932 .

  6. Prognostic factors in patients with advanced cancer: use of the patient-generated subjective global assessment in survival prediction.

    PubMed

    Martin, Lisa; Watanabe, Sharon; Fainsinger, Robin; Lau, Francis; Ghosh, Sunita; Quan, Hue; Atkins, Marlis; Fassbender, Konrad; Downing, G Michael; Baracos, Vickie

    2010-10-01

    To determine whether elements of a standard nutritional screening assessment are independently prognostic of survival in patients with advanced cancer. A prospective nested cohort of patients with metastatic cancer was accrued from different units of a Regional Palliative Care Program. Patients completed a nutritional screen on admission. Data included age, sex, cancer site, height, weight history, dietary intake, 13 nutrition impact symptoms, and patient- and physician-reported performance status (PS). Univariate and multivariate survival analyses were conducted. Concordance statistics (c-statistics) were used to test the predictive accuracy of models based on training and validation sets; a c-statistic of 0.5 indicates the model predicts the outcome as well as chance; perfect prediction has a c-statistic of 1.0. A training set of patients in palliative home care (n = 1,164) was used to identify prognostic variables. Primary disease site, PS, short-term weight change (either gain or loss), dietary intake, and dysphagia predicted survival in multivariate analysis (P < .05). A model including only disease site and PS predicted survival with high c-statistics between predicted and observed responses in the training set (0.90) and in the validation set (0.88; n = 603). The addition of weight change, dietary intake, and dysphagia did not further improve the c-statistic of the model. The c-statistic was also not altered by substituting physician-rated palliative PS for patient-reported PS. We demonstrate a high probability of concordance between predicted and observed survival for patients in distinct palliative care settings (home care, tertiary inpatient, ambulatory outpatient) based on patient-reported information.
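The c-statistic interpretation given in the record (0.5 = chance, 1.0 = perfect) can be made concrete with a simplified concordance index. This sketch assumes uncensored survival times; a real analysis like the study's must also handle censoring (e.g. Harrell's C), and the function name is illustrative.

```python
from itertools import combinations

def c_statistic(risk, time):
    """Simplified concordance index for uncensored survival data: the
    fraction of comparable patient pairs in which the patient with the
    higher predicted risk dies earlier.  0.5 = chance, 1.0 = perfect."""
    concordant = usable = 0.0
    for (r1, t1), (r2, t2) in combinations(zip(risk, time), 2):
        if t1 == t2:
            continue                      # tied survival times: pair unusable
        usable += 1
        if (r1 > r2 and t1 < t2) or (r2 > r1 and t2 < t1):
            concordant += 1               # higher risk, shorter survival
        elif r1 == r2:
            concordant += 0.5             # tied predictions count half
    return concordant / usable
```

With perfectly ordered predictions, `c_statistic([3, 2, 1], [1, 2, 3])` returns 1.0; reversing the risk ordering returns 0.0.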

  7. Increasing large scale windstorm damage in Western, Central and Northern European forests, 1951-2010

    NASA Astrophysics Data System (ADS)

    Gregow, H.; Laaksonen, A.; Alper, M. E.

    2017-04-01

    Using reports of forest losses caused directly by large scale windstorms (or primary damage, PD) from the European Forest Institute database (comprising 276 PD reports from 1951-2010), total growing stock (TGS) statistics of European forests and the daily North Atlantic Oscillation (NAO) index, we identify a statistically significant change in storm intensity in Western, Central and Northern Europe (17 countries). Using the validated set of storms, we found that the year 1990 represents a change-point at which the average intensity of the most destructive storms indicated by PD/TGS > 0.08% increased by more than a factor of three. A likelihood ratio test provides strong evidence that the change-point represents a real shift in the statistical behaviour of the time series. All but one of the seven catastrophic storms (PD/TGS > 0.2%) occurred since 1990. Additionally, we detected a related decrease in September-November PD/TGS and an increase in December-February PD/TGS. Our analyses point to the possibility that the impact of climate change on the North Atlantic storms hitting Europe has started during the last two and a half decades.

  8. Increasing large scale windstorm damage in Western, Central and Northern European forests, 1951–2010

    PubMed Central

    Gregow, H.; Laaksonen, A.; Alper, M. E.

    2017-01-01

    Using reports of forest losses caused directly by large scale windstorms (or primary damage, PD) from the European Forest Institute database (comprising 276 PD reports from 1951–2010), total growing stock (TGS) statistics of European forests and the daily North Atlantic Oscillation (NAO) index, we identify a statistically significant change in storm intensity in Western, Central and Northern Europe (17 countries). Using the validated set of storms, we found that the year 1990 represents a change-point at which the average intensity of the most destructive storms indicated by PD/TGS > 0.08% increased by more than a factor of three. A likelihood ratio test provides strong evidence that the change-point represents a real shift in the statistical behaviour of the time series. All but one of the seven catastrophic storms (PD/TGS > 0.2%) occurred since 1990. Additionally, we detected a related decrease in September–November PD/TGS and an increase in December–February PD/TGS. Our analyses point to the possibility that the impact of climate change on the North Atlantic storms hitting Europe has started during the last two and a half decades. PMID:28401947

  9. A simple method to accurately position Port-A-Cath without the aid of intraoperative fluoroscopy or other localizing devices.

    PubMed

    Horng, Huann-Cheng; Yuan, Chiou-Chung; Chao, Kuan-Chong; Cheng, Ming-Huei; Wang, Peng-Hui

    2007-06-01

    To evaluate the efficacy and acceptability of Port-A-Cath (PAC) insertion with (conventional method, group II) and without (modified method, group I) the aid of intraoperative fluoroscopy or other localizing devices. A total of 158 women with various kinds of gynecological cancers warranting PAC insertion (n = 86 in group I and n = 72 in group II, respectively) were evaluated. Data for analyses included patient age, main disease, dislocation site, surgical time, complications, and catheter outcome. There was no statistical difference between the two groups in terms of age, main disease, complications, and rates of patent catheters. However, the rate of appropriate positioning in the superior vena cava (SVC) differed statistically between the two groups (100% in group I and 82% in group II; P = 0.001). In addition, the surgical time in group I was statistically shorter than that in group II (P < 0.001). The modified method for inserting the PAC offered the following benefits: avoidance of X-ray exposure for both the operator and the patient, accurate positioning in the SVC, and a shorter surgical time. (c) 2007 Wiley-Liss, Inc.

  10. Statistical quality control through overall vibration analysis

    NASA Astrophysics Data System (ADS)

    Carnero, M. a. Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos

    2010-05-01

    The present study introduces the concept of statistical quality control in automotive wheel bearings manufacturing processes. Defects on products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes likely occur under certain operating conditions and do not require high vibration amplitudes but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, an evaluation of the quality results of the finished parts under different combinations of process variables is assessed. This paper intends to establish the foundations to predict the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and design of experiments follows a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper utilizes different statistical tools to support the conclusions, such as chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley, and Kruskal-Wallis tests. The analysis presented is the starting point to extend the use of predictive techniques (vibration analysis) for quality control. This paper demonstrates the existence of predictive variables (high-frequency vibration displacements) that are sensitive to the process setup and the quality of the products obtained. Based on the result of this overall vibration analysis, a second paper will analyse self-induced vibration spectrums in order to define limit vibration bands, controllable every cycle or connected to permanent vibration-monitoring systems able to adjust the process variables identified by ANOVA, once the vibration readings exceed established quality limits.
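The decision flow echoed in this record (run ANOVA only on data meeting normality and homoscedasticity criteria, otherwise fall back to a non-parametric test) can be sketched as follows. The function name and the specific check choices (Shapiro-Wilk and Bartlett, both of which the paper names among its tools) are an assumed minimal combination, not the authors' full battery.

```python
from scipy import stats

def anova_or_kruskal(*groups, alpha=0.05):
    """One-way comparison of several groups: use parametric ANOVA only
    when every group passes a Shapiro-Wilk normality check AND the
    groups pass a Bartlett homoscedasticity check; otherwise fall back
    to the non-parametric Kruskal-Wallis test.  Returns (test name, p)."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    equal_var = stats.bartlett(*groups).pvalue > alpha
    if normal and equal_var:
        return "ANOVA", stats.f_oneway(*groups).pvalue
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue
```

For vibration readings this would be applied per combination of process variables, with the fallback guarding the ANOVA assumptions the paper states explicitly.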

  11. The space of ultrametric phylogenetic trees.

    PubMed

    Gavryushkin, Alex; Drummond, Alexei J

    2016-08-21

    The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce and that the choice between metric spaces requires additional properties to be considered. In particular, the summary tree minimising the squared distance to the trees in the sample might differ between parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Ultrasound analysis of mental artery flow in elderly patients: a case-control study.

    PubMed

    Baladi, Marina G; Tucunduva Neto, Raul R C M; Cortes, Arthur R G; Aoki, Eduardo M; Arita, Emiko S; Freitas, Claudio F

    2015-01-01

    Mental artery flow decreases with age and may have an aetiological role in alveolar ridge atrophy. The aim of this study was to identify factors associated with alterations of mental artery flow, assessed by ultrasonography. This case-control study was conducted on elderly patients (aged above 60 years) at the beginning of dental treatment. Intraoral B-mode Doppler ultrasonography was used to assess mental artery flow. The cases were defined as patients with a weak/absent ultrasound signal, whereas the controls presented a strong ultrasound signal. Demographics and radiographic findings (low bone mineral density on dual-energy X-ray absorptiometry and mandibular cortical index on panoramic radiographs) were analysed as risk factors for weak/absent ultrasound signal and were calculated as adjusted odds ratios (AORs) with 95% confidence intervals (CIs) using conditional logistic regression. In addition, the Student's t-test was used to compare the mean alveolar bone height of the analysed groups. A p-value <0.05 was considered statistically significant. A total of 30 ultrasound examinations (12 cases and 18 controls) were analysed. A weak/absent mental artery pulse strength was significantly associated with edentulism (AOR = 3.67; 95% CI = 0.86-15.63; p = 0.046). In addition, there was a significant difference in alveolar bone height between edentulous cases and controls (p = 0.036). Within the limitations of this study, the present results indicate that edentulism is associated with diminished mental artery flow, which, in turn, affects alveolar bone height.

  13. Development of the multiple sclerosis (MS) early mobility impairment questionnaire (EMIQ).

    PubMed

    Ziemssen, Tjalf; Phillips, Glenn; Shah, Ruchit; Mathias, Adam; Foley, Catherine; Coon, Cheryl; Sen, Rohini; Lee, Andrew; Agarwal, Sonalee

    2016-10-01

    The Early Mobility Impairment Questionnaire (EMIQ) was developed to facilitate early identification of mobility impairments in multiple sclerosis (MS) patients. We describe the initial development of the EMIQ with a focus on the psychometric evaluation of the questionnaire using classical and item response theory methods. The initial 20-item EMIQ was constructed by clinical specialists and qualitatively tested among people with MS and physicians via cognitive interviews. Data from an observational study was used to make additional updates to the instrument based on exploratory factor analysis (EFA) and item response theory (IRT) analysis, and psychometric analyses were performed to evaluate the reliability and validity of the final instrument's scores and screening properties (i.e., sensitivity and specificity). Based on qualitative interview analyses, a revised 15-item EMIQ was included in the observational study. EFA, IRT and item-to-item correlation analyses revealed redundant items which were removed leading to the final nine-item EMIQ. The nine-item EMIQ performed well with respect to: test-retest reliability (ICC = 0.858); internal consistency (α = 0.893); convergent validity; and known-groups methods for construct validity. A cut-point of 41 on the 0-to-100 scale resulted in sufficient sensitivity and specificity statistics for viably identifying patients with mobility impairment. The EMIQ is a content valid and psychometrically sound instrument for capturing MS patients' experience with mobility impairments in a clinical practice setting. Additional research is suggested to further confirm the EMIQ's screening properties over time.

  14. Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, Carlos; de la Fuente Marcos, Raúl; Aarseth, Sverre J.

    2018-05-01

    Observed hyperbolic minor bodies might have an interstellar origin, but they can be natives of the Solar system as well. Fly-bys with the known planets or the Sun may result in the hyperbolic ejection of an originally bound minor body; in addition, members of the Oort cloud could be forced to follow inbound hyperbolic paths as a result of secular perturbations induced by the Galactic disc or, less frequently, due to impulsive interactions with passing stars. These four processes must leave distinctive signatures in the distribution of radiants of observed hyperbolic objects, both in terms of coordinates and velocity. Here, we perform a systematic numerical exploration of the past orbital evolution of known hyperbolic minor bodies using a full N-body approach and statistical analyses to study their radiants. Our results confirm the theoretical expectations that strong anisotropies are present in the data. We also identify a statistically significant overdensity of high-speed radiants towards the constellation of Gemini that could be due to the closest and most recent known fly-by of a star to the Solar system, that of the so-called Scholz's star. Besides 1I/2017 U1 (`Oumuamua), we single out eight candidate interstellar comets based on their radiants' velocities.

  15. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
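Aitchison's additive-log-ratio transformation mentioned in the record is easy to state concretely: divide each part of a D-part composition by one reference part and take logs, yielding D-1 unconstrained coordinates on which ordinary statistics (or kriging) can be applied; the back-transform closes the composition again. The function names below are illustrative.

```python
import numpy as np

def alr(composition, ref=-1):
    """Additive-log-ratio transform: log of each part divided by a
    reference part.  Maps a D-part composition (positive parts with a
    constant sum) to D-1 unconstrained real coordinates."""
    x = np.asarray(composition, dtype=float)
    keep = np.delete(x, ref % len(x))      # drop the reference part
    return np.log(keep / x[ref])

def alr_inverse(z):
    """Back-transform: re-append the implicit reference part (ratio 1)
    and close the composition so it sums to 1."""
    expz = np.append(np.exp(z), 1.0)
    return expz / expz.sum()
```

A round trip `alr_inverse(alr([0.2, 0.3, 0.5]))` recovers the original composition, which is what makes the logratio approach usable for the cokriging workflow the paper compares.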

  16. Robustly detecting differential expression in RNA sequencing data using observation weights

    PubMed Central

    Zhou, Xiaobei; Lindsay, Helen; Robinson, Mark D.

    2014-01-01

    A popular approach for comparing gene expression levels between (replicated) conditions of RNA sequencing data relies on counting reads that map to features of interest. Within such count-based methods, many flexible and advanced statistical approaches now exist and offer the ability to adjust for covariates (e.g. batch effects). Often, these methods include some sort of ‘sharing of information’ across features to improve inferences in small samples. It is important to achieve an appropriate tradeoff between statistical power and protection against outliers. Here, we study the robustness of existing approaches for count-based differential expression analysis and propose a new strategy based on observation weights that can be used within existing frameworks. The results suggest that outliers can have a global effect on differential analyses. We demonstrate the effectiveness of our new approach with real data and simulated data that reflects properties of real datasets (e.g. dispersion-mean trend) and develop an extensible framework for comprehensive testing of current and future methods. In addition, we explore the origin of such outliers, in some cases highlighting additional biological or technical factors within the experiment. Further details can be downloaded from the project website: http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/. PMID:24753412
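The observation-weight idea in this record (protect inference from outlying counts by downweighting them rather than discarding them) can be illustrated with a toy robust location estimate using Huber weights. This is a deliberately simplified sketch: edgeR-robust applies the same principle inside a negative-binomial GLM per gene, not to a simple mean, and all names here are illustrative.

```python
import numpy as np

def huber_weighted_mean(x, k=1.345, iters=10):
    """Iteratively reweighted location estimate: observations whose
    scaled residual exceeds k get weight k/|r| < 1, so a single outlier
    cannot dominate the fit.  Returns (estimate, final weights)."""
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x)
    mu = x.mean()
    for _ in range(iters):
        mu = np.sum(w * x) / np.sum(w)
        s = np.median(np.abs(x - mu)) / 0.6745   # robust scale (MAD)
        s = s if s > 0 else 1.0
        r = np.abs(x - mu) / s
        w = np.where(r <= k, 1.0, k / r)         # outliers get w < 1
    return mu, w

# One wild count among replicate-like values barely moves the estimate.
mu, w = huber_weighted_mean([10, 11, 9, 10, 100])
```

Here the outlier 100 ends up with a small weight and the estimate stays near 10, whereas the plain mean would be 28; the paper's contribution is embedding this kind of weighting into existing count-based differential expression frameworks.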

  17. 3D printing PLGA: a quantitative examination of the effects of polymer composition and printing parameters on print resolution

    PubMed Central

    Guo, Ting; Holzberg, Timothy R; Lim, Casey G; Gao, Feng; Gargava, Ankit; Trachtenberg, Jordan E; Mikos, Antonios G; Fisher, John P

    2018-01-01

    In the past few decades, 3D printing has played a significant role in fabricating scaffolds with consistent, complex structure that meet patient-specific needs in future clinical applications. Although many studies have contributed to this emerging field of additive manufacturing, which includes material development and computer-aided scaffold design, current quantitative analyses do not correlate material properties, printing parameters, and printing outcomes to a great extent. A model that correlates these properties has tremendous potential to standardize 3D printing for tissue engineering and biomaterial science. In this study, we printed poly(lactic-co-glycolic acid) (PLGA) utilizing a direct melt extrusion technique without additional ingredients. We investigated PLGA with various lactic acid: glycolic acid (LA:GA) molecular weight ratios and end caps to demonstrate the dependence of the extrusion process on the polymer composition. Micro-computed tomography was then used to evaluate printed scaffolds containing different LA:GA ratios, composed of different fiber patterns, and processed under different printing conditions. We built a statistical model to reveal the correlation and predominant factors that determine printing precision. Our model showed a strong linear relationship between the actual and predicted precision under different combinations of printing conditions and material compositions. This quantitative examination establishes a significant foreground to 3D print biomaterials following a systematic fabrication procedure. Additionally, our proposed statistical models can be applied to couple specific biomaterials and 3D printing applications for patient implants with particular requirements. PMID:28244880

  18. 3D printing PLGA: a quantitative examination of the effects of polymer composition and printing parameters on print resolution.

    PubMed

    Guo, Ting; Holzberg, Timothy R; Lim, Casey G; Gao, Feng; Gargava, Ankit; Trachtenberg, Jordan E; Mikos, Antonios G; Fisher, John P

    2017-04-12

    In the past few decades, 3D printing has played a significant role in fabricating scaffolds with consistent, complex structure that meet patient-specific needs in future clinical applications. Although many studies have contributed to this emerging field of additive manufacturing, which includes material development and computer-aided scaffold design, current quantitative analyses do not correlate material properties, printing parameters, and printing outcomes to a great extent. A model that correlates these properties has tremendous potential to standardize 3D printing for tissue engineering and biomaterial science. In this study, we printed poly(lactic-co-glycolic acid) (PLGA) utilizing a direct melt extrusion technique without additional ingredients. We investigated PLGA with various lactic acid:glycolic acid (LA:GA) molecular weight ratios and end caps to demonstrate the dependence of the extrusion process on the polymer composition. Micro-computed tomography was then used to evaluate printed scaffolds containing different LA:GA ratios, composed of different fiber patterns, and processed under different printing conditions. We built a statistical model to reveal the correlation and predominant factors that determine printing precision. Our model showed a strong linear relationship between the actual and predicted precision under different combinations of printing conditions and material compositions. This quantitative examination establishes a significant foreground to 3D print biomaterials following a systematic fabrication procedure. Additionally, our proposed statistical models can be applied to couple specific biomaterials and 3D printing applications for patient implants with particular requirements.
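    The strong linear relationship between actual and predicted precision described above can be illustrated numerically. This is a toy sketch with synthetic data — the precision values, noise level, and coefficients below are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical illustration: fit a linear model relating predicted print
# precision (derived from material composition and printing parameters)
# to measured precision, mirroring the paper's actual-vs-predicted check.
rng = np.random.default_rng(0)

predicted = rng.uniform(50, 200, size=30)           # predicted fiber precision (um)
actual = 1.05 * predicted + rng.normal(0, 5, 30)    # measured precision with noise

# Least-squares fit: actual ~ slope * predicted + intercept
A = np.vstack([predicted, np.ones_like(predicted)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, actual, rcond=None)

# Coefficient of determination for the linear relationship
residuals = actual - (slope * predicted + intercept)
r2 = 1 - residuals.var() / actual.var()
print(round(slope, 2), round(r2, 3))
```

    A slope near 1 with high R^2 is what "strong linear relationship between actual and predicted precision" amounts to in practice.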

  19. Testing a Coupled Global-limited-area Data Assimilation System using Observations from the 2004 Pacific Typhoon Season

    NASA Astrophysics Data System (ADS)

    Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.

    2011-12-01

    Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited-area analysis/forecast system. This is the first time, to our knowledge, that such a system has been used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained by the Spectral Statistical Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks. The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts performed better in predicting position beyond 48 h.


  20. The attitudinal and cognitive effects of interdisciplinary collaboration on elementary pre-service teachers development of biological science related lesson plans

    NASA Astrophysics Data System (ADS)

    Mills, Jada Jamerson

    There is a need for STEM (science, technology, engineering, and mathematics) education to be taught effectively in elementary schools. In order to achieve this, teacher preparation programs should graduate confident, content-strong teachers who can convey knowledge to elementary students. This study used interdisciplinary collaboration between the School of Education and the College of Liberal Arts through a Learning-by-Teaching method (LdL: Lernen durch Lehren in German). It examined pre-service teacher (PST) achievement levels in understanding science concepts based on pretest and posttest data, the quality of the lesson plans developed, and enjoyment of the class based on the collaboration with science students. The PSTs enrolled in two treatment sections of EDEL 404: Science in the Elementary Classroom collaborated with science students enrolled in BISC 327: Introductory Neuroscience to enhance their science skills and create case-based lesson plans on neuroethology topics: echolocation, electrosensory reception, steroid hormones, and vocal learning. The PSTs enrolled in the single control section of EDEL 404 collaborated with fellow elementary education majors to develop lesson plans based on the same selected topics. Qualitative interviews of education faculty, science faculty, and PSTs provided depth to the quantitative findings. Upon lesson plan completion, in-service teachers graded the two best and two worst plans from the treatment and control sections, and a science reviewer graded the plans for scientific accuracy. Statistical analyses were conducted for each hypothesis; one significant finding was that PSTs who collaborated with science students had more positive attitudes toward science lesson plan writing than those who did not. Although the remaining statistical analyses were non-significant, all PSTs reported greater confidence after collaboration. Additionally, interviews provided meaning and context for the non-significant statistical results as well as for the scientific accuracy of the lesson plans.

  1. Exposure to negative acts and risk of turnover: a study of a register-based outcome among employees in three occupational groups.

    PubMed

    Clausen, Thomas; Hansen, Jørgen V; Hogh, Annie; Garde, Anne Helene; Persson, Roger; Conway, Paul Maurice; Grynderup, Matias; Hansen, Åse Marie; Rugulies, Reiner

    2016-11-01

    To investigate whether self-reported exposure to negative acts in the workplace (bullying and threats of violence) predicted turnover in three occupational groups (human service and sales workers, office workers and manual workers). Survey data on 2766 respondents were combined with data from a national labour force register to assess turnover. Mixed effects logistic regression analysis was used to examine the association between self-reported exposure to negative acts at baseline and risk of turnover after a 1-year follow-up. We found no significant associations between exposure to negative acts (bullying and threats of violence) and risk of turnover. When participants were stratified by occupational group and analyses were adjusted for age, gender, tenure and psychosocial working conditions, we found that exposure to bullying predicted risk of turnover in office workers (OR 2.03, 95% CI 1.05-3.90), but neither in human service and sales workers, nor in manual workers. The association in office workers lost statistical significance when additionally adjusted for depressive symptoms (OR 1.77, 95% CI 0.90-3.49). However, in a sensitivity analysis in which we used a 2-year (instead of a 1-year) follow-up period, the association between bullying and turnover remained statistically significant in office workers even after adjusting for depressive symptoms (OR 2.10, 95% CI 1.17-3.76). We found no statistically significant associations between threats of violence and risk of turnover in the stratified analyses. Exposure to bullying predicted risk of turnover among office workers but not among human service and sales workers or among manual workers. Threats of violence were not associated with turnover in any occupational group.
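    The odds ratios and confidence intervals quoted above are derived from regression coefficients on the log-odds scale. A minimal sketch of that conversion, using an illustrative coefficient and standard error chosen to roughly reproduce the reported office-worker estimate (these two input numbers are assumptions, not the study's raw output):

```python
import math

# Convert a logistic-regression coefficient (log-odds) and its standard
# error into an odds ratio with a 95% confidence interval.
beta, se = 0.708, 0.335   # illustrative values only

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)

# Statistically significant at p < 0.05 iff the CI excludes 1.0
significant = ci_low > 1.0 or ci_high < 1.0
print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2), significant)
```

    This also shows why the abstract's OR 1.77 (95% CI 0.90-3.49) is described as losing significance: its interval spans 1.0.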

  2. HAPRAP: a haplotype-based iterative method for statistical fine mapping using GWAS summary statistics.

    PubMed

    Zheng, Jie; Rodriguez, Santiago; Laurin, Charles; Baird, Denis; Trela-Larsen, Lea; Erzurumluoglu, Mesut A; Zheng, Yi; White, Jon; Giambartolomei, Claudia; Zabaneh, Delilah; Morris, Richard; Kumari, Meena; Casas, Juan P; Hingorani, Aroon D; Evans, David M; Gaunt, Tom R; Day, Ian N M

    2017-01-01

    Fine mapping is a widely used approach for identifying the causal variant(s) at disease-associated loci. Standard methods (e.g. multiple regression) require individual-level genotypes. Recent fine mapping methods using summary-level data require the pairwise correlation coefficients (r²) of the variants. However, haplotypes, rather than pairwise r², are the true biological representation of linkage disequilibrium (LD) among multiple loci. In this article, we present an empirical iterative method, HAPlotype Regional Association analysis Program (HAPRAP), that enables fine mapping using summary statistics and haplotype information from an individual-level reference panel. Simulations with individual-level genotypes show that the results of HAPRAP and multiple regression are highly consistent. In simulations with summary-level data, we demonstrate that HAPRAP is less sensitive to poor LD estimates. In a parametric simulation using Genetic Investigation of ANthropometric Traits height data, HAPRAP performs well with a small training sample size (N < 2000) while other methods become suboptimal. Moreover, HAPRAP's performance is not affected substantially by single nucleotide polymorphisms (SNPs) with low minor allele frequencies. We applied the method to existing quantitative trait and binary outcome meta-analyses (human height, QTc interval and gallbladder disease); all previously reported association signals were replicated and two additional variants were independently associated with human height. Due to the growing availability of summary-level data, the value of HAPRAP is likely to increase markedly for future analyses (e.g. functional prediction and identification of instruments for Mendelian randomization). The HAPRAP package and documentation are available at http://apps.biocompute.org.uk/haprap/. Contact: jie.zheng@bristol.ac.uk or tom.gaunt@bristol.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
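    The pairwise LD quantity that summary-level fine-mapping methods consume can be computed directly from phased haplotypes. A minimal sketch of that generic LD arithmetic (this illustrates the r² input, not HAPRAP's own algorithm; the data are invented):

```python
import numpy as np

def pairwise_r2(hap_a, hap_b):
    """LD between two SNPs: r^2 = D^2 / (pA(1-pA) pB(1-pB)), D = P(AB) - pA*pB.

    hap_a, hap_b: allele codes (0/1) for the same phased haplotypes.
    """
    hap_a = np.asarray(hap_a, dtype=float)
    hap_b = np.asarray(hap_b, dtype=float)
    p_a, p_b = hap_a.mean(), hap_b.mean()
    p_ab = (hap_a * hap_b).mean()     # haplotype frequency of the 1-1 combination
    d = p_ab - p_a * p_b              # coefficient of linkage disequilibrium
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Two SNPs in perfect LD across 8 haplotypes
snp1 = [1, 1, 0, 0, 1, 0, 1, 0]
snp2 = [1, 1, 0, 0, 1, 0, 1, 0]
print(pairwise_r2(snp1, snp2))  # → 1.0
```

    Estimating r² from a small reference panel rather than the study sample is exactly where the abstract notes that "poor LD estimates" can mislead methods that rely on pairwise r² alone.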

  3. Preliminary testing of flow-ecology hypotheses developed for the GCP LCC region

    USGS Publications Warehouse

    Brewer, Shannon K.; Davis, Mary

    2014-01-01

    The Ecological Limits of Hydrological Alteration (ELOHA) framework calls for the development of flow-ecology hypotheses to support protection of the flow regime from ecologically harmful alteration due to human activities. As part of a larger instream flow project for the Gulf Coast Prairie Landscape Conservation Cooperative (GCP LCC), regional flow-ecology hypotheses were developed for fish, mussels, birds, and riparian vegetation (Davis and Brewer 2014). The objective of this study was to assess the usefulness of existing ecological and hydrological data to test these hypotheses or others that may be developed in the future. Several databases related to biological collections and hydrologic data from Oklahoma, Texas, and Louisiana were compiled. State fish-community data from Oklahoma and Louisiana were summarized and paired with existing USGS gage data having at least a 40-year period of record that could be separated into reference and current conditions for comparison. Our intent was not to conduct exhaustive analyses of these data, the hypotheses, or their interpretation, but rather to determine whether existing data were adequate to statistically test the regional flow-ecology hypotheses. The regional flow-ecology hypotheses were developed for the GCP LCC by a committee chaired by Shannon Brewer and Mary Davis (Davis and Brewer 2014). Existing data were useful for informing the hypotheses and suggest support for some hypotheses, but also highlight the need for additional testing and development, as some results contradicted hypotheses. Results presented here suggest existing data are adequate to support some flow-ecology hypotheses; however, the lack of sampling effort reported with the fish collections and the need for ecoregion-specific analyses suggest more data would be beneficial to analyses in some ecoregions.
Additional fish sampling data from Texas and Louisiana will be available for future analyses and may ameliorate some of the data concerns and improve hypothesis interpretation. If the regional hydrologic model currently under development by the U.S. Geological Survey for the South-Central Climate Science Center is improved to produce daily hydrographs, it will enable use of fish data at ungaged locations. In future efforts, exhaustive analyses using these data, in addition to the development of more complex multivariate hypotheses, would be beneficial to understanding data gaps, particularly as relevant to species of conservation concern.

  4. Statistical analysis of Thematic Mapper Simulator data for the geobotanical discrimination of rock types in southwest Oregon

    NASA Technical Reports Server (NTRS)

    Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.

    1984-01-01

    An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.

  5. A multi-state model for sick-leave data applied to a randomized control trial study of low back pain.

    PubMed

    Lie, Stein Atle; Eriksen, Hege R; Ursin, Holger; Hagen, Eli Molde

    2008-05-01

    Analysing and presenting data on different outcomes after sick-leave is challenging. The use of extended statistical methods supplies additional information and allows further exploitation of data. Four hundred and fifty-seven patients, sick-listed for 8-12 weeks for low back pain, were randomized to intervention (n=237) or control (n=220). Outcome was measured as: "sick-listed", "returned to work", or "disability pension". The individuals shifted between the three states between one and 22 times (mean 6.4 times). In a multi-state model, shifting between the states was set up in a transition intensity matrix. The probability of being in any of the states was calculated as a transition probability matrix. The effects of the intervention were modelled using a non-parametric model. There was an effect of the intervention for leaving the state sick-listed and shifting to returned to work (relative risk (RR)=1.27, 95% confidence interval (CI) 1.09-1.47). The non-parametric estimates showed an effect of the intervention for leaving sick-listed and shifting to returned to work in the first 6 months. We found a protective effect of the intervention for shifting back to sick-listed between 6 and 18 months. The analyses showed that the probability of staying in the state returned to work was not different between the intervention and control groups at the end of the follow-up (3 years). We demonstrate that these alternative analyses give additional results and increase the strength of the analyses. The simple intervention did not decrease the probability of being on sick-leave in the long term; however, it decreased the time that individuals were on sick-leave.
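    The multi-state machinery described above can be illustrated with a toy discrete-time analogue: a transition probability matrix over the three states propagates the probability of occupying each state over the follow-up. All numbers below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# States: sick-listed (S), returned to work (W), disability pension (D).
# Rows are "from" states; each row of the monthly transition matrix sums to 1.
P = np.array([
    [0.80, 0.18, 0.02],   # from sick-listed
    [0.10, 0.90, 0.00],   # from returned to work (can relapse to sick-listed)
    [0.00, 0.00, 1.00],   # disability pension treated as absorbing here
])

occupancy = np.array([1.0, 0.0, 0.0])  # everyone starts sick-listed
for _ in range(36):                    # 3-year follow-up in monthly steps
    occupancy = occupancy @ P          # propagate state probabilities

print(occupancy.round(3))  # P(sick-listed), P(returned to work), P(pension)
```

    The continuous-time version in the paper works the same way conceptually, with a transition intensity matrix in place of the discrete-step probability matrix.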

  6. Selected Water-Quality Data from the Cedar River and Cedar Rapids Well Fields, Cedar Rapids, Iowa, 1999-2005

    USGS Publications Warehouse

    Littin, Gregory R.; Schnoebelen, Douglas J.

    2010-01-01

    The Cedar River alluvial aquifer is the primary source of municipal water in the Cedar Rapids, Iowa area. Municipal wells are completed in the alluvial aquifer at approximately 40 to 80 feet deep. The City of Cedar Rapids and the U.S. Geological Survey have been conducting a cooperative study of the groundwater-flow system and water quality near the well fields since 1992. Previous cooperative studies between the City of Cedar Rapids and the U.S. Geological Survey have documented hydrologic and water-quality data, geochemistry, and groundwater models. Water-quality samples were collected for studies involving well field monitoring, trends, source-water protection, groundwater geochemistry, evaluation of surface and ground-water interaction, assessment of pesticides in groundwater and surface water, and to evaluate water quality near a wetland area in the Seminole well field. Typical water-quality analyses included major ions (boron, bromide, calcium, chloride, fluoride, iron, magnesium, manganese, potassium, silica, sodium, and sulfate), nutrients (ammonia as nitrogen, nitrite as nitrogen, nitrite plus nitrate as nitrogen, and orthophosphate as phosphorus), dissolved organic carbon, and selected pesticides including two degradates of the herbicide atrazine. In addition, two synoptic samplings included analyses of additional pesticide degradates in water samples. Physical field parameters (alkalinity, dissolved oxygen, pH, specific conductance and water temperature) were recorded with each water sample collected. This report presents the results of water quality data-collection activities from January 1999 through December 2005. Methods of data collection, quality-assurance samples, water-quality analyses, and statistical summaries are presented. Data include the results of water-quality analyses from quarterly and synoptic sampling from monitoring wells, municipal wells, and the Cedar River.

  7. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  8. Frustration in the pattern formation of polysyllabic words

    NASA Astrophysics Data System (ADS)

    Hayata, Kazuya

    2016-12-01

    A novel frustrated system is given for the analysis of (m + 1)-syllabled vocal sounds for languages with the m-vowel system, where the varieties of vowels are assumed to be m (m > 2). The necessary and sufficient condition for observing the sound frustration is that the configuration of m vowels in an m-syllabled word has a preference for the ‘repulsive’ type, in which there is no duplication of an identical vowel. For languages that meet this requirement, no (m + 1)-syllabled word can in principle select the present type because at most m different vowels are available and consequently the duplicated use of an identical vowel is inevitable. For languages showing a preference for the ‘attractive’ type, where an identical vowel aggregates in a word, there arises no such conflict. In this paper, we first elucidate for Arabic with m = 3 how to deal with the conflicting situation, where a statistical approach based on the chi-square testing is employed. In addition to the conventional three-vowel system, analyses are made also for Russian, where a polysyllabic word contains both a stressed and an indeterminate vowel. Through the statistical analyses the selection scheme for quadrisyllabic configurations is found to be strongly dependent on the parts of speech as well as the gender of nouns. In order to emphasize the relevance to the sound model of binary oppositions, analyzed results of Greek verbs are also given.
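    The chi-square approach described above can be sketched for a three-vowel system: compare observed counts of 'repulsive' (all vowels distinct) versus 'attractive' (some vowel repeated) trisyllabic configurations against what independent vowel placement would predict. The counts below are hypothetical:

```python
# Hypothetical corpus counts of trisyllabic words by configuration type
observed = {"repulsive": 310, "attractive": 190}

# Under independent placement in a 3-vowel system,
# P(all three syllables use distinct vowels) = 3!/3^3 = 6/27
p_repulsive = 6 / 27
n = sum(observed.values())
expected = {"repulsive": n * p_repulsive, "attractive": n * (1 - p_repulsive)}

# Pearson chi-square goodness-of-fit statistic
chi2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)

# With 2 categories there is 1 degree of freedom; 3.84 is the 5% critical value
print(round(chi2, 1), chi2 > 3.84)
```

    A significant statistic with an excess of distinct-vowel words is what the abstract calls a preference for the 'repulsive' type, which quadrisyllabic words in a three-vowel system cannot satisfy.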

  9. Associations of indoor carbon dioxide concentrations and environmental susceptibilities with mucous membrane and lower respiratory building related symptoms in the BASE study: Analyses of the 100 building dataset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erdmann, Christine A.; Apte, Michael G.

    Using the US EPA 100 office-building BASE Study dataset, the authors conducted multivariate logistic regression analyses to quantify the relationship between indoor CO2 concentrations (dCO2) and mucous membrane (MM) and lower respiratory system (LResp) building-related symptoms, adjusting for age, sex, smoking status, presence of carpet in workspace, thermal exposure, relative humidity, and a marker for entrained automobile exhaust. In addition, they tested the hypothesis that certain environmentally mediated health conditions (e.g., allergies and asthma) confer increased susceptibility to building-related symptoms within office buildings. Adjusted odds ratios (ORs) for statistically significant, dose-dependent associations (p < 0.05) for dry eyes, sore throat, nose/sinus congestion, and wheeze symptoms with 100 ppm increases in dCO2 ranged from 1.1 to 1.2. These results suggest that increases in the ventilation rates per person among typical office buildings will, on average, reduce the prevalence of several building-related symptoms by up to 70%, even when these buildings meet the existing ASHRAE ventilation standards for office buildings. Building occupants with certain environmentally mediated health conditions are more likely to experience building-related symptoms than those without these conditions (statistically significant ORs ranged from 2 to 11).
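    Because the reported dose-response is an odds ratio per 100 ppm of dCO2, the effect of a larger ventilation-driven change multiplies on the log-odds scale. A back-of-envelope sketch with illustrative numbers (the chosen OR and ppm change are assumptions, not re-derived from the BASE dataset):

```python
import math

or_per_100ppm = 1.2   # upper end of the reported per-100-ppm OR range
delta_ppm = -400      # e.g. ventilation lowers dCO2 by 400 ppm

# Log-odds scale linearly with dose, so per-step ORs compound:
# log(OR_total) = (delta/100) * log(OR_per_100ppm)
or_total = or_per_100ppm ** (delta_ppm / 100)
print(round(or_total, 2))  # combined odds ratio for the full 400 ppm drop
```

    Compounding a modest per-100-ppm OR over a few hundred ppm is how a seemingly small dose-response translates into the substantial symptom-prevalence reductions the abstract describes.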

  10. Association of Personality Traits with Elder Self-Neglect in a Community Dwelling Population

    PubMed Central

    Dong, XinQi; Simon, Melissa; Wilson, Robert; Beck, Todd; McKinell, Kelly; Evans, Denis

    2010-01-01

    Objective Elder self-neglect is an important public health issue. However, little is known about the association between personality traits and risk of elder self-neglect among community-dwelling populations. The objectives of this study are: 1) to examine the association of personality traits with elder self-neglect and 2) to examine the association of personality traits with elder self-neglect severity. Methods Population-based study conducted from 1993–2005 of community-dwelling older adults (N=9,056) participating in the Chicago Health Aging Project (CHAP). Subsets of the CHAP participants (N=1,820) were identified for suspected self-neglect by social services agency, which assessed the severity. Personality traits assessed included neuroticism, extraversion, rigidity and information processing. Logistic and linear regressions were used to assess these associations. Results In the bivariate analyses, personality traits (neuroticism, extraversion, information processing, and rigidity) were significantly associated with increased risk of elder self-neglect. However, after adjusting for potential confounders, the above associations were no longer statistically significant. In addition, personality traits were not associated with increased risk of greater self-neglect severity. Furthermore, interaction term analyses of personality traits with health and psychosocial factors were not statistically significant with elder self-neglect outcomes. Conclusion Neuroticism, extraversion, rigidity and information processing were not associated with significantly increased risk of elder self-neglect after consideration of potential confounders. PMID:21788924

  11. Elevated expression of LSD1 (Lysine-specific demethylase 1) during tumour progression from pre-invasive to invasive ductal carcinoma of the breast

    PubMed Central

    2012-01-01

    Background Lysine-specific demethylase 1 (LSD1) is a nuclear protein which belongs to the aminooxidase family of enzymes and plays an important role in controlling gene expression. It has also been found highly expressed in several human malignancies including breast carcinoma. Our aim was to detect LSD1 expression also in pre-invasive neoplasias of the breast. In the current study we therefore analysed LSD1 protein expression in ductal carcinoma in situ (DCIS) in comparison to invasive ductal breast cancer (IDC). Methods Using immunohistochemistry we systematically analysed LSD1 expression in low grade DCIS (n = 27), intermediate grade DCIS (n = 30), high grade DCIS (n = 31) and in invasive ductal breast cancer (n = 32). SPSS version 18.0 was used for statistical analysis. Results LSD1 was differentially expressed in DCIS and invasive ductal breast cancer. Interestingly, LSD1 was significantly overexpressed in high grade DCIS versus low grade DCIS. Differences in LSD1 expression levels were also statistically significant between low/intermediate DCIS and invasive ductal breast carcinoma. Conclusions LSD1 is also expressed in pre-invasive neoplasias of the breast. Additionally, there is a gradual increase of LSD1 expression within tumour progression from pre-invasive DCIS to invasive ductal breast carcinoma. Therefore upregulation of LSD1 may be an early tumour promoting event. PMID:22920283

  12. Elevated expression of LSD1 (Lysine-specific demethylase 1) during tumour progression from pre-invasive to invasive ductal carcinoma of the breast.

    PubMed

    Serce, Nuran; Gnatzy, Annette; Steiner, Susanne; Lorenzen, Henning; Kirfel, Jutta; Buettner, Reinhard

    2012-08-24

    Lysine-specific demethylase 1 (LSD1) is a nuclear protein which belongs to the aminooxidase family of enzymes and plays an important role in controlling gene expression. It has also been found highly expressed in several human malignancies including breast carcinoma. Our aim was to detect LSD1 expression also in pre-invasive neoplasias of the breast. In the current study we therefore analysed LSD1 protein expression in ductal carcinoma in situ (DCIS) in comparison to invasive ductal breast cancer (IDC). Using immunohistochemistry we systematically analysed LSD1 expression in low grade DCIS (n = 27), intermediate grade DCIS (n = 30), high grade DCIS (n = 31) and in invasive ductal breast cancer (n = 32). SPSS version 18.0 was used for statistical analysis. LSD1 was differentially expressed in DCIS and invasive ductal breast cancer. Interestingly, LSD1 was significantly overexpressed in high grade DCIS versus low grade DCIS. Differences in LSD1 expression levels were also statistically significant between low/intermediate DCIS and invasive ductal breast carcinoma. LSD1 is also expressed in pre-invasive neoplasias of the breast. Additionally, there is a gradual increase of LSD1 expression within tumour progression from pre-invasive DCIS to invasive ductal breast carcinoma. Therefore upregulation of LSD1 may be an early tumour promoting event.

  13. MetalPDB in 2018: a database of metal sites in biological macromolecular structures.

    PubMed

    Putignano, Valeria; Rosato, Antonio; Banci, Lucia; Andreini, Claudia

    2018-01-04

    MetalPDB (http://metalweb.cerm.unifi.it/) is a database providing information on metal-binding sites detected in the three-dimensional (3D) structures of biological macromolecules. MetalPDB represents such sites as 3D templates, called Minimal Functional Sites (MFSs), which describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. The 2018 update of MetalPDB includes new contents and tools. A major extension is the inclusion of proteins whose structures do not contain metal ions although their sequences potentially contain a known MFS. In addition, MetalPDB now provides extensive statistical analyses addressing several aspects of general metal usage within the PDB, across protein families and in catalysis. Users can also query MetalPDB to extract statistical information on structural aspects associated with individual metals, such as preferred coordination geometries or aminoacidic environment. A further major improvement is the functional annotation of MFSs; the annotation is manually performed via a password-protected annotator interface. At present, ∼50% of all MFSs have such a functional annotation. Other noteworthy improvements are bulk query functionality, through the upload of a list of PDB identifiers, and ftp access to MetalPDB contents, allowing users to carry out in-depth analyses on their own computational infrastructure. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Monitoring the quality consistency of Weibizhi tablets by micellar electrokinetic chromatography fingerprints combined with multivariate statistical analyses, the simple quantified ratio fingerprint method, and the fingerprint-efficacy relationship.

    PubMed

    Liu, Yingchun; Sun, Guoxiang; Wang, Yan; Yang, Lanping; Yang, Fangliang

    2015-06-01

    Micellar electrokinetic chromatography fingerprinting combined with quantification was successfully developed and applied to monitor the quality consistency of Weibizhi tablets, which is a classical compound preparation used to treat gastric ulcers. A background electrolyte composed of 57 mmol/L sodium borate, 21 mmol/L sodium dodecylsulfate and 100 mmol/L sodium hydroxide was used to separate compounds. To optimize capillary electrophoresis conditions, multivariate statistical analyses were applied. First, the most important factors influencing sample electrophoretic behavior were identified as background electrolyte concentrations. Then, a Box-Behnken design response surface strategy using resolution index RF as an integrated response was set up to correlate factors with response. RF reflects the effective signal amount, resolution, and signal homogenization in an electropherogram; thus, it was regarded as an excellent indicator. In fingerprint assessments, the simple quantified ratio fingerprint method was established for comprehensive quality discrimination of traditional Chinese medicines/herbal medicines from qualitative and quantitative perspectives, by which the quality of 27 samples from the same manufacturer was well differentiated. In addition, the fingerprint-efficacy relationship between fingerprints and antioxidant activities was established using partial least squares regression, which provided important medicinal efficacy information for quality control. The present study offered an efficient means for monitoring Weibizhi tablet quality consistency. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Appraisal of data for ground-water quality in Nebraska

    USGS Publications Warehouse

    Engberg, R.A.

    1984-01-01

    This report summarizes existing data for groundwater quality in Nebraska and indicates their adequacy as a data base. Analyses have been made of water from nearly 10,000 wells by 8 agencies. Those analyses that meet reliability criteria have been aggregated by geologic source of water into four principal aquifer groupings--Holocene-Pleistocene aquifers, Tertiary aquifers, Mesozoic aquifers, and Paleozoic aquifers. For each aquifer grouping, data for specific conductance and 24 constituents in the water are summarized statistically. Also, diagrams are presented showing differences in statistical parameters, or in chemical composition, of water from the different aquifer groupings. Additionally, for each grouping except Paleozoic aquifers, maps show ranges in concentration of dissolved solids, calcium, alkalinity, and sulfate. Areas where data are insufficient to delimit ranges in concentration also are shown on the maps. Point-source contamination has been identified at 41 locations and nonpoint-source contamination in 3 areas, namely, the central Platte Valley, Holt County, and Boyd County. Potential for nonpoint-source contamination exists in 10 major areas, which together comprise more than one-third of the State. Existing data are mostly from specific projects having limited areas and objectives. Consequently, a lack of data exists for other areas and for certain geologic units, particularly the Mesozoic and Paleozoic aquifers. Specific data needs for each of the four principal aquifer groupings are indicated in a matrix table.

  16. Web-TCGA: an online platform for integrated analysis of molecular cancer data sets.

    PubMed

    Deng, Mario; Brägelmann, Johannes; Schultze, Joachim L; Perner, Sven

    2016-02-06

    The Cancer Genome Atlas (TCGA) is a pool of molecular data sets publicly accessible and freely available to cancer researchers anywhere around the world. However, widespread use is limited since an advanced knowledge of statistics and statistical software is required. In order to improve accessibility we created Web-TCGA, a web-based, freely accessible online tool, which can also be run in a private instance, for integrated analysis of molecular cancer data sets provided by TCGA. In contrast to already available tools, Web-TCGA utilizes different methods for analysis and visualization of TCGA data, allowing users to generate global molecular profiles across different cancer entities simultaneously. In addition to global molecular profiles, Web-TCGA offers highly detailed gene and tumor entity centric analysis by providing interactive tables and views. As a supplement to other already available tools, such as cBioPortal (Sci Signal 6:pl1, 2013, Cancer Discov 2:401-4, 2012), Web-TCGA offers an analysis service, which does not require any installation or configuration, for molecular data sets available at the TCGA. Individual processing requests (queries) are generated by the user for mutation, methylation, expression and copy number variation (CNV) analyses. The user can focus analyses on results from single genes and cancer entities or perform a global analysis (multiple cancer entities and genes simultaneously).

  17. Reframing Serial Murder Within Empirical Research.

    PubMed

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  18. On the structure of personality disorder traits: conjoint analyses of the CAT-PD, PID-5, and NEO-PI-3 trait models.

    PubMed

    Wright, Aidan G C; Simms, Leonard J

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (PID-5) and the NEO Personality Inventory-3 First Half (NEO-PI-3FH), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (n = 628; 64% female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms.

  19. [Continuity of hospital identifiers in hospital discharge data - Analysis of the nationwide German DRG Statistics from 2005 to 2013].

    PubMed

    Nimptsch, Ulrike; Wengler, Annelene; Mansky, Thomas

    2016-11-01

    In Germany, nationwide hospital discharge data (DRG statistics provided by the research data centers of the Federal Statistical Office and the Statistical Offices of the 'Länder') are increasingly used as a data source for health services research. Within these data, hospitals can be separated via their hospital identifier ([Institutionskennzeichen] IK). However, this hospital identifier primarily designates the invoicing unit and is not necessarily equivalent to one hospital location. Aiming to investigate the direction and extent of possible bias in hospital-level analyses, this study examines the continuity of the hospital identifier within a cross-sectional and longitudinal approach and compares the results to official hospital census statistics. Within the DRG statistics from 2005 to 2013 the annual number of hospitals as classified by hospital identifiers was counted for each year of observation. The annual number of hospitals derived from DRG statistics was compared to the number of hospitals in the official census statistics 'Grunddaten der Krankenhäuser'. Subsequently, the temporal continuity of hospital identifiers in the DRG statistics was analyzed within cohorts of hospitals. Until 2013, the annual number of hospital identifiers in the DRG statistics fell by 175 (from 1,725 to 1,550). This decline affected only providers with small or medium case volume. The number of hospitals identified in the DRG statistics was lower than the number given in the census statistics (e.g., in 2013: 1,550 IK vs. 1,668 hospitals in the census statistics). The longitudinal analyses revealed that the majority of hospital identifiers persisted in the years of observation, while one fifth of hospital identifiers changed. In cross-sectional studies of German hospital discharge data the separation of hospitals via the hospital identifier might lead to underestimating the number of hospitals and consequential overestimation of caseload per hospital.
Discontinuities of hospital identifiers over time might impair the follow-up of hospital cohorts. These limitations must be taken into account in analyses of German hospital discharge data focusing on the hospital level. Copyright © 2016. Published by Elsevier GmbH.

  20. The Effects of Child Abuse and Exposure to Domestic Violence on Adolescent Internalizing and Externalizing Behavior Problems.

    PubMed

    Moylan, Carrie A; Herrenkohl, Todd I; Sousa, Cindy; Tajima, Emiko A; Herrenkohl, Roy C; Russo, M Jean

    2010-01-01

    This study examines the effects of child abuse and domestic violence exposure in childhood on adolescent internalizing and externalizing behaviors. Data for this analysis are from the Lehigh Longitudinal Study, a prospective study of 457 youth addressing outcomes of family violence and resilience in individuals and families. Results show that child abuse, domestic violence, and both in combination (i.e., dual exposure) increase a child's risk for internalizing and externalizing outcomes in adolescence. When accounting for risk factors associated with additional stressors in the family and surrounding environment, only those children with dual exposure had an elevated risk of the tested outcomes compared to non-exposed youth. However, while there were some observable differences in the prediction of outcomes for children with dual exposure compared to those with single exposure (i.e., abuse only or exposure to domestic violence only), these differences were not statistically significant. Analyses showed that the effects of exposure for boys and girls are statistically comparable.

  1. The Dysexecutive Questionnaire advanced: item and test score characteristics, 4-factor solution, and severity classification.

    PubMed

    Bodenburg, Sebastian; Dopslaff, Nina

    2008-01-01

    The Dysexecutive Questionnaire (DEX; Behavioral Assessment of the Dysexecutive Syndrome, 1996) is a standardized instrument to measure possible behavioral changes as a result of the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has also been used increasingly to address quantitative problems. Until now there have been no more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports on data relating to the quality of the items, the reliability and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the statistical values of the tests, a clinical severity classification is provided for the overall scores of the four factors found and for the questionnaire as a whole, on the basis of quartile standards.

  2. The shape of CMB temperature and polarization peaks on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcos-Caballero, A.; Fernández-Cobos, R.; Martínez-González, E.

    2016-04-01

    We present a theoretical study of CMB temperature peaks, including their effect on the polarization field, and allowing nonzero eccentricity. The formalism is developed in harmonic space and using the covariant derivative on the sphere, which guarantees that the expressions obtained are completely valid at large scales (i.e., no flat approximation). The expected patterns induced by the peak, either in temperature or polarization, are calculated, as well as their covariances. It is found that the eccentricity introduces a quadrupolar dependence in the peak shape, which is proportional to a complex bias parameter b_ε, characterizing the peak asymmetry and orientation. In addition, the one-point statistics of the variables defining the peak on the sphere is reviewed, finding some differences with respect to the flat case for large peaks. Finally, we present a mechanism to simulate constrained CMB maps with a particular peak on the field, which is an interesting tool for analysing the statistical properties of the peaks present in the data.

  3. Application of Bingham statistics to a paleopole data set: Towards a better definition of APWP trends?

    NASA Astrophysics Data System (ADS)

    Cederquist, D. P.; Mac Niocaill, C.; Van der Voo, R.

    1997-01-01

    Bingham statistical analyses were applied to paleomagnetic data from 50 published studies from North America, of Carboniferous through Early Jurassic age, in an attempt to test whether the azimuths of the long axes of the Bingham ellipses lie tangent to the apparent polar wander path. The underlying assumption is that paleomagnetic directions will form a Fisherian (circular) distribution if no apparent polar wander has taken place during magnetization acquisition. However, the distribution should appear elongated (elliptical) if magnetization acquisition occurred over a significant amount of time involving apparent polar wander. The long axes in direction space yield corresponding azimuths in paleopole space, which can be compared to the North American APWP. We find that, generally, these azimuths are indeed sub-parallel to the APWP, validating the methods and the hypothesis. Plotting a pole as an azimuthal chord, representing the long axis of the ellipse, will provide additional robustness or definition to an APWP based upon temporally sparse paleomagnetic studies.

  4. Scheduler software for tracking and data relay satellite system loading analysis: User manual and programmer guide

    NASA Technical Reports Server (NTRS)

    Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.

    1980-01-01

    A user guide and programmer documentation are provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses on the Tracking and Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss of sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.

  5. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  6. The effects of acupuncture on rates of clinical pregnancy among women undergoing in vitro fertilization: a systematic review and meta-analysis

    PubMed Central

    Manheimer, Eric; van der Windt, Daniëlle; Cheng, Ke; Stafford, Kristen; Liu, Jianping; Tierney, Jayne; Lao, Lixing; Berman, Brian M.; Langenberg, Patricia; Bouter, Lex M.

    2013-01-01

    BACKGROUND Recent systematic reviews of adjuvant acupuncture for IVF have pooled heterogeneous trials, without examining variables that might explain the heterogeneity. The aims of our meta-analysis were to quantify the overall pooled effects of adjuvant acupuncture on IVF clinical pregnancy success rates, and evaluate whether study design-, treatment- and population-related factors influence effect estimates. METHODS We included randomized controlled trials that compared needle acupuncture administered within 1 day of embryo transfer, versus sham acupuncture or no adjuvant treatment. Our primary outcome was clinical pregnancy rates. We obtained from all investigators additional methodological details and outcome data not included in their original publications. We analysed sham-controlled and no adjuvant treatment-controlled trials separately, but since there were no large or significant differences between these two subsets, we pooled all trials for subgroup analyses. We prespecified 11 subgroup variables (5 clinical and 6 methodological) to investigate sources of heterogeneity, using single covariate meta-regressions. RESULTS Sixteen trials (4021 participants) were included in the meta-analyses. There was no statistically significant difference between acupuncture and controls when combining all trials [risk ratio (RR) 1.12, 95% confidence interval (CI), 0.96–1.31; I² = 68%; 16 trials; 4021 participants], or when restricting to sham-controlled (RR 1.02, 0.83–1.26; I² = 66%; 7 trials; 2044 participants) or no adjuvant treatment-controlled trials (RR 1.22, 0.97–1.52; I² = 67%; 9 trials; 1977 participants). The type of control used did not significantly explain the statistical heterogeneity (interaction P = 0.27).
    Baseline pregnancy rate, measured as the observed rate of clinical pregnancy in the control group of each trial, was a statistically significant effect modifier (interaction P < 0.001), and this covariate explained most of the heterogeneity of the effects of adjuvant acupuncture across all trials (adjusted R² = 93%; I² residual = 9%). Trials with lower control group rates of clinical pregnancy showed larger effects of adjuvant acupuncture (RR 1.53, 1.28–1.84; 7 trials; 1732 participants) than trials with higher control group rates of clinical pregnancy (RR 0.90, 0.80–1.01; 9 trials; 2289 participants). The asymmetric funnel plot showed a tendency for the intervention effects to be more beneficial in smaller trials. CONCLUSIONS We found no pooled benefit of adjuvant acupuncture for IVF. The subgroup finding of a benefit in trials with lower, but not higher, baseline pregnancy rates (the only statistically significant subgroup finding in our earlier review) has been confirmed in this update, and was not explained by any confounding variables evaluated. However, this baseline pregnancy rate subgroup finding among published trials requires further confirmation and exploration in additional studies because of the multiple subgroup tests conducted, the risk of unidentified confounders, the multiple different factors that determine baseline rates, and the possibility of publication bias. PMID:23814102
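
    The pooling of risk ratios with an I² heterogeneity estimate described above can be sketched with the DerSimonian-Laird random-effects method, a common default for such meta-analyses (the review's exact pooling model is not restated here, and the trial counts below are invented for illustration, not the review's data):

```python
import math

def pool_risk_ratios(trials):
    """DerSimonian-Laird random-effects pooling of risk ratios.

    Each trial is (events_tx, n_tx, events_ctl, n_ctl).
    Returns (pooled RR, 95% CI low, 95% CI high, I-squared in %).
    """
    y, w = [], []  # log risk ratios and inverse-variance weights
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log RR
        y.append(log_rr)
        w.append(1 / var)

    # Fixed-effect estimate, Cochran's Q, and I-squared
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-trial variance tau^2, then random-effects weights
    c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_term)
    w_star = [1 / (1 / wi + tau2) for wi in w]
    pooled = sum(ws * yi for ws, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled), math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se), i2)

# Hypothetical trials: (pregnancies_acu, n_acu, pregnancies_ctl, n_ctl)
rr, lo, hi, i2 = pool_risk_ratios(
    [(40, 100, 30, 100), (55, 160, 50, 160), (25, 80, 28, 80)])
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```

A CI that straddles 1.0, as in the review's overall result, indicates no statistically significant pooled effect.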

  7. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.
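
    The Pearson chi-square test, the most frequent method in the articles reviewed above, reduces to comparing observed cell counts with the counts expected under independence. A minimal sketch (the 2x2 table below is invented for illustration, not data from the review):

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic for an r x c table of observed counts."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n  # count under independence
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical 2x2 table (e.g., complication yes/no by surgical technique):
print(round(pearson_chi_square([[10, 20], [20, 10]]), 3))  # → 6.667
```

The statistic is then compared to a chi-square distribution with (r-1)(c-1) degrees of freedom to obtain the P-value.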

  8. COGNATE: comparative gene annotation characterizer.

    PubMed

    Wilbrandt, Jeanne; Misof, Bernhard; Niehuis, Oliver

    2017-07-17

    The comparison of gene and genome structures across species has the potential to reveal major trends of genome evolution. However, such a comparative approach is currently hampered by a lack of standardization (e.g., Elliott TA, Gregory TR, Philos Trans Royal Soc B: Biol Sci 370:20140331, 2015). For example, testing the hypothesis that the total amount of coding sequences is a reliable measure of potential proteome diversity (Wang M, Kurland CG, Caetano-Anollés G, PNAS 108:11954, 2011) requires the application of standardized definitions of coding sequence and genes to create both comparable and comprehensive data sets and corresponding summary statistics. However, such standard definitions either do not exist or are not consistently applied. These circumstances call for a standard at the descriptive level using a minimum of parameters as well as an undeviating use of standardized terms, and for software that infers the required data under these strict definitions. The acquisition of a comprehensive, descriptive, and standardized set of parameters and summary statistics for genome publications and further analyses can thus greatly benefit from the availability of an easy to use standard tool. We developed a new open-source command-line tool, COGNATE (Comparative Gene Annotation Characterizer), which uses a given genome assembly and its annotation of protein-coding genes for a detailed description of the respective gene and genome structure parameters. Additionally, we revised the standard definitions of gene and genome structures and provide the definitions used by COGNATE as a working draft suggestion for further reference. Complete parameter lists and summary statistics are inferred using this set of definitions to allow down-stream analyses and to provide an overview of the genome and gene repertoire characteristics. 
COGNATE is written in Perl and freely available at the ZFMK homepage ( https://www.zfmk.de/en/COGNATE ) and on github ( https://github.com/ZFMK/COGNATE ). The tool COGNATE allows comparing genome assemblies and structural elements on multiple levels (e.g., scaffold or contig sequence, gene). It clearly enhances comparability between analyses. Thus, COGNATE can provide the important standardization of both genome and gene structure parameter disclosure as well as data acquisition for future comparative analyses. With the establishment of comprehensive descriptive standards and the extensive availability of genomes, an encompassing database will become possible.

  9. The Role of Gender in Neuropsychological Assessment in Healthy Adolescents.

    PubMed

    Mormile, Megan Elizabeth Evelyn; Langdon, Jody L; Hunt, Tamerah Nicole

    2018-01-01

    Research in college athletes has revealed significant gender differences in areas of verbal memory, visual memory, and reaction time. Additionally, research has focused on differences in neuropsychological components and gender in college populations; however, such differences in gender have not been documented in healthy adolescent populations. To identify potential differences between males and females using different components of a common computerized neuropsychological test. A computerized neuropsychological test battery (ImPACT®) was administered to 662 high-school age adolescent athletes (male: n = 451; female: n = 262). Differences between genders were calculated using a 1-way ANOVA. All statistical analyses were conducted using SPSS 23.0. Significance levels were set a priori at P < .05. A 1-way ANOVA revealed statistically significant differences between males and females for composite reaction time (F(1,660) = 10.68, P = .001) and total symptom score (F(1,660) = 81.20, P < .001). However, no statistically significant differences were found between males and females in composite verbal memory, visual memory, visual motor, or impulse control (P > .05). Significant differences between males and females were discovered for composite reaction time and total symptom scores, with females reporting more symptoms and slower reaction times at a baseline assessment. Increased symptom reporting by females may be attributed to both hormonal differences and increased honesty. Quicker reaction times in males may support theories that repetition of activities and quicker muscle contraction are gender dependent. However, additional research is necessary to understand gender differences in adolescent athletes during periods of cognitive and physical maturation.
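
    The 1-way ANOVA reported above (run in SPSS) is the ratio of between-group to within-group mean squares. A minimal sketch of that computation; the reaction times below are invented, not the study's data:

```python
def one_way_anova(*groups):
    """F statistic and degrees of freedom for a one-way ANOVA."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w

# Hypothetical composite reaction times (s) for two groups:
males = [0.58, 0.60, 0.57, 0.61]
females = [0.62, 0.64, 0.63, 0.66]
f, df_b, df_w = one_way_anova(males, females)
print(f"F({df_b},{df_w}) = {f:.2f}")
```

With two groups, this F equals the square of the two-sample t statistic, which is why a 1-way ANOVA and a t-test give the same P-value for a male/female comparison.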

  10. The correlation between relatives on the supposition of genomic imprinting.

    PubMed Central

    Spencer, Hamish G

    2002-01-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254

  11. The correlation between relatives on the supposition of genomic imprinting.

    PubMed

    Spencer, Hamish G

    2002-05-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight.

  12. Best Linear Unbiased Prediction (BLUP) for regional yield trials: a comparison to additive main effects and multiplicative interaction (AMMI) analysis.

    PubMed

    Piepho, H P

    1994-11-01

    Multilocation trials are often used to analyse the adaptability of genotypes in different environments and to find for each environment the genotype that is best adapted, i.e., the one that is highest yielding in that environment. For this purpose, it is of interest to obtain a reliable estimate of the mean yield of a cultivar in a given environment. This article compares two different statistical estimation procedures for this task: the Additive Main Effects and Multiplicative Interaction (AMMI) analysis and Best Linear Unbiased Prediction (BLUP). A modification of a cross validation procedure commonly used with AMMI is suggested for trials that are laid out as a randomized complete block design. The use of these procedures is exemplified using five faba bean datasets from German registration trials. BLUP was found to outperform AMMI in four of five faba bean datasets.
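
    The shrinkage idea behind BLUP can be sketched for the simplest balanced one-way random model: each genotype's raw mean is pulled toward the grand mean in proportion to how much of its variation is genetic. The variance components and yields below are invented, and real BLUP for multilocation trials solves the full mixed-model equations rather than this scalar shortcut:

```python
def blup_means(obs, var_g, var_e):
    """Shrinkage BLUPs of genotype means in a balanced one-way random model.

    obs: dict genotype -> list of plot yields.  Each raw genotype mean is
    shrunk toward the grand mean by the factor var_g / (var_g + var_e / n).
    """
    vals = [x for v in obs.values() for x in v]
    grand = sum(vals) / len(vals)
    out = {}
    for g, ys in obs.items():
        n = len(ys)
        shrink = var_g / (var_g + var_e / n)  # heritability-like weight
        out[g] = grand + shrink * (sum(ys) / n - grand)
    return out

# Hypothetical faba bean yields (t/ha) for three cultivars, two plots each:
yields = {"A": [4.1, 4.3], "B": [4.8, 5.0], "C": [4.4, 4.6]}
print(blup_means(yields, var_g=0.05, var_e=0.05))
```

The shrinkage toward the grand mean is what makes BLUP outperform unshrunken estimates (and, in the datasets above, AMMI) when within-environment error is large relative to genetic variance.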

  13. Pooling sexes when assessing ground reaction forces during walking: Statistical Parametric Mapping versus traditional approach.

    PubMed

    Castro, Marcelo P; Pataky, Todd C; Sole, Gisela; Vilas-Boas, Joao Paulo

    2015-07-16

    Ground reaction force (GRF) data from men and women are commonly pooled for analyses. However, it may not be justifiable to pool sexes on the basis of discrete parameters extracted from continuous GRF gait waveforms because this can miss continuous effects. Forty healthy participants (20 men and 20 women) walked at a cadence of 100 steps per minute across two force plates, recording GRFs. Two statistical methods were used to test the null hypothesis of no mean GRF differences between sexes: (i) Statistical Parametric Mapping, using the entire three-component GRF waveform; and (ii) the traditional approach, using the first and second vertical GRF peaks. Statistical Parametric Mapping results suggested large sex differences, which post-hoc analyses suggested were due predominantly to higher anterior-posterior and vertical GRFs in early stance in women compared to men. The traditional approach detected statistically significant differences for the first GRF peak but similar values for the second GRF peak. These contrasting results emphasise that different parts of the waveform have different signal strengths and thus that one may use the traditional approach to choose arbitrary metrics and make arbitrary conclusions. We suggest that researchers and clinicians consider both the entire gait waveforms and sex-specificity when analysing GRF data. Copyright © 2015 Elsevier Ltd. All rights reserved.
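
    The core of a Statistical Parametric Mapping comparison of waveforms is a test statistic computed at every time node rather than at two hand-picked peaks. The sketch below computes only that raw t curve on synthetic waveforms; real SPM additionally derives a corrected critical threshold via random field theory, which is omitted here:

```python
import math

def pointwise_t(group_a, group_b):
    """Two-sample t statistic at every node of equal-length waveforms."""
    n_a, n_b = len(group_a), len(group_b)
    t_curve = []
    for i in range(len(group_a[0])):
        xa = [w[i] for w in group_a]
        xb = [w[i] for w in group_b]
        ma, mb = sum(xa) / n_a, sum(xb) / n_b
        va = sum((x - ma) ** 2 for x in xa) / (n_a - 1)
        vb = sum((x - mb) ** 2 for x in xb) / (n_b - 1)
        sp2 = ((n_a - 1) * va + (n_b - 1) * vb) / (n_a + n_b - 2)  # pooled var
        t_curve.append((ma - mb) / math.sqrt(sp2 * (1 / n_a + 1 / n_b)))
    return t_curve

# Synthetic vertical-GRF-like curves: groups differ only at the first node
# (mimicking an early-stance sex difference).
women = [[0.90 + 0.02 * k, 1.10 + 0.01 * k, 1.00 + 0.01 * k] for k in range(5)]
men = [[0.70 + 0.02 * k, 1.10 + 0.01 * k, 1.00 + 0.01 * k] for k in range(5)]
print([round(v, 2) for v in pointwise_t(women, men)])  # → [10.0, 0.0, 0.0]
```

A peak-only analysis that sampled the second or third node of these curves would report no difference, which is exactly the kind of continuous effect the study argues can be missed.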

  14. Perceived Effectiveness among College Students of Selected Statistical Measures in Motivating Exercise Behavior

    ERIC Educational Resources Information Center

    Merrill, Ray M.; Chatterley, Amanda; Shields, Eric C.

    2005-01-01

    This study explored the effectiveness of selected statistical measures at motivating or maintaining regular exercise among college students. The study also considered whether ease in understanding these statistical measures was associated with perceived effectiveness at motivating or maintaining regular exercise. Analyses were based on a…

  15. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  16. The Empirical Nature and Statistical Treatment of Missing Data

    ERIC Educational Resources Information Center

    Tannenbaum, Christyn E.

    2009-01-01

    Introduction. Missing data is a common problem in research and can produce severely misleading analyses, including biased estimates of statistical parameters, and erroneous conclusions. In its 1999 report, the APA Task Force on Statistical Inference encouraged authors to report complications such as missing data and discouraged the use of…

  17. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  18. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... Programs and Data Files.'' This guidance is provided to inform study statisticians of recommendations for documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the...

  19. Statistical alignment: computational properties, homology testing and goodness-of-fit.

    PubMed

    Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G

    2000-09-08

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to obtain good initial guesses of the evolutionary parameters, and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes the method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing of the proposed insertion-deletion (indel) process inherent to this model, and find that real sequences (here globins) probably experience indels longer than one, contrary to what the model assumes. Copyright 2000 Academic Press.

  20. Weighting of the data and analytical approaches may account for differences in overcoming the inadequate representativeness of the respondents to the third wave of a cohort study.

    PubMed

    Taylor, Anne W; Dal Grande, Eleonora; Grant, Janet; Appleton, Sarah; Gill, Tiffany K; Shi, Zumin; Adams, Robert J

    2013-04-01

    Attrition in cohort studies can cause the data to become unrepresentative of the original population. Although attrition is of little concern when intragroup comparisons are being made or cause and effect is assessed, bias was evaluated in this study so that intergroup or descriptive analyses could be undertaken. The North West Adelaide Health Study is a chronic disease and risk factor cohort study undertaken in Adelaide, South Australia. In the original wave (1999), clinical and self-report data were collected from 4,056 adults. In the third wave (2008-2010), 2,710 adults were still actively involved. Comparisons were made against two other data sources: the Australian Bureau of Statistics Estimated Residential Population and a regularly conducted chronic disease and risk factor surveillance system. Demographic comparisons (age, sex, area, education, work status, and income) showed statistically significant differences. In addition, smoking status, body mass index, and general health status differed statistically significantly from the comparison group. No statistically significant differences were found for alcohol risk. Although the third wave of this cohort study is not representative of the broader population on the variables assessed, weighting of the data and analytical approaches can account for the differences. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. GIS and statistical analysis for landslide susceptibility mapping in the Daunia area, Italy

    NASA Astrophysics Data System (ADS)

    Mancini, F.; Ceppi, C.; Ritrovato, G.

    2010-09-01

    This study focuses on landslide susceptibility mapping in the Daunia area (Apulian Apennines, Italy) and achieves this by using a multivariate statistical method and data processing in a Geographical Information System (GIS). The Logistic Regression (hereafter LR) method was chosen to produce a susceptibility map over an area of 130 000 ha where small settlements are historically threatened by landslide phenomena. By means of LR analysis, the tendency to landslide occurrences was, therefore, assessed by relating a landslide inventory (dependent variable) to a series of causal factors (independent variables) which were managed in the GIS, while the statistical analyses were performed by means of the SPSS (Statistical Package for the Social Sciences) software. The LR analysis produced a reliable susceptibility map of the investigated area and the probability level of landslide occurrence was ranked in four classes. The overall performance achieved by the LR analysis was assessed by local comparison between the expected susceptibility and an independent dataset extrapolated from the landslide inventory. Of the samples classified as susceptible to landslide occurrences, 85% correspond to areas where landslide phenomena have actually occurred. In addition, the consideration of the regression coefficients provided by the analysis demonstrated that a major role is played by the "land cover" and "lithology" causal factors in determining the occurrence and distribution of landslide phenomena in the Apulian Apennines.
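The LR workflow the abstract describes — relating a binary landslide inventory to causal factors and ranking predicted probabilities into susceptibility classes — can be sketched as follows. All variable names, factor codings, and simulated coefficients here are invented for illustration; the study itself used SPSS, not Python:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical causal factors for 1,000 terrain cells: slope (degrees) plus
# binary indicators for a slide-prone land-cover class and a weak lithology.
n = 1000
slope = rng.uniform(0, 40, n)
land_cover = rng.integers(0, 2, n)  # 1 = slide-prone cover type (assumed)
lithology = rng.integers(0, 2, n)   # 1 = weak lithology (assumed)

# Simulate the landslide inventory (dependent variable) so that land cover
# and lithology dominate, mirroring the abstract's finding.
logit = -4 + 0.05 * slope + 1.5 * land_cover + 1.2 * lithology
landslide = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([slope, land_cover, lithology])
model = LogisticRegression().fit(X, landslide)

# Susceptibility = predicted probability of landslide occurrence per cell,
# ranked here into four classes (quartiles), as in the susceptibility map.
susceptibility = model.predict_proba(X)[:, 1]
classes = np.digitize(susceptibility, np.quantile(susceptibility, [0.25, 0.5, 0.75]))
print(model.coef_.round(2), np.bincount(classes))
```

In a GIS, the four classes would then be mapped back onto the terrain cells, and the fitted coefficients inspected to judge which causal factors drive susceptibility.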

  2. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  3. Holocaust exposure and subsequent suicide risk: a population-based study.

    PubMed

    Bursztein Lipsicas, Cendrine; Levav, Itzhak; Levine, Stephen Z

    2017-03-01

    To examine the association between the extent of genocide exposure and subsequent suicide risk among Holocaust survivors, persons born in Holocaust-exposed European countries during the years 1922-1945 who immigrated to Israel by 1965 were identified in the Population Registry (N = 209,429) and followed up for suicide (1950-2014). They were divided into three groups based on likely exposure to Nazi persecution: those who immigrated before (indirect; n = 20,229; 10%), during (partial direct; n = 17,189; 8%), and after (full direct; n = 172,061; 82%) World War II. Groups were contrasted for suicide risk, accounting for the extent of genocide in their respective countries of origin: high (>70%) or lower (<50%) levels. Cox model survival analyses were computed, examining calendar year at suicide. Sensitivity analyses were recomputed for two additional suicide-associated variables (age and years since immigration) for each exposure group. All analyses were adjusted for confounders. Survival analysis showed that, compared to the indirect exposure group, the partial direct exposure group from countries with a high genocide level had a statistically significant (P < .05) increased suicide risk for the main outcome (calendar year: HR 1.78, 95% CI 1.09, 2.90). This effect significantly (P < .05) replicated in two sensitivity analyses for countries with higher relative levels of genocide (age: HR 1.77, 95% CI 1.09, 2.89; years since immigration: HR 1.85, 95% CI 1.14, 3.02). The full direct exposure group was not at significantly increased suicide risk compared to the indirect exposure group, and suicide associations for groups from countries with a relatively lower level of genocide were not statistically significant. This study partly converges with findings identifying Holocaust survivors (full direct exposure) as a resilient group. A tentative mechanism for the higher vulnerability to suicide of the partial direct exposure group from countries with higher genocide exposure includes protracted guilt feelings from having directly witnessed atrocities and escaped death.
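How a reported confidence interval pins down the underlying test can be shown with a small back-calculation: a Wald 95% CI on the hazard-ratio scale implies the standard error of the log-HR, and hence an approximate p-value. A sketch using the abstract's main result, HR 1.78 (95% CI 1.09-2.90):

```python
import math

# Reported result from the abstract: HR 1.78, 95% CI 1.09-2.90.
hr, lo, hi = 1.78, 1.09, 2.90

# On the log scale a Wald 95% CI is log(HR) +/- 1.96*SE, so the SE of the
# log hazard ratio can be recovered from the interval's width.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(hr) / se

# Two-sided p-value from the normal approximation (erfc gives the tail).
p = math.erfc(abs(z) / math.sqrt(2))
print(round(se, 3), round(z, 2), round(p, 3))
```

The recovered p-value falls just under .05, consistent with the abstract's "statistically significant (P < .05)" statement for this comparison.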

  4. ParallABEL: an R library for generalized parallelization of genome-wide association studies.

    PubMed

    Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S

    2010-04-29

    Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. However, acquiring the programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP) or trait, such as SNP characterization statistics or association test statistics; the input data of this group are the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example the summary statistics of genotype quality for each sample; the input data of this group are the individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group are pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation; the input data of this group are pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. ParallABEL is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), comprising 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL's performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was reduced linearly from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis with the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.

  5. Reporting and methodological quality of meta-analyses in urological literature

    PubMed Central

    Xu, Jing

    2017-01-01

    Purpose To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. Materials and Methods We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed with the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. Results A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analyses) had less than 50% adherence. AMSTAR item 1 (“a priori” design), item 5 (list of studies), and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an “a priori” design were associated with superior reporting quality, while following the PRISMA guideline and an “a priori” design were associated with superior methodological quality. Conclusions The reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an “a priori” protocol. PMID:28439452

  6. Statistical analysis plan for the Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART). A randomized controlled trial

    PubMed Central

    Damiani, Lucas Petri; Berwanger, Otavio; Paisani, Denise; Laranjeira, Ligia Nasi; Suzumura, Erica Aranha; Amato, Marcelo Britto Passos; Carvalho, Carlos Roberto Ribeiro; Cavalcanti, Alexandre Biasi

    2017-01-01

    Background The Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART) is an international multicenter randomized pragmatic controlled trial with allocation concealment involving 120 intensive care units in Brazil, Argentina, Colombia, Italy, Poland, Portugal, Malaysia, Spain, and Uruguay. The primary objective of ART is to determine whether maximum stepwise alveolar recruitment associated with PEEP titration, adjusted according to the static compliance of the respiratory system (ART strategy), is able to increase 28-day survival in patients with acute respiratory distress syndrome compared to conventional treatment (ARDSNet strategy). Objective To describe the data management process and statistical analysis plan. Methods The statistical analysis plan was designed by the trial executive committee and reviewed and approved by the trial steering committee. We provide an overview of the trial design with a special focus on describing the primary (28-day survival) and secondary outcomes. We describe our data management process, data monitoring committee, interim analyses, and sample size calculation. We describe our planned statistical analyses for primary and secondary outcomes as well as pre-specified subgroup analyses. We also provide details for presenting results, including mock tables for baseline characteristics, adherence to the protocol and effect on clinical outcomes. Conclusion According to best trial practice, we report our statistical analysis plan and data management plan prior to locking the database and beginning analyses. We anticipate that this document will prevent analysis bias and enhance the utility of the reported results. Trial registration ClinicalTrials.gov number, NCT01374022. PMID:28977255
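A sample size calculation of the kind a statistical analysis plan pre-specifies for a binary endpoint such as 28-day survival can be sketched with the standard two-proportion formula. The control and treatment proportions below are illustrative only; ART's actual assumptions are not stated in this abstract:

```python
import math

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.9):
    """Approximate per-arm sample size for a two-sided two-proportion test
    (normal approximation), given assumed event proportions in each arm."""
    z_a = 1.959963984540054   # two-sided 5% normal quantile
    z_b = 1.2815515655446004  # quantile for 90% power
    p_bar = (p_control + p_treatment) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment))) ** 2
    return math.ceil(num / (p_control - p_treatment) ** 2)

# e.g. detecting an improvement in 28-day survival from 52% to 60%
# (hypothetical numbers) at alpha = 0.05 with 90% power:
print(n_per_arm(0.52, 0.60))
```

Pre-specifying such a calculation, together with interim-analysis rules, is exactly the kind of content a published statistical analysis plan locks in before the database is closed.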

  7. Formalizing the definition of meta-analysis in Molecular Ecology.

    PubMed

    ArchMiller, Althea A; Bauer, Eric F; Koch, Rebecca E; Wijayawardena, Bhagya K; Anil, Ammu; Kottwitz, Jack J; Munsterman, Amelia S; Wilson, Alan E

    2015-08-01

    Meta-analysis, the statistical synthesis of pertinent literature to develop evidence-based conclusions, is relatively new to the field of molecular ecology, with the first meta-analysis published in the journal Molecular Ecology in 2003 (Slate & Phua 2003). The goal of this article is to formalize the definition of meta-analysis for the authors, editors, reviewers and readers of Molecular Ecology by completing a review of the meta-analyses previously published in this journal. We also provide a brief overview of the many components required for meta-analysis with a more specific discussion of the issues related to the field of molecular ecology, including the use and statistical considerations of Wright's FST and its related analogues as effect sizes in meta-analysis. We performed a literature review to identify articles published as 'meta-analyses' in Molecular Ecology, which were then evaluated by at least two reviewers. We specifically targeted Molecular Ecology publications because as a flagship journal in this field, meta-analyses published in Molecular Ecology have the potential to set the standard for meta-analyses in other journals. We found that while many of these reviewed articles were strong meta-analyses, others failed to follow standard meta-analytical techniques. One of these unsatisfactory meta-analyses was in fact a secondary analysis. Other studies attempted meta-analyses but lacked the fundamental statistics that are considered necessary for an effective and powerful meta-analysis. By drawing attention to the inconsistency of studies labelled as meta-analyses, we emphasize the importance of understanding the components of traditional meta-analyses to fully embrace the strengths of quantitative data synthesis in the field of molecular ecology. © 2015 John Wiley & Sons Ltd.

  8. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
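The kind of calculation G*Power performs for a bivariate correlation test can be approximated with the Fisher z-transform. This is a sketch of the approximate (not G*Power's exact) routine, and the r and n values in the example are illustrative:

```python
import math

def power_correlation(r_alt, n, alpha_z=1.959963984540054):
    """Approximate power of the two-sided test of H0: rho = 0 against
    rho = r_alt with sample size n, via the Fisher z approximation."""
    z_alt = math.atanh(r_alt)        # Fisher z of the alternative correlation
    se = 1 / math.sqrt(n - 3)        # SE of Fisher z under this approximation
    shift = z_alt / se               # noncentrality of the test statistic
    # Standard normal CDF via erfc; power = P(|Z| > z_crit), Z ~ N(shift, 1).
    phi = lambda x: 0.5 * math.erfc(-x / math.sqrt(2))
    return (1 - phi(alpha_z - shift)) + phi(-alpha_z - shift)

# e.g. power to detect rho = 0.3 with n = 84 at two-sided alpha = .05:
print(round(power_correlation(0.3, 84), 2))
```

G*Power's exact procedures (and its extensions listed in the abstract, such as tetrachoric correlations or Poisson regression) use more refined distributions, but the z-approximation conveys the structure: an effect size, a sample-size-dependent standard error, and a rejection region.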

  9. Multielement geochemistry identifies the spatial pattern of soil and sediment contamination in an urban parkland, Western Australia.

    PubMed

    Rate, Andrew W

    2018-06-15

    Urban environments are dynamic and highly heterogeneous, and multiple additions of potential contaminants are likely on timescales that are short relative to natural processes. The likely sources and locations of soil or sediment contamination in urban environments should therefore be detectable using multielement geochemical composition combined with rigorously applied multivariate statistical techniques. Soil, wetland sediment, and street dust were sampled along intersecting transects in Robertson Park in metropolitan Perth, Western Australia. Samples were analysed for near-total concentrations of multiple elements (including Cd, Ce, Co, Cr, Cu, Fe, Gd, La, Mn, Nd, Ni, Pb, Y, and Zn), as well as pH and electrical conductivity. Samples at some locations within Robertson Park had high concentrations of potentially toxic elements (Pb above Health Investigation Limits; As, Ba, Cu, Mn, Ni, Pb, V, and Zn above Ecological Investigation Limits). However, these concentrations carry low risk given the main land use as recreational open space, the low proportion of samples exceeding guideline values, and a tendency for the highest concentrations to be located within the less accessible wetland basin. The different spatial distributions of different groups of contaminants were consistent with different inputs of contaminants related to changes in land use and technology over the history of the site. Multivariate statistical analyses reinforced the spatial information, with principal component analysis identifying geochemical associations of elements that were also spatially related. A multivariate linear discriminant model was able to discriminate samples into a priori types and could predict sample type with 84% accuracy based on multielement composition. The findings suggest substantial advantages of characterising a site using multielement and multivariate analyses, an approach that could benefit investigations of other sites of concern. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Relationship between water quality and macro-scale parameters (land use, erosion, geology, and population density) in the Siminehrood River Basin.

    PubMed

    Bostanmaneshrad, Farshid; Partani, Sadegh; Noori, Roohollah; Nachtnebel, Hans-Peter; Berndtsson, Ronny; Adamowski, Jan Franklin

    2018-10-15

    To date, few studies have investigated the simultaneous effects of macro-scale parameters (MSPs) such as land use, population density, geology, and erosion layers on micro-scale water quality variables (MSWQVs). This research focused on an evaluation of the relationship between MSPs and MSWQVs in the Siminehrood River Basin, Iran. In addition, we investigated the importance of water particle travel time (hydrological distance) on this relationship. The MSWQVs included 13 physicochemical and biochemical parameters observed at 15 stations during three seasons. Primary screening was performed by utilizing three multivariate statistical analyses (Pearson's correlation, cluster and discriminant analyses) in seven series of observed data. These series included three separate seasonal data, three two-season data, and aggregated three-season data for investigation of relationships between MSPs and MSWQVs. Coupled data (pairs of MSWQVs and MSPs) repeated in at least two out of three statistical analyses were selected for final screening. The primary screening results demonstrated significant relationships between land use and phosphorus, total solids and turbidity, erosion levels and electrical conductivity, and erosion and total solids. Furthermore, water particle travel time effects were considered through three geographical pattern definitions of distance for each MSP by using two weighting methods. To find effective MSP factors on MSWQVs, a multivariate linear regression analysis was employed. Then, preliminary equations that estimated MSWQVs were developed. The preliminary equations were modified to adaptive equations to obtain the final models. The final models indicated that a new metric, referred to as hydrological distance, provided better MSWQV estimation and water quality prediction compared to the National Sanitation Foundation Water Quality Index. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  11. Epidemiology of Skin Cancer in the German Population: Impact of Socioeconomic and Geographic Factors.

    PubMed

    Augustin, J; Kis, A; Sorbe, C; Schäfer, I; Augustin, M

    2018-04-06

    Skin cancer, the most common cancer in Germany, has shown increasing incidence in the past decade. Since it is mostly caused by excessive UV exposure, skin cancer is largely related to behaviour. So far, the impact of regional and sociodemographic factors on the development of skin cancer in Germany has been unclear. The current study aimed to investigate the association of potential predictive factors with the prevalence of skin cancers in Germany. Nationwide ambulatory care claims data from persons insured in statutory health insurances (SHI) with malignant melanoma (MM, ICD-10 C43) and non-melanoma skin cancer (NMSC, ICD-10 C44) in the years 2009-2015 were analysed. In addition, sociodemographic population data and satellite-based UV and solar radiation data were associated. Descriptive as well as multivariate (spatial) statistical analyses (for example, Bayes' smoothing) were conducted at the county level. Data from 70.1 million insured persons were analysed. Age-standardized prevalences per 100,000 SHI-insured persons for MM and NMSC were 284.7 and 1126.9 in 2009 and 378.5 and 1708.2 in 2015. Marked regional variations were observed, with prevalences between 32.9% and 51.6%. Multivariate analyses show statistically significant positive correlations between higher income and education and MM/NMSC prevalence. The prevalence of MM and NMSC in Germany shows spatio-temporal dynamics. Our results show that regional UV radiation, sunshine hours, and sociodemographic factors have a significant impact on skin cancer prevalence in Germany. Individual behaviour is obviously a major determinant, which should be the subject of preventive interventions. This article is protected by copyright. All rights reserved.

  12. Subsurface microbial diversity in deep-granitic-fracture water in Colorado

    USGS Publications Warehouse

    Sahl, J.W.; Schmidt, R.; Swanner, E.D.; Mandernack, K.W.; Templeton, A.S.; Kieft, Thomas L.; Smith, R.L.; Sanford, W.E.; Callaghan, R.L.; Mitton, J.B.; Spear, J.R.

    2008-01-01

    A microbial community analysis using 16S rRNA gene sequencing was performed on borehole water and a granite rock core from Henderson Mine, a >1,000-meter-deep molybdenum mine near Empire, CO. Chemical analysis of borehole water at two separate depths (1,044 m and 1,004 m below the mine entrance) suggests that a sharp chemical gradient exists, likely from the mixing of two distinct subsurface fluids, one metal rich and one relatively dilute; this has created unique niches for microorganisms. The microbial community analyzed from filtered, oxic borehole water indicated an abundance of sequences from iron-oxidizing bacteria (Gallionella spp.) and was compared to the community from the same borehole after 2 weeks of being plugged with an expandable packer. Statistical analyses with UniFrac revealed a significant shift in community structure following the addition of the packer. Phospholipid fatty acid (PLFA) analysis suggested that Nitrosomonadales dominated the oxic borehole, while PLFAs indicative of anaerobic bacteria were most abundant in the samples from the plugged borehole. Microbial sequences were represented primarily by Firmicutes, Proteobacteria, and a lineage of sequences which did not group with any identified bacterial division; phylogenetic analyses confirmed the presence of a novel candidate division. This "Henderson candidate division" dominated the clone libraries from the dilute anoxic fluids. Sequences obtained from the granitic rock core (1,740 m below the surface) were represented by the divisions Proteobacteria (primarily the family Ralstoniaceae) and Firmicutes. Sequences grouping within Ralstoniaceae were also found in the clone libraries from metal-rich fluids yet were absent in more dilute fluids. Lineage-specific comparisons, combined with phylogenetic statistical analyses, show that geochemical variance has an important effect on microbial community structure in deep, subsurface systems. Copyright © 2008, American Society for Microbiology. All Rights Reserved.

  13. Subsurface Microbial Diversity in Deep-Granitic-Fracture Water in Colorado▿

    PubMed Central

    Sahl, Jason W.; Schmidt, Raleigh; Swanner, Elizabeth D.; Mandernack, Kevin W.; Templeton, Alexis S.; Kieft, Thomas L.; Smith, Richard L.; Sanford, William E.; Callaghan, Robert L.; Mitton, Jeffry B.; Spear, John R.

    2008-01-01

    A microbial community analysis using 16S rRNA gene sequencing was performed on borehole water and a granite rock core from Henderson Mine, a >1,000-meter-deep molybdenum mine near Empire, CO. Chemical analysis of borehole water at two separate depths (1,044 m and 1,004 m below the mine entrance) suggests that a sharp chemical gradient exists, likely from the mixing of two distinct subsurface fluids, one metal rich and one relatively dilute; this has created unique niches for microorganisms. The microbial community analyzed from filtered, oxic borehole water indicated an abundance of sequences from iron-oxidizing bacteria (Gallionella spp.) and was compared to the community from the same borehole after 2 weeks of being plugged with an expandable packer. Statistical analyses with UniFrac revealed a significant shift in community structure following the addition of the packer. Phospholipid fatty acid (PLFA) analysis suggested that Nitrosomonadales dominated the oxic borehole, while PLFAs indicative of anaerobic bacteria were most abundant in the samples from the plugged borehole. Microbial sequences were represented primarily by Firmicutes, Proteobacteria, and a lineage of sequences which did not group with any identified bacterial division; phylogenetic analyses confirmed the presence of a novel candidate division. This “Henderson candidate division” dominated the clone libraries from the dilute anoxic fluids. Sequences obtained from the granitic rock core (1,740 m below the surface) were represented by the divisions Proteobacteria (primarily the family Ralstoniaceae) and Firmicutes. Sequences grouping within Ralstoniaceae were also found in the clone libraries from metal-rich fluids yet were absent in more dilute fluids. Lineage-specific comparisons, combined with phylogenetic statistical analyses, show that geochemical variance has an important effect on microbial community structure in deep, subsurface systems. PMID:17981950

  14. Expression of Vascular Notch Ligand Delta-Like 4 and Inflammatory Markers in Breast Cancer

    PubMed Central

    Jubb, Adrian M.; Soilleux, Elizabeth J.; Turley, Helen; Steers, Graham; Parker, Andrew; Low, Irene; Blades, Jennifer; Li, Ji-Liang; Allen, Paul; Leek, Russell; Noguera-Troise, Irene; Gatter, Kevin C.; Thurston, Gavin; Harris, Adrian L.

    2010-01-01

    Delta-like ligand 4 (Dll4) is a Notch ligand that is predominantly expressed in the endothelium. Evidence from xenografts suggests that inhibiting Dll4 may overcome resistance to antivascular endothelial growth factor therapy. The aims of this study were to characterize the expression of Dll4 in breast cancer and assess whether it is associated with inflammatory markers and prognosis. We examined 296 breast adenocarcinomas and 38 ductal carcinoma in situ tissues that were represented in tissue microarrays. Additional whole sections representing 10 breast adenocarcinomas, 10 normal breast tissues, and 16 angiosarcomas were included. Immunohistochemistry was then performed by using validated antibodies against Dll4, CD68, CD14, Dendritic Cell-Specific Intercellular adhesion molecule-3-Grabbing Non-integrin (DC-SIGN), CD123, neutrophil elastase, CD31, and carbonic anhydrase 9. Dll4 was selectively expressed by intratumoral endothelial cells in 73% to 100% of breast adenocarcinomas, 18% of in situ ductal carcinomas, and all lactating breast cases, but not normal nonlactating breast. High intensity of endothelial Dll4 expression was a statistically significant adverse prognostic factor in univariate (P = 0.002 and P = 0.01) and multivariate analyses (P = 0.03 and P = 0.04) of overall survival and relapse-free survival, respectively. Among the inflammatory markers, only CD68 and DC-SIGN were significant prognostic factors in univariate (but not multivariate) analyses of overall survival (P = 0.01 and 0.002, respectively). In summary, Dll4 was expressed by endothelium associated with breast cancer cells. In these retrospective subset analyses, endothelial Dll4 expression was a statistically significant multivariate prognostic factor. PMID:20167860

  15. Plaque Echolucency and Stroke Risk in Asymptomatic Carotid Stenosis: A Systematic Review and Meta-Analysis

    PubMed Central

    Gupta, Ajay; Kesavabhotla, Kartik; Baradaran, Hediyeh; Kamel, Hooman; Pandya, Ankur; Giambrone, Ashley E.; Wright, Drew; Pain, Kevin J.; Mtui, Edward E.; Suri, Jasjit S.; Sanelli, Pina C.; Mushlin, Alvin I.

    2014-01-01

    Background and Purpose Ultrasonographic plaque echolucency has been studied as a stroke risk marker in carotid atherosclerotic disease. We performed a systematic review and meta-analysis to summarize the association between ultrasound-determined carotid plaque echolucency and future ipsilateral stroke risk. Methods We searched the medical literature for studies evaluating the association between carotid plaque echolucency and future stroke in asymptomatic patients. We included prospective observational studies with stroke outcome ascertainment after baseline carotid plaque echolucency assessment. We performed a meta-analysis and assessed study heterogeneity and publication bias. We also performed subgroup analyses limited to patients with stenosis ≥50%, studies in which plaque echolucency was determined via subjective visual interpretation, studies with a relatively lower risk of bias, and studies published after the year 2000. Results We analyzed data from 7 studies on 7557 subjects with a mean follow-up of 37.2 months. We found a significant positive relationship between predominantly echolucent (compared to predominantly echogenic) plaques and the risk of future ipsilateral stroke across all stenosis severities (0-99%) (relative risk [RR], 2.31; 95% CI, 1.58-3.39; P<.001) and in subjects with ≥50% stenosis (RR, 2.61; 95% CI, 1.47-4.63; P=.001). A statistically significant increased RR for future stroke was preserved in all additional subgroup analyses. No statistically significant heterogeneity or publication bias was present in any of the meta-analyses. Conclusions The presence of ultrasound-determined carotid plaque echolucency provides predictive information in asymptomatic carotid artery stenosis beyond luminal stenosis. However, the magnitude of the increased risk is not sufficient on its own to identify patients likely to benefit from surgical revascularization. PMID:25406150
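Pooled relative risks such as those reported above are typically obtained by inverse-variance weighting on the log scale. A minimal sketch of that pooling step, using made-up study estimates rather than the meta-analysis's actual data:

```python
import math

# Hypothetical per-study relative risks and standard errors of log(RR);
# these numbers are illustrative, not taken from the meta-analysis above.
studies = [(2.0, 0.30), (2.5, 0.40), (3.0, 0.50)]  # (RR, SE of log RR)

# Fixed-effect inverse-variance pooling on the log scale.
weights = [1.0 / se**2 for _, se in studies]
log_rrs = [math.log(rr) for rr, _ in studies]
pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
```

A random-effects model would additionally add a between-study variance term to each weight; the fixed-effect version is shown only to make the weighting explicit.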

  16. Adaptation of Chain Event Graphs for use with Case-Control Studies in Epidemiology.

    PubMed

    Keeble, Claire; Thwaites, Peter Adam; Barber, Stuart; Law, Graham Richard; Baxter, Paul David

    2017-09-26

    Case-control studies are used in epidemiology to try to uncover the causes of diseases, but this retrospective study design is known to suffer from non-participation and recall bias, which may explain its decreased popularity in recent years. Traditional analyses usually report only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees, which were developed in artificial intelligence and statistics and only recently introduced to the epidemiology literature. They are a modern Bayesian technique which enables prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this time- and cost-efficient study design. We demonstrate eight adaptations to the graphs: two suitable for full case-control study analysis, four which can be used in interim analyses to explore biases, and two which aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.

  17. Facilitating the Transition from Bright to Dim Environments

    DTIC Science & Technology

    2016-03-04

    For the parametric data, a multivariate ANOVA was used in determining the systematic presence of any statistically significant performance differences...performed. All significance levels were p < 0.05, and statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS)...1950. Age changes in rate and level of visual dark adaptation. Journal of Applied Physiology, 2, 407–411. Field, A. 2009. Discovering statistics

  18. Detecting differential DNA methylation from sequencing of bisulfite converted DNA of diverse species.

    PubMed

    Huh, Iksoo; Wu, Xin; Park, Taesung; Yi, Soojin V

    2017-07-21

    DNA methylation is one of the most extensively studied epigenetic modifications of genomic DNA. In recent years, sequencing of bisulfite-converted DNA, particularly via next-generation sequencing technologies, has become a widely popular method to study DNA methylation. This method can be readily applied to a variety of species, dramatically expanding the scope of DNA methylation studies beyond the traditionally studied human and mouse systems. In parallel to the increasing wealth of genomic methylation profiles, many statistical tools have been developed to detect differentially methylated loci (DMLs) or differentially methylated regions (DMRs) between biological conditions. We discuss and summarize several key properties of currently available tools to detect DMLs and DMRs from sequencing of bisulfite-converted DNA. However, the majority of the statistical tools developed for DML/DMR analyses have been validated using only mammalian data sets, and less priority has been placed on the analyses of invertebrate or plant DNA methylation data. We demonstrate that genomic methylation profiles of non-mammalian species are often highly distinct from those of mammalian species using examples of honey bees and humans. We then discuss how such differences in data properties may affect statistical analyses. Based on these differences, we provide three specific recommendations to improve the power and accuracy of DML and DMR analyses of invertebrate data when using currently available statistical tools. These considerations should facilitate systematic and robust analyses of DNA methylation from diverse species, thus advancing our understanding of DNA methylation. © The Author 2017. Published by Oxford University Press.
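Many of the DML-detection tools discussed above reduce, at a single CpG site, to a contingency test on methylated versus unmethylated read counts between conditions. A minimal sketch with hypothetical counts (not data from the review):

```python
from scipy.stats import fisher_exact

# Hypothetical methylated/unmethylated read counts at one CpG site in two
# conditions; counts are illustrative only.
#              methylated  unmethylated
condition_a = (90, 10)
condition_b = (10, 90)

table = [list(condition_a), list(condition_b)]
odds_ratio, p_value = fisher_exact(table)

# A small p-value flags the site as a candidate differentially methylated locus.
is_dml = p_value < 0.01
```

In practice a genome-wide analysis would apply this per site and then correct for multiple testing; coverage differences between mammalian and invertebrate data, as the authors note, affect the power of exactly this kind of per-site test.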

  19. Imaging Depression in Adults with ASD

    DTIC Science & Technology

    2017-10-01

    collected temporally close enough to imaging data in Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk...Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk attrition between Phase 1 and 2, we chose to hold...supervision is ongoing (since 9/2014). • Co-l Dr. Lerner’s 2nd year Clinical Psychology PhD students have participated in ADOS- 2 Introductory Clinical

  20. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    PubMed

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a major worldwide environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset, a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
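The significant rank correlations underlying the trend criterion can be illustrated with a simple Spearman test; the annual beach counts below are invented for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical annual litter counts (items per 100 m) for one monitored
# beach; the values are illustrative, not OSPAR data.
years = list(range(2001, 2011))
items_per_100m = [412, 395, 407, 371, 350, 362, 330, 318, 301, 290]

rho, p_value = spearmanr(years, items_per_100m)
# A significant negative rank correlation indicates a decreasing litter trend.
decreasing_trend = rho < 0 and p_value < 0.05
```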

  1. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a different statistical method must be employed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance. We use the percentile method (PM) to estimate the distribution parameters. The findings indicated that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerance. Moreover, in the case of assembled sets, more extensive tolerance for each component with the same target performance can be utilized.
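A full generalized lambda fit is beyond a short sketch, but the underlying idea of setting tolerance limits from percentiles when the distribution is unknown can be shown distribution-free. The simulated data and percentile choices below are assumptions for illustration, not the paper's method:

```python
import numpy as np

# Simulated, deliberately non-normal component measurements. When the
# distribution is unknown, tolerance limits can be read off empirical
# percentiles instead of assuming a normal +/- 3-sigma band.
rng = np.random.default_rng(42)
measurements = rng.gamma(shape=9.0, scale=1.0, size=5000)  # skewed data

lower = np.percentile(measurements, 0.135)   # ~ -3 sigma point of a normal
upper = np.percentile(measurements, 99.865)  # ~ +3 sigma point of a normal
coverage = np.mean((measurements >= lower) & (measurements <= upper))
```

The percentile method of the paper fits generalized lambda parameters to sample percentiles in a similar spirit, yielding smooth limits rather than raw empirical quantiles.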

  2. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  3. Quantitative Methods for Analysing Joint Questionnaire Data: Exploring the Role of Joint in Force Design

    DTIC Science & Technology

    2015-08-01

    the nine questions. The Statistical Package for the Social Sciences (SPSS) [11] was used to conduct statistical analysis on the sample. Two types...constructs. SPSS was again used to conduct statistical analysis on the sample. This time factor analysis was conducted. Factor analysis attempts to...Business Research Methods and Statistics using SPSS. P432. 11 IBM SPSS Statistics. (2012) 12 Burns, R.B., Burns, R.A. (2008) 'Business Research

  4. In Vitro Evaluation of Shear Bond Strength of Nanocomposites to Dentin

    PubMed Central

    Vellanki, Vinay Kumar; Shetty, Vikram K; Kushwah, Sudhanshu; Goyal, Geeta; Chandra, S.M. Sharath

    2015-01-01

    Aims: To compare the shear bond strength of nanocomposites to dentin using three different types of adhesive systems, and to test a few specimens under a Scanning Electron Microscope (SEM) to analyse whether the bond failure is adhesive or cohesive. Materials and Methods: Sixty human premolar teeth were selected and randomly grouped, with 20 specimens in each group: group 1 - fluoride-releasing dentin bonding agent; group 2 - antibacterial-containing dentin bonding agent; and group 3 - one-step conventional self-etch adhesive. Each group was treated with its respective bonding agent, composite resin build-up was done, and shear bond strengths were tested using an Instron universal testing machine. A few of the specimens were examined under SEM. Results: The results were statistically analysed using one-way ANOVA and paired t-test. Group 3 had the highest shear bond strength, followed by group 2 and then group 1. Adhesive failures and mixed failures were the most frequent types of failures seen under SEM. Conclusion: Addition of an antimicrobial agent decreases the bond strength of the dentin bonding agent, and addition of fluoride decreases it further. From the SEM results it can be concluded that the zone of failure could not be defined and that the failure mode was independent of the dentin bonding agent used. PMID:25738077
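The one-way ANOVA comparison of the three adhesive groups can be sketched as follows; the strength values are illustrative, not the study's measurements:

```python
from scipy.stats import f_oneway

# Hypothetical shear bond strengths (MPa) for the three adhesive groups;
# values are invented for illustration.
group1 = [12.1, 11.8, 13.0, 12.4, 11.5]  # fluoride-releasing agent
group2 = [15.2, 14.8, 15.9, 15.1, 14.6]  # antibacterial agent
group3 = [18.3, 17.9, 18.8, 18.1, 17.6]  # one-step self-etch adhesive

f_stat, p_value = f_oneway(group1, group2, group3)
groups_differ = p_value < 0.05  # reject equal-means null if True
```

A significant omnibus F would then normally be followed by pairwise comparisons (the study used paired t-tests) to establish the ordering of the groups.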

  5. The Effect of Light Rail Transit on Physical Activity: Design and Methods of the Travel-Related Activity in Neighborhoods Study.

    PubMed

    Durand, Casey P; Oluyomi, Abiodun O; Gabriel, Kelley Pettee; Salvo, Deborah; Sener, Ipek N; Hoelscher, Deanna M; Knell, Gregory; Tang, Xiaohui; Porter, Anna K; Robertson, Michael C; Kohl, Harold W

    2016-01-01

    Use of mass transit has been proposed as a way to incorporate regular physical activity into daily life because transit use typically requires additional travel to access and depart the stop or station. If this additional travel is active, a small but potentially important amount of physical activity can be achieved daily. Although prior research has shown that transit use is associated with physical activity, important questions remain unanswered. Utilizing a major expansion of the Houston, TX, USA light-rail system as a natural experiment, the Houston Travel-Related Activity in Neighborhoods (TRAIN) Study was developed to address these unanswered questions. The purpose of the TRAIN Study is to determine if the development of light-rail lines in Houston, TX, USA will prospectively affect both transit use and physical activity over 4 years. We also aim to understand how contextual effects (i.e., moderators or interaction effects), such as the neighborhood built environment and socioeconomic factors, affect the primary relations under study. The TRAIN Study is a longitudinal cohort design, in which participants are recruited at baseline from a 3-mile buffer around each of the three new lines and measured annually four times. Recruitment is accomplished via telephone contact, ads in newspapers and advertising circulars, and targeted community outreach. Data are collected via mail and include questionnaire-assessed factors, such as perceived neighborhood characteristics, attitudes about transportation, demographics, and reported physical activity; a travel diary; and accelerometry. Additionally, field-based neighborhood audits are conducted to capture micro-scale environmental features. To assess macro-scale environmental characteristics, we utilize GIS mapping and spatial analyses. 
Statistical analyses will be conducted using latent growth curve modeling and discrete choice models, with a focus on identifying moderating factors (i.e., statistical interaction effects). Selection bias will be controlled via propensity score analysis. The TRAIN study is a unique opportunity to study how a multi-billion dollar investment in mass transit can simultaneously affect transportation needs and physical activity behavior. This comprehensive evaluation will provide needed evidence for policy makers, and can inform health impact assessments of future transportation projects around the world.

  6. Alcohol intake and gastric cancer: Meta-analyses of published data versus individual participant data pooled analyses (StoP Project).

    PubMed

    Ferro, Ana; Morais, Samantha; Rota, Matteo; Pelucchi, Claudio; Bertuccio, Paola; Bonzi, Rossella; Galeone, Carlotta; Zhang, Zuo-Feng; Matsuo, Keitaro; Ito, Hidemi; Hu, Jinfu; Johnson, Kenneth C; Yu, Guo-Pei; Palli, Domenico; Ferraroni, Monica; Muscat, Joshua; Malekzadeh, Reza; Ye, Weimin; Song, Huan; Zaridze, David; Maximovitch, Dmitry; Fernández de Larrea, Nerea; Kogevinas, Manolis; Vioque, Jesus; Navarrete-Muñoz, Eva M; Pakseresht, Mohammadreza; Pourfarzi, Farhad; Wolk, Alicja; Orsini, Nicola; Bellavia, Andrea; Håkansson, Niclas; Mu, Lina; Pastorino, Roberta; Kurtz, Robert C; Derakhshan, Mohammad H; Lagiou, Areti; Lagiou, Pagona; Boffetta, Paolo; Boccia, Stefania; Negri, Eva; La Vecchia, Carlo; Peleteiro, Bárbara; Lunet, Nuno

    2018-05-01

    Individual participant data pooled analyses allow access to non-published data and statistical reanalyses based on more homogeneous criteria than meta-analyses based on systematic reviews. We quantified the impact of publication-related biases and of heterogeneity in data analysis and presentation on summary estimates of the association between alcohol drinking and gastric cancer. We compared estimates obtained from conventional meta-analyses, using only data available in published reports from studies that take part in the Stomach Cancer Pooling (StoP) Project, with individual participant data pooled analyses including the same studies. A total of 22 studies from the StoP Project assessed the relation between alcohol intake and gastric cancer; 19 had specific data for levels of consumption and 18 for cancer location; published reports addressing these associations were available from 18, 5 and 5 studies, respectively. The summary odds ratio [OR (95% CI)] estimate obtained with published data for drinkers vs. non-drinkers was 10% higher than the one obtained with individual StoP data [18 vs. 22 studies: 1.21 (1.07-1.36) vs. 1.10 (0.99-1.23)] and more heterogeneous (I²: 63.6% vs. 54.4%). In general, published data yielded less precise summary estimates (standard errors up to 2.6 times higher). Funnel plot analysis suggested publication bias. Meta-analyses of the association between alcohol drinking and gastric cancer tended to overestimate the magnitude of the effects, possibly due to publication bias. Additionally, individual participant data pooled analyses yielded more precise estimates for different levels of exposure or cancer subtypes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Research Design and Statistical Methods in Indian Medical Journals: A Retrospective Survey

    PubMed Central

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S.; Mayya, Shreemathi S.

    2015-01-01

    Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design and statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: the proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. 
Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious errors were still present. Indian medical research seems to have made no major progress in using correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of methodological problems. PMID:25856194

  8. Research design and statistical methods in Indian medical journals: a retrospective survey.

    PubMed

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design and statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: the proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. 
Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious errors were still present. Indian medical research seems to have made no major progress in using correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of methodological problems.
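The reported χ2 for the rise in papers using statistical tests can be re-derived from the counts given in the abstract (250 of 588 papers in 2003 vs. 439 of 774 in 2013):

```python
from scipy.stats import chi2_contingency

# 2x2 table from the reported counts: papers using statistical tests vs.
# not, in 2003 and 2013.
table = [[250, 588 - 250],
         [439, 774 - 439]]

# correction=False gives the plain Pearson chi-square (no Yates correction),
# which matches the abstract's reported value of 26.96.
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
```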

  9. Analyzing phenological extreme events over the past five decades in Germany

    NASA Astrophysics Data System (ADS)

    Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp

    2010-05-01

    As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last 5 decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence for extreme early or late phenological events. We analyse four phenological groups, "begin of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis: a comparison of differences in extreme onsets between three common northern conifers. Furthermore, we conducted a within-species analysis with different phases of horse chestnut throughout a year. The first statistical approach fits data to a Gaussian model using traditional statistical techniques, and then analyses the extreme quantile. The key point of this approach is the adoption of an appropriate probability density function (PDF) for the observed data and the assessment of how the PDF parameters change in time. The full analytical description in terms of the estimated PDF for defined time steps of the observation period allows probability assessments of extreme values for, e.g., annual or decadal time steps. Related to this approach is the possibility of counting the onsets which fall in our defined extreme percentiles. The estimation of the probability of extreme events on the basis of the whole data set is in contrast to analyses with the generalized extreme value distribution (GEV). The second approach deals with the extreme PDFs themselves and fits the GEV distribution to annual minima of phenological series to provide useful estimates of return levels. 
For flowering and leaf unfolding phases, exceptionally early extremes are seen since the mid 1980s, and especially in the single years 1961, 1990 and 2007, whereas exceptionally late extreme events are seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts to early extremes than spring phases. Leaf colouring phases reveal an increasing probability of late extremes. The GEV-estimated 100-year events of Picea, Pinus and Larix amount to extreme early events of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we get a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
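The GEV-based return-level estimation described above can be sketched as follows; the annual minima are simulated, not the DWD phenological series, and the negation trick for fitting minima as maxima is a standard assumption of this sketch:

```python
import numpy as np
from scipy.stats import genextreme

# Simulated annual minima (earliest onset anomalies in days, negative =
# early). Fitting a GEV to minima is done by fitting block maxima of the
# negated series and negating the resulting quantile.
rng = np.random.default_rng(7)
annual_minima = -rng.gumbel(loc=20.0, scale=5.0, size=50)

shape, loc, scale = genextreme.fit(-annual_minima)
# 100-year return level: the early-onset anomaly exceeded (on the early
# side) once per century on average under the fitted model.
return_level_100 = -genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)
```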

  10. Increased hospital length of stay attributable to Clostridium difficile infection in patients with four co-morbidities: an analysis of hospital episode statistics in four European countries.

    PubMed

    Eckmann, Christian; Wasserman, Matthew; Latif, Faisal; Roberts, Graeme; Beriot-Mathiot, Axelle

    2013-10-01

    Hospital-onset Clostridium difficile infection (CDI) places a significant burden on health care systems throughout Europe, estimated at around €3 billion per annum. This burden is shared between national payers and hospitals that support additional bed days for patients diagnosed with CDI while in hospital or patients re-admitted from a previous hospitalisation. This study was performed to quantify additional hospital stay attributable to CDI in four countries, England, Germany, Spain, and The Netherlands, by analysing nationwide hospital-episode data. We focused upon patients at increased risk of CDI: with chronic obstructive pulmonary disease, heart failure, diabetes, or chronic kidney disease, and aged 50 years or over. Multivariate regression and propensity score matching models were developed to investigate the impact of CDI on additional length of hospital stay, controlling for confounding factors such as underlying disease severity. Patients in England had the longest additional hospital stay attributable to CDI at 16.09 days, followed by Germany at 15.47 days, Spain at 13.56 days, and The Netherlands at 12.58 days, derived using regression analysis. Propensity score matching indicated a higher attributable length of stay of 32.42 days in England, 15.31 days in Spain, and 18.64 days in The Netherlands. Outputs from this study consistently demonstrate that in European countries, for patients whose hospitalisation is complicated by CDI, the infection causes a statistically significant increase in hospital length of stay. This has implications for optimising resource allocation and budget setting at both the national and hospital level to ensure that levels of CDI-complicated hospitalisations are minimised.
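The regression approach to attributable length of stay can be sketched with ordinary least squares on simulated data; the covariate, effect sizes, and sample below are hypothetical, not the study's episode data:

```python
import numpy as np

# Simulated hospital episodes with a true CDI-attributable stay of 15 days,
# controlling for a standardised severity score (a stand-in for the study's
# underlying-disease-severity confounders).
rng = np.random.default_rng(0)
n = 2000
cdi = rng.integers(0, 2, size=n)       # 1 = CDI during hospitalisation
severity = rng.normal(size=n)          # standardised severity score
los = 8.0 + 15.0 * cdi + 3.0 * severity + rng.normal(scale=2.0, size=n)

# OLS: length of stay regressed on intercept, CDI status, and severity.
X = np.column_stack([np.ones(n), cdi, severity])
coef, *_ = np.linalg.lstsq(X, los, rcond=None)
attributable_days = coef[1]  # estimated additional days attributable to CDI
```

Propensity score matching, the study's second method, instead pairs CDI and non-CDI patients with similar covariate profiles and compares their mean stays directly.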

  11. Statistical Literacy in the Data Science Workplace

    ERIC Educational Resources Information Center

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  12. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported on data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.

  13. The SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Gelman, Mel; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2003-01-01

    Our current confidence in 'observed' climatological winds and temperatures in the middle atmosphere (over altitudes approx. 10-80 km) is assessed by detailed intercomparisons of contemporary and historic data sets. These data sets include global meteorological analyses and assimilations, climatologies derived from research satellite measurements, and historical reference atmosphere circulation statistics. We also include comparisons with historical rocketsonde wind and temperature data, and with more recent lidar temperature measurements. The comparisons focus on a few basic circulation statistics, such as temperature, zonal wind, and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. Assimilated data sets provide the most realistic tropical variability, but substantial differences exist among current schemes.

  14. Computer program for prediction of fuel consumption statistical data for an upper stage three-axes stabilized on-off control system

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A FORTRAN-coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage are described. A Monte Carlo approach is used, made more efficient by the use of closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties, and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.

  15. Geospatial Characterization of Fluvial Wood Arrangement in a Semi-confined Alluvial River

    NASA Astrophysics Data System (ADS)

    Martin, D. J.; Harden, C. P.; Pavlowsky, R. T.

    2014-12-01

    Large woody debris (LWD) has become universally recognized as an integral component of fluvial systems, and as a result, has become increasingly common as a river restoration tool. However, "natural" processes of wood recruitment and the subsequent arrangement of LWD within the river network are poorly understood. This research used a suite of spatial statistics to investigate longitudinal arrangement patterns of LWD in a low-gradient, Midwestern river. First, a large-scale GPS inventory of LWD, performed on the Big River in the eastern Missouri Ozarks, resulted in over 4,000 logged positions of LWD along seven river segments that covered nearly 100 km of the 237 km river system. A global Moran's I analysis indicates that LWD density is spatially autocorrelated and displays a clustering tendency within all seven river segments (P-value range = 0.000 to 0.054). A local Moran's I analysis identified specific locations along the segments where clustering occurs and revealed that, on average, clusters of LWD density (high or low) spanned 400 m. Spectral analyses revealed that, in some segments, LWD density is spatially periodic. Two segments displayed strong periodicity, while the remaining segments displayed varying degrees of noisiness. Periodicity showed a positive association with gravel bar spacing and meander wavelength, although there were insufficient data to statistically confirm the relationship. A wavelet analysis was then performed to investigate periodicity relative to location along the segment. The wavelet analysis identified significant (α = 0.05) periodicity at discrete locations along each of the segments. Those reaches yielding strong periodicity showed stronger relationships between LWD density and the geomorphic/riparian independent variables tested. Analyses consistently identified valley width and sinuosity as being associated with LWD density. 
The results of these analyses contribute a new perspective on the longitudinal distribution of LWD in a river system, which should help identify physical and/or riparian control mechanisms of LWD arrangement and support the development of models of LWD arrangement. Additionally, the spatial statistical tools presented here have proven valuable for identifying longitudinal patterns in river system components.
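The global Moran's I used in the record above is straightforward to compute from a vector of values and a spatial weights matrix. The sketch below is a generic illustration with hypothetical reach densities and binary contiguity weights; it is not the authors' GIS workflow:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    where z are deviations from the mean and W is the total weight."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    return n * (w * np.outer(z, z)).sum() / (w.sum() * (z ** 2).sum())

# Hypothetical LWD densities along nine evenly spaced reaches; adjacent
# reaches are neighbours (symmetric binary weights, zero diagonal).
density = np.array([2.0, 2.5, 3.0, 9.0, 8.5, 9.5, 1.0, 1.5, 2.0])
n = density.size
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

i_stat = morans_i(density, w)       # clustered values give I well above E[I]
expected = -1.0 / (n - 1)           # E[I] under spatial randomness
```

Values of I well above E[I] = -1/(n-1) indicate clustering, as reported for all seven segments; in practice the P-value comes from a permutation or normal-approximation test.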

  16. Methods for meta-analysis of multiple traits using GWAS summary statistics.

    PubMed

    Ray, Debashree; Boehnke, Michael

    2018-03-01

    Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. 
When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses. Evidence from larger studies suggests that the variants additionally detected by our test are, indeed, associated with lipid levels in humans. In summary, metaUSAT can provide novel insights into the genetic architecture of common diseases and traits. © 2017 WILEY PERIODICALS, INC.
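metaUSAT itself is more involved, but the core idea of testing one variant against several correlated traits using only summary statistics can be illustrated with a simpler Wald-type statistic: under the null, the per-trait z-scores are jointly normal with covariance equal to the trait correlation matrix R, so z'R⁻¹z follows a chi-square distribution with k degrees of freedom. All numbers below are hypothetical, and this is not the metaUSAT algorithm:

```python
import numpy as np
from scipy import stats

def multitrait_chi2(z, R):
    """Wald-type joint test of one variant against k correlated traits,
    using only per-trait z-scores (z) and the trait correlation matrix (R).
    Under the null z ~ N(0, R), so z' R^-1 z ~ chi-square with k df."""
    z = np.asarray(z, dtype=float)
    stat = float(z @ np.linalg.solve(R, z))
    pval = stats.chi2.sf(stat, df=z.size)
    return stat, pval

# Hypothetical z-scores for three lipid traits at one variant, and an
# assumed phenotypic correlation matrix (illustrative numbers only).
z = np.array([2.1, -1.8, 2.4])
R = np.array([[ 1.0, -0.2, -0.4],
              [-0.2,  1.0,  0.3],
              [-0.4,  0.3,  1.0]])
stat, pval = multitrait_chi2(z, R)
```

With R = I (uncorrelated traits) the statistic reduces to the sum of squared z-scores, which is one way to sanity-check an implementation.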

  17. Systematics and distribution of Cristaria plicata (Bivalvia, Unionidae) from the Russian Far East

    PubMed Central

    Klishko, Olga K.; Lopes-Lima, Manuel; Froufe, Elsa; Bogan, Arthur E.; Abakumova, Vera Y.

    2016-01-01

    Abstract The number of anodontine bivalve species placed in the genus Cristaria (Bivalvia, Unionidae) from the Russian Far East remains unsettled among authors. Some recognize only one valid species, Cristaria plicata (Leach, 1815), while others accept two additional species, Cristaria tuberculata Schumacher, 1817 and Cristaria herculea (Middendorff, 1847). In the present study, these taxonomic doubts are addressed using analyses of mitochondrial DNA sequences and shell morphometry. No significant differences have been revealed by the COI DNA sequences or the main statistical morphometric indices from the three Cristaria forms. In the specimens analysed, changes in shell morphometry with age suggest that the features used in the original descriptions of the different forms may be attributable solely to differences in age and sex. We consider that Cristaria plicata, Cristaria tuberculata and Cristaria herculea from the Russian Far East should be considered as a single species, namely Cristaria plicata (Leach, 1815), with Cristaria tuberculata and Cristaria herculea as junior synonyms. The geographic range of Cristaria plicata and its conservation status are also presented here. PMID:27110206

  18. Assessing the clinical utility of cancer genomic and proteomic data across tumor types.

    PubMed

    Yuan, Yuan; Van Allen, Eliezer M; Omberg, Larsson; Wagle, Nikhil; Amin-Mansour, Ali; Sokolov, Artem; Byers, Lauren A; Xu, Yanxun; Hess, Kenneth R; Diao, Lixia; Han, Leng; Huang, Xuelin; Lawrence, Michael S; Weinstein, John N; Stuart, Josh M; Mills, Gordon B; Garraway, Levi A; Margolin, Adam A; Getz, Gad; Liang, Han

    2014-07-01

    Molecular profiling of tumors promises to advance the clinical management of cancer, but the benefits of integrating molecular data with traditional clinical variables have not been systematically studied. Here we retrospectively predict patient survival using diverse molecular data (somatic copy-number alteration, DNA methylation and mRNA, microRNA and protein expression) from 953 samples of four cancer types from The Cancer Genome Atlas project. We find that incorporating molecular data with clinical variables yields statistically significantly improved predictions (FDR < 0.05) for three cancers but those quantitative gains were limited (2.2-23.9%). Additional analyses revealed little predictive power across tumor types except for one case. In clinically relevant genes, we identified 10,281 somatic alterations across 12 cancer types in 2,928 of 3,277 patients (89.4%), many of which would not be revealed in single-tumor analyses. Our study provides a starting point and resources, including an open-access model evaluation platform, for building reliable prognostic and therapeutic strategies that incorporate molecular data.

  19. Legality, Racialization, and Immigrants’ Experience of Ethnoracial Harassment in Russia

    PubMed Central

    Agadjanian, Victor; Menjívar, Cecilia; Zotova, Natalya

    2017-01-01

    Using data from a structured survey and in-depth interviews in three Russian cities, our study engages the scholarship on immigration legal regimes and racialization practices to examine the experiences of ethnoracially motivated harassment among working migrant women from Kyrgyzstan, Tajikistan, and Uzbekistan in Russia. The results of statistical analyses show that regularized legal status is associated with a significantly lower likelihood of experiencing harassment at the hands of law enforcement agents and other actors alike. Regardless of legal status, however, the analyses reveal significant variations across the three migrant groups, with members of the group that is seen as racially most distinct from the host population having the highest odds of reporting harassment. The analysis of in-depth interviews confirms and expands on these patterns, providing additional insights into the complex expressions and interplay of legality and race in migrants’ everyday experiences. The study findings are situated within the cross-national literature on migrants’ legal and ethnoracial exclusion in receiving contexts. PMID:29109593

  20. Analogy as a strategy for supporting complex problem solving under uncertainty.

    PubMed

    Chan, Joel; Paletz, Susannah B F; Schunn, Christian D

    2012-11-01

    Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.

  1. Uranium hydrogeochemical and stream sediment reconnaissance of the Philip Smith Mountains NTMS quadrangle, Alaska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1981-09-01

    Results of a hydrogeochemical and stream sediment reconnaissance of the Philip Smith Mountains NTMS quadrangle, Alaska are presented. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. In this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume. These data are, however, available on the magnetic tape. Appendices A and B describe the sample media and summarize the analytical results for each medium. The data were subsetted by one of the Los Alamos National Laboratory (LANL) sorting programs into groups of stream sediment and lake sediment samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1000000 scale maps of pertinent elements have been included in this report.

  2. Microplate-based filter paper assay to measure total cellulase activity.

    PubMed

    Xiao, Zhizhuang; Storms, Reginald; Tsang, Adrian

    2004-12-30

    The standard filter paper assay (FPA) published by the International Union of Pure and Applied Chemistry (IUPAC) is widely used to determine total cellulase activity. However, the IUPAC method is not suitable for the parallel analyses of large sample numbers. We describe here a microplate-based method for assaying large sample numbers. To achieve this, we reduced the enzymatic reaction volume to 60 microl from the 1.5 ml used in the IUPAC method. The modified 60-microl format FPA can be carried out in 96-well assay plates. Statistical analyses showed that the cellulase activities of commercial cellulases from Trichoderma reesei and Aspergillus species determined with our 60-microl format FPA were not significantly different from the activities measured with the standard FPA. Our results also indicate that the 60-microl format FPA is quantitative and highly reproducible. Moreover, the addition of excess beta-glucosidase increased the sensitivity of the assay by up to 60%. 2004 Wiley Periodicals, Inc.

  3. Elastic properties and apparent density of human edentulous maxilla and mandible

    PubMed Central

    Seong, Wook-Jin; Kim, Uk-Kyu; Swift, James Q.; Heo, Young-Cheul; Hodges, James S.; Ko, Ching-Chang

    2009-01-01

    The aim of this study was to determine whether elastic properties and apparent density of bone differ in different anatomical regions of the maxilla and mandible. Additional analyses assessed how elastic properties and apparent density were related. Four pairs of edentulous maxilla and mandibles were retrieved from fresh human cadavers. Bone samples from four anatomical regions (maxillary anterior, maxillary posterior, mandibular anterior, mandibular posterior) were obtained. Elastic modulus (EM) and hardness (H) were measured using the nano-indentation technique. Bone samples containing cortical and trabecular bone were used to measure composite apparent density (cAD) using Archimedes’ principle. Statistical analyses used repeated measures ANOVA and Pearson correlations. Bone physical properties differed between regions of the maxilla and mandible. Generally, mandible had higher physical property measurements than maxilla. EM and H were higher in posterior than in anterior regions; the reverse was true for cAD. Posterior maxillary cAD was significantly lower than that in the three other regions. PMID:19647417

  4. Elastic properties and apparent density of human edentulous maxilla and mandible.

    PubMed

    Seong, W-J; Kim, U-K; Swift, J Q; Heo, Y-C; Hodges, J S; Ko, C-C

    2009-10-01

    The aim of this study was to determine whether elastic properties and apparent density of bone differ in different anatomical regions of the maxilla and mandible. Additional analyses assessed how elastic properties and apparent density were related. Four pairs of edentulous maxilla and mandibles were retrieved from fresh human cadavers. Bone samples from four anatomical regions (maxillary anterior, maxillary posterior, mandibular anterior, mandibular posterior) were obtained. Elastic modulus (EM) and hardness (H) were measured using the nano-indentation technique. Bone samples containing cortical and trabecular bone were used to measure composite apparent density (cAD) using Archimedes' principle. Statistical analyses used repeated measures ANOVA and Pearson correlations. Bone physical properties differed between regions of the maxilla and mandible. Generally, mandible had higher physical property measurements than maxilla. EM and H were higher in posterior than in anterior regions; the reverse was true for cAD. Posterior maxillary cAD was significantly lower than that in the three other regions.

  5. [Chagas' disease as main cause of death in the southeastern region of Brazil: presence of contributory causes].

    PubMed

    Wanderley, D M; Litvoc, J

    1994-02-01

    Death certificates of all persons who died in the State of S.Paulo, Brazil, with Chagas' disease given as the principal cause of death were studied in order to analyse the additional information available on contributory causes. After a direct reading of the 1,308 death certificates, the contributory causes were identified and registered. They were mentioned in 261 (20%) of the certificates, 185 of them presenting one contributory cause and 75 presenting two. The six most frequent contributory causes were: "megas", embolism, chronic pulmonary disease, infections (other than Chagas' disease), arterial hypertension, and malnutrition. When the presence of contributory causes was analysed in two groups (persons younger than 50 years and those older than 50), a higher proportion was observed in the older group, and a distinct profile of causes was found for each group. No statistical association was observed between contributory causes and sex or place of residence.

  6. [Musculoskeletal disorders in piano students of a conservatory].

    PubMed

    Bruno, S; Lorusso, A; Caputo, F; Pranzo, S; L'Abbate, N

    2006-01-01

    A four-part questionnaire administered to piano students at the Conservatory "T. Schipa" of Lecce, southern Italy, was used to determine the prevalence of instrument-related problems. Among 121 responders, 48 (39.6%) were considered affected according to pre-established criteria. Univariate analyses showed statistical differences in mean age, number of hours spent playing per week, interval without breaks, lack of sport practice, and acceptance of the "No pain, no gain" criterion between students with music-related pain and unaffected pianists. No association with hand size was found in pianists with only upper limb disorders. The multivariate analyses performed by logistic regression confirmed the independent association of the risk factors age, lack of sport practice, and acceptance of the "No pain, no gain" criterion. In contrast to several studies, older students were more frequently affected, and no difference in the prevalence rate was found in females. Findings suggest a probable causal contribution of fixed postures, in addition to repetitive movements of the upper limbs, to the development of PRMDs in pianists.

  7. Time pressure and regulations on hospital-in-the-home (HITH) nurses: An on-the-road study.

    PubMed

    Cœugnet, Stéphanie; Forrierre, Justine; Naveteur, Janick; Dubreucq, Catherine; Anceaux, Françoise

    2016-05-01

    This study investigated both causal factors and consequences of time pressure in hospital-in-the-home (HITH) nurses. These nurses may experience additional stress from the time pressure they encounter while driving to patients' homes, which may result in greater risk taking while driving. From observation in natural settings, data related to the nurses' driving behaviours and emotions were collected and analysed statistically; semi-directed interviews with the nurses were analysed qualitatively. The results suggest that objective time constraints alone do not necessarily elicit subjective time pressure. The challenges and uncertainty associated with healthcare and the driving period contribute to the emergence of this time pressure, which has a negative impact on both the nurses' driving and their emotions. Finally, the study focuses on anticipated and in situ regulations. These findings provide guidelines for organizational and technical solutions allowing the reduction of time pressure among HITH nurses. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Legality, Racialization, and Immigrants' Experience of Ethnoracial Harassment in Russia.

    PubMed

    Agadjanian, Victor; Menjívar, Cecilia; Zotova, Natalya

    2017-11-01

    Using data from a structured survey and in-depth interviews in three Russian cities, our study engages the scholarship on immigration legal regimes and racialization practices to examine the experiences of ethnoracially motivated harassment among working migrant women from Kyrgyzstan, Tajikistan, and Uzbekistan in Russia. The results of statistical analyses show that regularized legal status is associated with a significantly lower likelihood of experiencing harassment at the hands of law enforcement agents and other actors alike. Regardless of legal status, however, the analyses reveal significant variations across the three migrant groups, with members of the group that is seen as racially most distinct from the host population having the highest odds of reporting harassment. The analysis of in-depth interviews confirms and expands on these patterns, providing additional insights into the complex expressions and interplay of legality and race in migrants' everyday experiences. The study findings are situated within the cross-national literature on migrants' legal and ethnoracial exclusion in receiving contexts.

  9. Identifying and characterizing hepatitis C virus hotspots in Massachusetts: a spatial epidemiological approach.

    PubMed

    Stopka, Thomas J; Goulart, Michael A; Meyers, David J; Hutcheson, Marga; Barton, Kerri; Onofrey, Shauna; Church, Daniel; Donahue, Ashley; Chui, Kenneth K H

    2017-04-20

    Hepatitis C virus (HCV) infections have increased during the past decade but little is known about geographic clustering patterns. We used a unique analytical approach, combining geographic information systems (GIS), spatial epidemiology, and statistical modeling to identify and characterize HCV hotspots, statistically significant clusters of census tracts with elevated HCV counts and rates. We compiled sociodemographic and HCV surveillance data (n = 99,780 cases) for Massachusetts census tracts (n = 1464) from 2002 to 2013. We used a five-step spatial epidemiological approach, calculating incremental spatial autocorrelations and Getis-Ord Gi* statistics to identify clusters. We conducted logistic regression analyses to determine factors associated with the HCV hotspots. We identified nine HCV clusters, with the largest in Boston, New Bedford/Fall River, Worcester, and Springfield (p < 0.05). In multivariable analyses, we found that HCV hotspots were independently and positively associated with the percent of the population that was Hispanic (adjusted odds ratio [AOR]: 1.07; 95% confidence interval [CI]: 1.04, 1.09) and the percent of households receiving food stamps (AOR: 1.83; 95% CI: 1.22, 2.74). HCV hotspots were independently and negatively associated with the percent of the population that were high school graduates or higher (AOR: 0.91; 95% CI: 0.89, 0.93) and the percent of the population in the "other" race/ethnicity category (AOR: 0.88; 95% CI: 0.85, 0.91). We identified locations where HCV clusters were a concern, and where enhanced HCV prevention, treatment, and care can help combat the HCV epidemic in Massachusetts. GIS, spatial epidemiological and statistical analyses provided a rigorous approach to identify hotspot clusters of disease, which can inform public health policy and intervention targeting. 
Further studies that incorporate spatiotemporal cluster analyses, Bayesian spatial and geostatistical models, spatially weighted regression analyses, and assessment of associations between HCV clustering and the built environment are needed to expand upon our combined spatial epidemiological and statistical methods.
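A minimal sketch of the Getis-Ord Gi* statistic behind the hotspot step, using a hypothetical one-dimensional run of tracts and binary weights (a real analysis would typically rely on a GIS or a dedicated spatial statistics package):

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-scores for each location.
    x: values; w: (n, n) spatial weights with each w[i, i] included (Gi*)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    wi = w.sum(axis=1)                 # sum of weights per location
    wi2 = (w ** 2).sum(axis=1)
    num = w @ x - xbar * wi
    den = s * np.sqrt((n * wi2 - wi ** 2) / (n - 1))
    return num / den

# Hypothetical case counts on a line of tracts; each tract's neighbourhood
# is itself plus its immediate neighbours (binary weights).
counts = np.array([1.0, 2.0, 1.0, 9.0, 10.0, 9.0, 1.0, 2.0, 1.0])
n = counts.size
w = np.eye(n)
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

gi = getis_ord_gi_star(counts, w)
# The middle tracts form a hot spot: their Gi* z-scores are large positive.
```

Gi* z-scores above roughly 1.96 flag statistically significant local concentrations of high values (hot spots, at alpha = 0.05 before any multiple-testing adjustment); strongly negative scores flag cold spots.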

  10. Power considerations for λ inflation factor in meta-analyses of genome-wide association studies.

    PubMed

    Georgiopoulos, Georgios; Evangelou, Evangelos

    2016-05-19

    The genomic control (GC) approach is extensively used to effectively control false positive signals due to population stratification in genome-wide association studies (GWAS). However, GC affects the statistical power of GWAS. The loss of power depends on the magnitude of the inflation factor (λ) that is used for GC. We simulated meta-analyses of different GWAS. Minor allele frequency (MAF) ranged from 0·001 to 0·5 and λ was sampled from two scenarios: (i) random scenario (empirically-derived distribution of real λ values) and (ii) selected scenario from simulation parameter modification. Adjustment for λ was considered under single correction (within study corrected standard errors) and double correction (additional λ corrected summary estimate). MAF was a pivotal determinant of observed power. In random λ scenario, double correction induced a symmetric power reduction in comparison to single correction. For MAF 1·2 and MAF >5%. Our results provide a quick but detailed index for power considerations of future meta-analyses of GWAS that enables a more flexible design from early steps based on the number of studies accumulated in different groups and the λ values observed in the single studies.
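The GC correction itself is compact: λ is the median of the observed chi-square statistics divided by the median of the χ²(1) distribution (about 0.4549), and single correction divides each squared statistic by λ (equivalently, inflates standard errors by √λ); double correction additionally applies a second λ to the meta-analytic estimate. A generic sketch of the single-correction step, with simulated rather than real GWAS statistics:

```python
import numpy as np
from scipy import stats

def genomic_control(z_scores):
    """Estimate the GC inflation factor lambda and return corrected z-scores.
    lambda = median observed chi-square / median of chi2(1) (~0.4549)."""
    z_scores = np.asarray(z_scores, dtype=float)
    lam = np.median(z_scores ** 2) / stats.chi2.ppf(0.5, df=1)
    lam = max(lam, 1.0)               # by convention, never deflate statistics
    return lam, z_scores / np.sqrt(lam)

# Simulated null z-scores with stratification-like inflation (illustrative).
rng = np.random.default_rng(0)
z = rng.standard_normal(50_000) * np.sqrt(1.3)   # true inflation factor 1.3
lam, z_corr = genomic_control(z)                 # lam recovers ~1.3
```

After correction the residual inflation is essentially 1.0, which is exactly the power cost the record above quantifies: dividing every statistic by λ shrinks true signals along with the stratification artifact.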

  11. Does educational status impact adult mortality in Denmark? A twin approach.

    PubMed

    Madsen, Mia; Andersen, Anne-Marie Nybo; Christensen, Kaare; Andersen, Per Kragh; Osler, Merete

    2010-07-15

    To disentangle an independent effect of educational status on mortality risk from direct and indirect selection mechanisms, the authors used a discordant twin pair design, which allowed them to isolate the effect of education by means of adjustment for genetic and environmental confounding per design. The study is based on data from the Danish Twin Registry and Statistics Denmark. Using Cox regression, they estimated hazard ratios for mortality according to the highest attained education among 5,260 monozygotic and 11,088 dizygotic same-sex twin pairs born during 1921-1950 and followed during 1980-2008. Both standard cohort and intrapair analyses were conducted separately for zygosity, gender, and birth cohort. Educational differences in mortality were demonstrated in the standard cohort analyses but attenuated in the intrapair analyses in all subgroups but men born during 1921-1935, and no effect modification by zygosity was observed. Hence, the results are most compatible with an effect of early family environment in explaining the educational inequality in mortality. However, large educational differences were still reflected in mortality risk differences within twin pairs, thus supporting some degree of independent effect of education. In addition, the effect of education may be more pronounced in older cohorts of Danish men.

  12. Differentiation of women with premenstrual dysphoric disorder, recurrent brief depression, and healthy controls by daily mood rating dynamics.

    PubMed

    Pincus, Steven M; Schmidt, Peter J; Palladino-Negro, Paula; Rubinow, David R

    2008-04-01

    Enhanced statistical characterization of mood-rating data holds the potential to more precisely classify and sub-classify recurrent mood disorders like premenstrual dysphoric disorder (PMDD) and recurrent brief depressive disorder (RBD). We applied several complementary statistical methods to differentiate mood rating dynamics among women with PMDD, RBD, and normal controls (NC). We compared three subgroups of women: NC (n=8); PMDD (n=15); and RBD (n=9) on the basis of daily self-ratings of sadness, with study lengths between 50 and 120 days. We analyzed mean levels; overall variability, SD; sequential irregularity, approximate entropy (ApEn); and a quantification of the extent of brief and staccato dynamics, denoted 'Spikiness'. For each of SD, irregularity (ApEn), and Spikiness, we showed highly significant subgroup differences (ANOVA P < 0.001 for each statistic); additionally, many paired subgroup comparisons showed highly significant differences. In contrast, mean levels were indistinct among the subgroups. For SD, normal controls had much smaller levels than the other subgroups, with RBD intermediate. ApEn showed PMDD to be significantly more regular than the other subgroups. Spikiness showed NC and RBD data sets to be much more staccato than their PMDD counterparts, and appears to suitably characterize the defining feature of RBD dynamics. Compound criteria based on these statistical measures discriminated diagnostic subgroups with high sensitivity and specificity. Taken together, the statistical suite provides well-defined specifications of each subgroup. This can facilitate accurate diagnosis, and augment the prediction and evaluation of response to treatment. The statistical methodologies have broad and direct applicability to behavioral studies for many psychiatric disorders, and indeed to similar analyses of associated biological signals across multiple axes.
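Approximate entropy (ApEn), the irregularity statistic used here, can be computed directly. The sketch below follows the textbook definition with the common choices m = 2 and tolerance r = 0.2·SD; it is an illustration, not the authors' implementation:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series.
    Lower values indicate more regular (predictable) dynamics."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()             # a common default tolerance
    n = x.size

    def phi(m):
        # All length-m template vectors, compared under the max norm;
        # self-matches are included, per the original definition.
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# A strictly periodic series is more regular (lower ApEn) than white noise.
t = np.arange(300)
periodic = np.sin(2 * np.pi * t / 20)
rng = np.random.default_rng(1)
noisy = rng.standard_normal(300)
ap_periodic = approx_entropy(periodic)
ap_noisy = approx_entropy(noisy)
```

The periodic series scores far lower (more regular) than the noise, mirroring the finding that PMDD ratings were significantly more regular than those of the other subgroups.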

  13. Digital literacy of youth and young adults with intellectual disability predicted by support needs and social maturity.

    PubMed

    Seok, Soonhwa; DaCosta, Boaventura

    2017-01-01

    This study investigated relationships between digital propensity and support needs as well as predictors of digital propensity in the context of support intensity, age, gender, and social maturity. A total of 118 special education teachers rated the support intensity, digital propensity, and social maturity of 352 students with intellectual disability. Leveraging the Digital Propensity Index, Supports Intensity Scale, and the Social Maturity Scale, descriptive statistics, correlations, multiple regressions, and regression analyses were employed. The findings revealed significant relationships between digital propensity and support needs. In addition, significant predictors of digital propensity were found with regard to support intensity, age, gender, and social maturity.

  14. Current Status and Challenges of Atmospheric Data Assimilation

    NASA Astrophysics Data System (ADS)

    Atlas, R. M.; Gelaro, R.

    2016-12-01

    The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure, and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.

  15. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the July 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, Tau; Holley, D.

    2010-01-01

    The following report gives the statistical findings of the July 2010 TMSL Bar results. Procedures: Data is pre-existing and was given to the Evaluator by email from the Registrar and Dean. Statistical analyses were run using SPSS 17 to address the following research questions: 1. What are the statistical descriptors of the July 2010 overall TMSL…

  16. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  17. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  18. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
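    The Monte Carlo approach described above can be illustrated with a toy sketch: sample many perturbed motor pairs and derive a statistical imbalance envelope. All numbers here (nominal thrust, perturbation size, default variable count) are invented placeholders, not values from the legacy ballistics codes or the actual boosters.

```python
import random
import statistics

NOMINAL_THRUST = 3.6e6  # illustrative placeholder, not an actual booster figure

def sample_motor_thrust(rng, n_vars=33, sigma=0.002):
    """One motor's thrust with n_vars independent fractional perturbations,
    standing in for the ballistic variables treated statistically."""
    factor = 1.0 + sum(rng.gauss(0.0, sigma) for _ in range(n_vars))
    return NOMINAL_THRUST * factor

def imbalance_envelope(n_pairs=1000, seed=7):
    """Thrust imbalance (motor A minus motor B) over n_pairs sampled pairs;
    returns an approximate +/- 3-sigma envelope about the mean imbalance."""
    rng = random.Random(seed)
    diffs = [sample_motor_thrust(rng) - sample_motor_thrust(rng)
             for _ in range(n_pairs)]
    mean, sd = statistics.mean(diffs), statistics.stdev(diffs)
    return mean - 3 * sd, mean + 3 * sd

low, high = imbalance_envelope()
```

    In the real analyses the perturbations feed internal ballistics codes rather than a single multiplicative factor, but the envelope construction over 1000 sampled pairs follows the same pattern.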

  19. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
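    The first two detection steps above (linear relationships via correlation coefficients, monotonic relationships via rank correlation) can be sketched in plain Python. This minimal version ignores tied ranks and is not the authors' implementation; the cubic toy data illustrate why the rank-based test detects a monotonic but nonlinear relationship more cleanly.

```python
def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """1-based ranks of v (no tie handling in this sketch)."""
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# monotonic but nonlinear: rank correlation is exactly 1, Pearson is not
x = list(range(1, 21))
y = [xi ** 3 for xi in x]
```
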

  20. Harmonizing health information systems with information systems in other social and economic sectors.

    PubMed Central

    Macfarlane, Sarah B.

    2005-01-01

    Efforts to strengthen health information systems in low- and middle-income countries should include forging links with systems in other social and economic sectors. Governments are seeking comprehensive socioeconomic data on the basis of which to implement strategies for poverty reduction and to monitor achievement of the Millennium Development Goals. The health sector is looking to take action on the social factors that determine health outcomes. But there are duplications and inconsistencies between sectors in the collection, reporting, storage and analysis of socioeconomic data. National offices of statistics give higher priority to collection and analysis of economic than to social statistics. The Report of the Commission for Africa has estimated that an additional US$ 60 million a year is needed to improve systems to collect and analyse statistics in Africa. Some donors recognize that such systems have been weakened by numerous international demands for indicators, and have pledged support for national initiatives to strengthen statistical systems, as well as sectoral information systems such as those in health and education. Many governments are working to coordinate information systems to monitor and evaluate poverty reduction strategies. There is therefore an opportunity for the health sector to collaborate with other sectors to lever international resources to rationalize definition and measurement of indicators common to several sectors; streamline the content, frequency and timing of household surveys; and harmonize national and subnational databases that store socioeconomic data. Without long-term commitment to improve training and build career structures for statisticians and information technicians working in the health and other sectors, improvements in information and statistical systems cannot be sustained. PMID:16184278

  1. An Easy Tool to Predict Survival in Patients Receiving Radiation Therapy for Painful Bone Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westhoff, Paulien G., E-mail: p.g.westhoff@umcutrecht.nl; Graeff, Alexander de; Monninkhof, Evelyn M.

    2014-11-15

    Purpose: Patients with bone metastases have a widely varying survival. A reliable estimation of survival is needed for appropriate treatment strategies. Our goal was to assess the value of simple prognostic factors, namely, patient and tumor characteristics, Karnofsky performance status (KPS), and patient-reported scores of pain and quality of life, to predict survival in patients with painful bone metastases. Methods and Materials: In the Dutch Bone Metastasis Study, 1157 patients were treated with radiation therapy for painful bone metastases. At randomization, physicians determined the KPS; patients rated general health on a visual analogue scale (VAS-gh), valuation of life on a verbal rating scale (VRS-vl) and pain intensity. To assess the predictive value of the variables, we used multivariate Cox proportional hazard analyses and C-statistics for discriminative value. Of the final model, calibration was assessed. External validation was performed on a dataset of 934 patients who were treated with radiation therapy for vertebral metastases. Results: Patients had mainly breast (39%), prostate (23%), or lung cancer (25%). After a maximum of 142 weeks' follow-up, 74% of patients had died. The best predictive model included sex, primary tumor, visceral metastases, KPS, VAS-gh, and VRS-vl (C-statistic = 0.72, 95% CI = 0.70-0.74). A reduced model, with only KPS and primary tumor, showed comparable discriminative capacity (C-statistic = 0.71, 95% CI = 0.69-0.72). External validation showed a C-statistic of 0.72 (95% CI = 0.70-0.73). Calibration of the derivation and the validation dataset showed underestimation of survival. Conclusion: In predicting survival in patients with painful bone metastases, KPS combined with primary tumor was comparable to a more complex model. Considering the amount of variables in complex models and the additional burden on patients, the simple model is preferred for daily use. In addition, a risk table for survival is provided.
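    The C-statistic used above to quantify discriminative value can be computed for right-censored survival data as Harrell's concordance index. The sketch below uses invented toy data and a naive O(n²) pair scan, not the study's actual model or dataset.

```python
def c_statistic(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable if i's time is shorter and i's event
    was observed; it is concordant if i also has the higher risk score.
    Ties in risk score count as half-concordant."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# toy data where risk scores perfectly order survival times -> C = 1.0
times = [2, 5, 8, 11]      # weeks of follow-up (invented)
events = [1, 1, 0, 1]      # 1 = died, 0 = censored
scores = [0.9, 0.7, 0.4, 0.2]
c = c_statistic(times, events, scores)
```

    A value of 0.5 indicates no discrimination; the study's models reached about 0.71-0.72.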

  2. Coordinate based random effect size meta-analysis of neuroimaging studies.

    PubMed

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate-based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate-based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
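    The FCDR described above builds on the standard false discovery rate. A minimal sketch of the underlying Benjamini-Hochberg step-up procedure (the generic FDR control, not ClusterZ itself) is:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: sort p-values, find the
    largest rank k with p_(k) <= k * alpha / m, and declare the k
    smallest p-values significant. Controls the FDR at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * alpha / m:
            threshold_rank = rank
    significant = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[idx] = True
    return significant

flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60])
```

    Cluster-wise application, as in ClusterZ, treats each candidate cluster's meta-analytic p-value as one entry in this list.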

  3. Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials

    PubMed Central

    Dwan, Kerry; Altman, Douglas G.; Clarke, Mike; Gamble, Carrol; Higgins, Julian P. T.; Sterne, Jonathan A. C.; Williamson, Paula R.; Kirkham, Jamie J.

    2014-01-01

    Background Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs). Methods and Findings A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively. 
Conclusions Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies. Please see later in the article for the Editors' Summary PMID:24959719

  4. Using the U.S. Geological Survey National Water Quality Laboratory LT-MDL to Evaluate and Analyze Data

    USGS Publications Warehouse

    Bonn, Bernadine A.

    2008-01-01

    A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples are included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.
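    Kaplan-Meier estimation, one of the censored-data methods mentioned above, can be sketched without substituting values for censored observations. This is a bare product-limit estimator on invented toy data, not one of the document's 19 worked examples.

```python
def kaplan_meier(times, observed):
    """Product-limit (Kaplan-Meier) estimator. 'observed' is 1 for an
    observed value/event and 0 for a censored one. Censored points reduce
    the risk set but never the survival estimate directly, so no
    substitution for censored values is needed. Returns (t, S(t)) pairs."""
    data = sorted(zip(times, observed))
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        at_risk = sum(1 for tt, _ in data if tt >= t)   # still in the risk set
        events = sum(1 for tt, e in data if tt == t and e)
        if events:
            s *= 1.0 - events / at_risk
            curve.append((t, s))
    return curve

# events at t=1, 2, 4; censored at t=3 and t=5
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```
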

  5. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.

    2017-01-01

    Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…

  6. Using DEWIS and R for Multi-Staged Statistics e-Assessments

    ERIC Educational Resources Information Center

    Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.

    2016-01-01

    We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…

  7. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  8. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group.

    PubMed

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T; Pereira, Carol; Rosenkranz, Susan L; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu Jeanne; Wang, Rui; Lok, Judith; Evans, Scott R

    2017-03-15

    The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  9. P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.

    PubMed

    Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D

    2017-11-01

    P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access and the capability to analyze multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 American Association for Cancer Research.

  10. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system

    PubMed Central

    Glass-Kaastra, Shiona K.; Pearl, David L.; Reid-Smith, Richard J.; McEwen, Beverly; Slavic, Durda; McEwen, Scott A.; Fairles, Jim

    2014-01-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection. PMID:24688133

  11. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system.

    PubMed

    Glass-Kaastra, Shiona K; Pearl, David L; Reid-Smith, Richard J; McEwen, Beverly; Slavic, Durda; McEwen, Scott A; Fairles, Jim

    2014-04-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection.

  12. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
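    Fisher's pitfall can be reproduced with a short simulation: two independent numerators divided by a shared denominator appear correlated even though the raw variables are not. The distributions and seed here are arbitrary illustrations.

```python
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
n = 5000
# three mutually independent variables
a = [rng.gauss(10, 1) for _ in range(n)]
b = [rng.gauss(10, 1) for _ in range(n)]
z = [rng.gauss(10, 1) for _ in range(n)]

r_raw = pearson(a, b)  # near zero: a and b are unrelated
# ratios sharing the denominator z look strongly correlated anyway
r_ratio = pearson([ai / zi for ai, zi in zip(a, z)],
                  [bi / zi for bi, zi in zip(b, z)])
```

    With equal coefficients of variation, the spurious ratio correlation is expected to be about 0.5, which is why analysing A/Z against B/Z (rather than adjusting for Z in a model such as ANCOVA) can manufacture an apparent relationship.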

  13. Design and Execution of make-like, distributed Analyses based on Spotify’s Pipelining Package Luigi

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, B.; Fischer, R.; Rieger, M.

    2017-10-01

    In high-energy particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually, which is time-consuming and often leads to undocumented relations between particular workloads. We present a generic analysis design pattern that copes with the sophisticated demands of end-to-end HEP analyses and provides a make-like execution system. It is based on the open-source pipelining package Luigi, which was developed at Spotify and enables the definition of arbitrary workloads, so-called Tasks, and the dependencies between them in a lightweight and scalable structure. Further features are multi-user support, automated dependency resolution and error handling, central scheduling, and status visualization on the web. In addition to already built-in features for remote jobs and file systems like Hadoop and HDFS, we added support for WLCG infrastructure such as LSF and CREAM job submission, as well as remote file access through the Grid File Access Library. Furthermore, we implemented automated resubmission functionality, software sandboxing, and a command line interface with auto-completion for a convenient working environment. For the implementation of a $t\bar{t}H$ cross section measurement, we created a generic Python interface that provides programmatic access to all external information such as datasets, physics processes, statistical models, and additional files and values. In summary, the setup enables the execution of the entire analysis in a parallelized and distributed fashion with a single command.
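    The make-like execution idea above (run a task only after its requirements, and skip work that is already complete) can be sketched in a few lines of plain Python. This is a conceptual stand-in, not Luigi's actual API or the authors' framework; the task names are invented.

```python
class Task:
    """Minimal stand-in for a Luigi-style task: it declares what it
    requires, knows whether it has already run, and can run itself."""
    def __init__(self, name, requires=()):
        self.name = name
        self.requires = tuple(requires)
        self.done = False

    def run(self, log):
        self.done = True
        log.append(self.name)

def execute(task, log):
    """Make-like execution: resolve dependencies depth-first, run each
    task after its requirements, and skip tasks already completed."""
    if task.done:
        return
    for dep in task.requires:
        execute(dep, log)
    task.run(log)

# toy analysis graph: 'selection' is shared by two downstream tasks
selection = Task("selection")
histograms = Task("histograms", requires=[selection])
fit = Task("fit", requires=[selection, histograms])

log = []
execute(fit, log)  # 'selection' runs exactly once despite two dependents
```

    Luigi adds the pieces this sketch omits: persistent output targets to decide completeness, a central scheduler, and distributed workers.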

  14. Soil functional diversity analysis of a bauxite-mined restoration chronosequence.

    PubMed

    Lewis, Dawn E; White, John R; Wafula, Denis; Athar, Rana; Dickerson, Tamar; Williams, Henry N; Chauhan, Ashvini

    2010-05-01

    Soil microorganisms are sensitive to environmental perturbations such that changes in microbial community structure and function can provide early signs of anthropogenic disturbances and even predict restoration success. We evaluated the bacterial functional diversity of un-mined and three chronosequence sites at various stages of rehabilitation (0, 10, and 20 years old) located in the Mocho Mountains of Jamaica. Samples were collected during the dry and wet seasons and analyzed for metal concentrations, microbial biomass carbon, bacterial numbers, and functional responses of soil microbiota using community-level physiological profile (CLPP) assays. Regardless of the season, un-mined soils consisted of higher microbial biomass and numbers than any of the rehabilitated sites. Additionally, the number and rate of substrates utilized and substrate evenness (the distribution of color development between the substrates) were significantly greater in the un-mined soils with carbohydrates being preferentially utilized than amino acids, polymers, carboxylic acids, and esters. To some extent, functional responses varied with the seasons but the least physiological activity was shown by the site rehabilitated in 1987 indicating long-term perturbation to this ecosystem. Small subunit ribosomal DNA (SSUrDNA)-denaturing gradient-gel electrophoresis analyses on the microbiota collected from the most preferred CLPP substrates followed by taxonomic analyses showed Proteobacteria, specifically the gamma-proteobacteria, as the most functionally active phyla, indicating a propensity of this phyla to out-compete other groups under the prevailing conditions. Additionally, multivariate statistical analyses, Shannon's diversity, and evenness indices, principal component analysis, biplot and un-weighted-pair-group method with arithmetic averages dendrograms further confirmed that un-mined sites were distinctly different from the rehabilitated soils.
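    The Shannon diversity and evenness indices used in the multivariate analyses above have simple closed forms; here is a sketch over hypothetical substrate-utilization counts (the counts are invented, not data from the study).

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum p_i * ln(p_i) over nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def evenness(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the number of
    substrates actually utilized. J' = 1 means perfectly even use."""
    s = sum(1 for c in counts if c)
    return shannon_diversity(counts) / math.log(s)

even_use = evenness([5, 5, 5, 5])      # identical use of 4 substrates -> 1.0
single_use = shannon_diversity([10])   # a single substrate -> H' = 0
```
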

  15. Higher body mass index is associated with reduced posterior default mode connectivity in older adults.

    PubMed

    Beyer, Frauke; Kharabian Masouleh, Sharzhad; Huntenburg, Julia M; Lampe, Leonie; Luck, Tobias; Riedel-Heller, Steffi G; Loeffler, Markus; Schroeter, Matthias L; Stumvoll, Michael; Villringer, Arno; Witte, A Veronica

    2017-04-11

    Obesity is a complex neurobehavioral disorder that has been linked to changes in brain structure and function. However, the impact of obesity on functional connectivity and cognition in aging humans is largely unknown. Therefore, the association of body mass index (BMI), resting-state network connectivity, and cognitive performance in 712 healthy, well-characterized older adults of the Leipzig Research Center for Civilization Diseases (LIFE) cohort (60-80 years old, mean BMI 27.6 kg/m² ± 4.2 SD, main sample: n = 521, replication sample: n = 191) was determined. Statistical analyses included a multivariate model selection approach followed by univariate analyses to adjust for possible confounders. Results showed that a higher BMI was significantly associated with lower default mode functional connectivity in the posterior cingulate cortex and precuneus. The effect remained stable after controlling for age, sex, head motion, registration quality, cardiovascular, and genetic factors as well as in replication analyses. Lower functional connectivity in BMI-associated areas correlated with worse executive function. In addition, higher BMI correlated with stronger head motion. Using 3T neuroimaging in a large cohort of healthy older adults, independent negative associations of obesity and functional connectivity in the posterior default mode network were observed. In addition, a subtle link between lower resting-state connectivity in BMI-associated regions and cognitive function was found. The findings might indicate that obesity is associated with patterns of decreased default mode connectivity similar to those seen in populations at risk for Alzheimer's disease. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.

  16. Ultrasound analysis of mental artery flow in elderly patients: a case–control study

    PubMed Central

    Baladi, Marina G; Tucunduva Neto, Raul R C M; Aoki, Eduardo M; Arita, Emiko S; Freitas, Claudio F

    2015-01-01

    Objectives: Mental artery flow decreases with age and may have an aetiological role in alveolar ridge atrophy. The aim of this study was to identify factors associated with alterations of mental artery flow, assessed by ultrasonography. Methods: This case–control study was conducted on elderly patients (aged above 60 years) at the beginning of dental treatment. Intraoral B-mode Doppler ultrasonography was used to assess mental artery flow. The cases were defined as patients with a weak/absent ultrasound signal, whereas the controls presented a strong ultrasound signal. Demographics and radiographic findings (low bone mineral density on dual-energy X-ray absorptiometry and mandibular cortical index on panoramic radiographs) were analysed as risk factors for weak/absent ultrasound signal and were calculated as adjusted odds ratios (AORs) with 95% confidence intervals (CIs) using conditional logistic regression. In addition, the Student's t-test was used to compare the mean alveolar bone height of the analysed groups. A p-value <0.05 was considered statistically significant. Results: A total of 30 ultrasound examinations (12 cases and 18 controls) were analysed. A weak/absent mental artery pulse strength was significantly associated with edentulism (AOR = 3.67; 95% CI = 0.86–15.63; p = 0.046). In addition, there was a significant difference in alveolar bone height between edentulous cases and controls (p = 0.036). Conclusions: Within the limitations of this study, the present results indicate that edentulism is associated with diminished mental artery flow, which, in turn, affects alveolar bone height. PMID:26205777
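
The case–control comparison above reports adjusted odds ratios with 95% confidence intervals. As a rough illustration of how an odds ratio and a Woolf-type 95% CI are computed from a 2x2 table (the counts below are hypothetical, and this crude calculation is not the conditional logistic regression used in the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    (Crude, unmatched; not a conditional logistic regression.)"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: edentulism vs. weak/absent mental artery signal.
or_, lo, hi = odds_ratio_ci(a=8, b=4, c=6, d=12)
```

With such small cell counts the interval is wide, which mirrors why matched designs and conditional models are preferred in small case–control studies.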

  17. Lamb survival analysis from birth to weaning in Iranian Kermani sheep.

    PubMed

    Barazandeh, Arsalan; Moghbeli, Sadrollah Molaei; Vatankhah, Mahmood; Hossein-Zadeh, Navid Ghavi

    2012-04-01

    Survival records from 1,763 Kermani lambs born between 1996 and 2004 from 294 ewes and 81 rams were used to determine genetic and non-genetic factors affecting lamb survival. Traits included were lamb survival across five periods from birth to 7, 14, 56, 70, and 90 days of age. Traits were analyzed under Weibull proportional hazard sire models. Several binary analyses were also conducted using animal models. Statistical models included the fixed class effects of sex of lamb, month and year of birth, a covariate effect of birth weight, and random genetic effects of both sire (in survival analyses) and animal (in binary analyses). The average survival to 90 days of age was 94.8%. Hazard rates ranged from 1.00 (birth to 90 days of age) to 1.73 (birth to 7 days of age) between the two sexes indicating that male lambs were at higher risk of mortality than females (P < 0.01). This study also revealed a curvilinear relationship between lamb survival and lamb birth weight, suggesting that viability and birth weight could be considered simultaneously in the selection programs to obtain optimal birth weight in Kermani lambs. Estimates of heritabilities from survival analyses were medium and ranged from 0.23 to 0.29. In addition, heritability estimates obtained from binary analyses were low and varied from 0.04 to 0.09. The results of this study suggest that progress in survival traits could be possible through managerial strategies and genetic selection.
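
The sex effect reported above is expressed as a hazard ratio. A minimal simulation sketch of that idea, assuming a constant daily hazard (a simple exponential survival model, not the Weibull proportional hazard sire model of the study) and hypothetical hazard values:

```python
import random

random.seed(1)

def simulate_deaths(n, daily_hazard, days=90):
    """Count deaths by `days` under a constant daily hazard, i.e. a simple
    exponential survival model (not the Weibull sire model of the study)."""
    deaths = 0
    for _ in range(n):
        if random.expovariate(daily_hazard) <= days:
            deaths += 1
    return deaths

# Hypothetical hazards chosen so that males are at roughly 1.7 times the
# risk of females, in line with the hazard-rate range reported above.
n = 5000
h_female = 0.0004
h_male = 1.7 * h_female

d_female = simulate_deaths(n, h_female)
d_male = simulate_deaths(n, h_male)

# Crude hazard-ratio estimate: ratio of death counts, since the
# animal-days-at-risk terms (about n * 90 for each sex) cancel.
hazard_ratio = d_male / d_female
survival_female = 1 - d_female / n   # compare with the 94.8% reported above
```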

  18. Effects of dietary lipid composition and inulin-type fructans on mineral bioavailability in growing rats.

    PubMed

    Lobo, Alexandre Rodrigues; Filho, Jorge Mancini; Alvares, Eliana Parisi; Cocato, Maria Lucia; Colli, Célia

    2009-02-01

    This study reports the effects of feeding with a combination of inulin-type fructans (ITF) and fish oil (FO) on mineral absorption and bioavailability as part of a semipurified diet offered to rats. Male Wistar rats (n = 24) were fed a 15% lipid diet (soybean oil [SO] or a 1:0.3 fish:soybean oil mixture [FSO]) and diets containing the same sources of lipids supplemented with 10% ITF (Raftilose Synergy 1) ad libitum for 15 d. Feces and urine were collected for mineral analyses during the last 5 d of the test period. Fatty acid composition was determined in liver and cecal mucosa homogenates. Liver and bone mineral analyses were performed by atomic absorption spectrophotometry. Bone biomechanical analyses were evaluated by a 3-point bending test. Compared with the controls, ITF-fed rats had enlarged ceca and a significant decrease in cecal content pH (P < 0.001). The apparent mineral absorption was improved in these rats, and this effect was enhanced by dietary combination with FO for all minerals except for magnesium. Addition of ITF to the diet resulted in higher bone mineral content (calcium and zinc) and bone strength, but increased bone mineral content was only statistically significant in FO-fed animals. A decrease in liver iron stores (P = 0.015) was observed in rats fed FO, considering that ITF consumption returned to levels comparable to the SO control group. These findings confirm the positive influence of ITF on mineral bioavailability, which was potentiated by addition of FO to the diet.

  19. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
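
The core of the additivity model, a mass-weighted linear sum of per-fraction properties, can be sketched as follows (the fractions and property values are invented for illustration and are not the sediment data from the study):

```python
# Grain-size fractions of a hypothetical composite sediment: mass fraction
# and an illustrative per-fraction reaction property (e.g. reactive site
# concentration per gram). All numbers are made up for this sketch.
fractions = [
    # (label, mass_fraction, per-fraction property)
    ("<0.5 mm", 0.45, 1.8),
    ("0.5-2 mm", 0.35, 0.9),
    ("2-8 mm gravel", 0.20, 0.3),  # often ignored, but non-negligible here
]

# Additivity model: the composite-sediment property is the mass-weighted
# linear sum of the per-fraction properties.
composite = sum(w * prop for _, w, prop in fractions)

# Relative contribution of each fraction to the composite value.
contributions = {label: w * prop / composite for label, w, prop in fractions}
```

Even with a low per-gram property, the gravel fraction contributes a non-zero share of the composite value, which is the study's point about not discarding it.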

  20. Not Just a Sum? Identifying Different Types of Interplay between Constituents in Combined Interventions

    PubMed Central

    Van Deun, Katrijn; Thorrez, Lieven; van den Berg, Robert A.; Smilde, Age K.; Van Mechelen, Iven

    2015-01-01

    Motivation Experiments in which the effect of combined manipulations is compared with the effects of their pure constituents have received a great deal of attention. Examples include the study of combination therapies and the comparison of double and single knockout model organisms. Often the effect of the combined manipulation is not a mere addition of the effects of its constituents, with quite different forms of interplay between the constituents being possible. Yet, a well-formalized taxonomy of possible forms of interplay is lacking, let alone a statistical methodology to test for their presence in empirical data. Results Starting from a taxonomy of a broad range of forms of interplay between constituents of a combined manipulation, we propose a sound statistical hypothesis testing framework to test for the presence of each particular form of interplay. We illustrate the framework with analyses of public gene expression data on the combined treatment of dendritic cells with curdlan and GM-CSF and show that these lead to valuable insights into the mode of action of the constituent treatments and their combination. Availability and Implementation R code implementing the statistical testing procedure for microarray gene expression data is available as supplementary material. The data are available from the Gene Expression Omnibus with accession number GSE32986. PMID:25965065

  1. Not Just a Sum? Identifying Different Types of Interplay between Constituents in Combined Interventions.

    PubMed

    Van Deun, Katrijn; Thorrez, Lieven; van den Berg, Robert A; Smilde, Age K; Van Mechelen, Iven

    2015-01-01

    Experiments in which the effect of combined manipulations is compared with the effects of their pure constituents have received a great deal of attention. Examples include the study of combination therapies and the comparison of double and single knockout model organisms. Often the effect of the combined manipulation is not a mere addition of the effects of its constituents, with quite different forms of interplay between the constituents being possible. Yet, a well-formalized taxonomy of possible forms of interplay is lacking, let alone a statistical methodology to test for their presence in empirical data. Starting from a taxonomy of a broad range of forms of interplay between constituents of a combined manipulation, we propose a sound statistical hypothesis testing framework to test for the presence of each particular form of interplay. We illustrate the framework with analyses of public gene expression data on the combined treatment of dendritic cells with curdlan and GM-CSF and show that these lead to valuable insights into the mode of action of the constituent treatments and their combination. R code implementing the statistical testing procedure for microarray gene expression data is available as supplementary material. The data are available from the Gene Expression Omnibus with accession number GSE32986.

  2. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
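
The central quantity of the Common Base Method is the efficiency-weighted Cq, log10(E) · Cq. A minimal sketch of an unpaired comparison using this quantity, with invented well-level (E, Cq) data for one target gene in two groups; reference-gene normalization, which the method also supports, is omitted here:

```python
import math
import statistics

def efficiency_weighted_cq(e, cq):
    """Efficiency-weighted Cq: log10(E) * Cq, keeping values in log scale."""
    return math.log10(e) * cq

# Hypothetical well-level data: (amplification efficiency E, threshold cycle Cq).
target_control = [(1.95, 24.1), (1.97, 24.3), (1.96, 24.0)]
target_treated = [(1.94, 22.0), (1.96, 22.2), (1.95, 21.9)]

w_control = [efficiency_weighted_cq(e, c) for e, c in target_control]
w_treated = [efficiency_weighted_cq(e, c) for e, c in target_treated]

# Unpaired comparison: difference of group means, still in log scale.
# A positive delta_w (lower weighted Cq in treated) indicates up-regulation.
delta_w = statistics.mean(w_control) - statistics.mean(w_treated)
fold_change = 10 ** delta_w   # back-transform only at the very end
```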

  3. Dependence of drivers affects risks associated with compound events

    NASA Astrophysics Data System (ADS)

    Zscheischler, Jakob; Seneviratne, Sonia I.

    2017-04-01

    Compound climate extremes are receiving increasing attention because of their disproportionate impacts on humans and ecosystems. Risk assessments, however, generally focus on univariate statistics even when multiple stressors are considered. Concurrent extreme droughts and heatwaves have been observed to cause a suite of extreme impacts on natural and human systems alike. For example, they can substantially affect vegetation health, prompting tree mortality, and thereby facilitating insect outbreaks and fires. In addition, hot droughts have the potential to trigger and intensify fires and can cause severe economic damage. By promoting disease spread, extremely hot and dry conditions also strongly affect human health. We analyse the co-occurrence of dry and hot summers and show that these are strongly correlated for many regions, inducing a much higher frequency of concurrent hot and dry summers than what would be assumed from the independent combination of the univariate statistics. Our results demonstrate how the dependence structure between variables affects the occurrence frequency of multivariate extremes. Assessments based on univariate statistics can thus strongly underestimate risks associated with given extremes, if impacts depend on multiple (dependent) variables. We conclude that a multivariate perspective is necessary in order to appropriately assess changes in climate extremes and their impacts, and to design adaptation strategies.
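
The central point, that dependence between drivers inflates the co-occurrence frequency of extremes relative to the product of univariate probabilities, can be illustrated with a small Monte Carlo sketch (the correlation value and threshold are arbitrary choices, not values from the study):

```python
import random

random.seed(42)
n = 100_000
rho = 0.6    # assumed correlation between temperature and dryness anomalies
thr = 1.28   # roughly the univariate 90th percentile of a standard normal

# Generate correlated standard-normal pairs (temperature, dryness) via a
# Cholesky-style construction: y = rho*x + sqrt(1 - rho^2)*z.
pairs = []
for _ in range(n):
    x = random.gauss(0, 1)
    z = random.gauss(0, 1)
    y = rho * x + (1 - rho ** 2) ** 0.5 * z
    pairs.append((x, y))

p_hot = sum(1 for x, _ in pairs if x > thr) / n
p_dry = sum(1 for _, y in pairs if y > thr) / n
p_joint = sum(1 for x, y in pairs if x > thr and y > thr) / n

p_independent = p_hot * p_dry            # what univariate statistics suggest
amplification = p_joint / p_independent  # >1: dependence raises co-occurrence
```

With rho = 0.6 the joint exceedance is several times more frequent than the independence assumption predicts, which is exactly the underestimation of risk the abstract describes.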

  4. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the February 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Rudley, D.; Garrison, P.; Green, T.

    2010-01-01

    The following report gives the statistical findings of the 2010 Thurgood Marshall School of Law (TMSL) Texas Bar results. This data was pre-existing and was given to the Evaluator by email from the Dean. Then, in-depth statistical analyses were run using the SPSS 17 to address the following questions: 1. What are the statistical descriptors of the…

  5. A statistical package for computing time and frequency domain analysis

    NASA Technical Reports Server (NTRS)

    Brownlow, J.

    1978-01-01

    The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.

  6. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920
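
The statistical procedure described above (draw random parameter sets, iterate the recursion from random initial conditions to equilibrium, record mean quantities) can be sketched in drastically simplified form for a single additive locus. Note that at one symmetric locus, quadratic stabilizing selection gives the heterozygote the highest fitness, so some variation is maintained; the depletion reported above emerges only in the multilocus setting:

```python
import random

random.seed(7)

def equilibrium_variance(s, a, p0, generations=5000, tol=1e-10):
    """Iterate a one-locus, two-allele allele-frequency recursion under
    quadratic stabilizing selection w(g) = 1 - s*g**2, with genotypic values
    (-a, 0, +a) and optimum at 0, starting from frequency p0, until (near)
    equilibrium. A drastic simplification of the multilocus model above."""
    w_aa = w_bb = 1 - s * a ** 2   # homozygotes deviate from the optimum
    w_ab = 1.0                     # heterozygote sits exactly at the optimum
    p = p0
    for _ in range(generations):
        q = 1 - p
        w_bar = p * p * w_aa + 2 * p * q * w_ab + q * q * w_bb
        p_next = (p * p * w_aa + p * q * w_ab) / w_bar
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    return 2 * p * (1 - p) * a ** 2   # additive genetic variance at the locus

# Random parameter sets and random starting frequencies, as in the text.
variances = [equilibrium_variance(s=random.uniform(0.01, 0.1),
                                  a=random.uniform(0.1, 1.0),
                                  p0=random.uniform(0.05, 0.95))
             for _ in range(200)]
mean_var = sum(variances) / len(variances)
```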

  7. Statistical strategies to quantify respiratory sinus arrhythmia: Are commonly used metrics equivalent?

    PubMed Central

    Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.

    2011-01-01

    Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367

  8. Confidence crisis of results in biomechanics research.

    PubMed

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.
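
The claim that underpowered studies report inflated effect magnitudes can be demonstrated with a small simulation: among studies that cross the significance threshold, the average observed effect overshoots the true effect far more when samples are small (all numbers here are illustrative assumptions, not data from the review):

```python
import random
import statistics

random.seed(3)

def simulated_study(n, true_effect=0.4):
    """One two-group 'study': returns (observed effect, significant?) using a
    crude z-test on the difference of group means (both groups have sd = 1)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    d = statistics.mean(b) - statistics.mean(a)
    se = (2 / n) ** 0.5
    return d, abs(d) / se > 1.96

def mean_significant_effect(n, studies=2000):
    """Average observed effect among studies reaching p < 0.05."""
    sig = [d for d, s in (simulated_study(n) for _ in range(studies)) if s]
    return statistics.mean(sig) if sig else float("nan")

# Underpowered studies that reach significance report inflated effects.
small_n = mean_significant_effect(n=10)   # low statistical power
large_n = mean_significant_effect(n=200)  # high statistical power
```

With a true effect of 0.4, the small-sample significant results average well above the truth, while the well-powered ones land near it: the inflation the review warns about.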

  9. Mortality and long-term exposure to ambient air pollution: ongoing analyses based on the American Cancer Society cohort.

    PubMed

    Krewski, Daniel; Burnett, Richard; Jerrett, Michael; Pope, C Arden; Rainham, Daniel; Calle, Eugenia; Thurston, George; Thun, Michael

    This article provides an overview of previous analysis and reanalysis of the American Cancer Society (ACS) cohort, along with an indication of current ongoing analyses of the cohort with additional follow-up information through to 2000. Results of the first analysis conducted by Pope et al. (1995) showed that higher average sulfate levels were associated with increased mortality, particularly from cardiopulmonary disease. A reanalysis of the ACS cohort, undertaken by Krewski et al. (2000), found the original risk estimates for fine-particle and sulfate air pollution to be highly robust against alternative statistical techniques and spatial modeling approaches. A detailed investigation of covariate effects found a significant modifying effect of education with risk of mortality associated with fine particles declining with increasing educational attainment. Pope et al. (2002) subsequently reported results of a study using an additional 10 yr of follow-up of the ACS cohort. This updated analysis included gaseous copollutant and new fine-particle measurements, more comprehensive information on occupational exposures, dietary variables, and the most recent developments in statistical modeling integrating random effects and nonparametric spatial smoothing into the Cox proportional hazards model. Robust associations between ambient fine particulate air pollution and elevated risks of cardiopulmonary and lung cancer mortality were clearly evident, providing the strongest evidence to date that long-term exposure to fine particles is an important health risk. Current ongoing analysis using the extended follow-up information will explore the role of ecologic, economic, and demographic covariates in the particulate air pollution and mortality association.
This analysis will also provide insight into the role of spatial autocorrelation at multiple geographic scales, and whether critical instances in time of exposure to fine particles influence the risk of mortality from cardiopulmonary and lung cancer. Information on the influence of covariates at multiple scales and of critical exposure time windows can assist policymakers in establishing timelines for regulatory interventions that maximize population health benefits.

  10. An exploratory examination of the relationships among emotional intelligence, elementary school science teacher self-efficacy, length of teaching experience, race/ethnicity, gender, and age

    NASA Astrophysics Data System (ADS)

    Okech, Allan P.

    The purpose of the study was to examine the relationships among emotional intelligence, teacher self-efficacy, length of teaching experience, and age in a sample of south Texas public school teachers. Additionally, the study examined differences in emotional intelligence between male teachers and female teachers, and among African American, Hispanic, and White teachers. Participants were 180 elementary science teachers from south Texas public schools. The sample was made up of 14 (7.8%) males and 166 (92.2%) females. Regarding race/ethnicity, the study sample consisted of 31 (17.2%) African Americans (3 males and 28 females), 49 (27.2%) Hispanics (7 males and 42 females), 98 (54.4%) Whites (3 males and 95 females), and 2 (1.1%) "Other" (1 male and 1 female). Participants ranged in age from 23 years to 65 years. Five hypotheses were proposed and tested to address the relationships under investigation. The study employed a mixed methods---correlational and causal-comparative---research design approach. Three instruments, the Multifactor Emotional Intelligence Scale (Mayer, Caruso, & Salovey, 1999), the Science Teaching Efficacy Beliefs Instrument (Riggs & Enochs, 1990), and a demographics questionnaire were utilized to collect the data. An independent-measures t test, the Pearson r, and the one-way MANOVA were used to analyze the data. A significant positive relationship was found between "emotional intelligence" and "teacher self-efficacy." Data analyses, however, failed to support hypothesized relationships between "emotional intelligence" and "length of teaching experience," and between "emotional intelligence" and "age". Additionally, statistical analyses of the data collected for this study supported predicted statistically significant differences in "emotional intelligence" between male and female teachers, and among the three race/ethnicity groupings. 
Based on these findings, recommendations for the application of the construct of "emotional intelligence" in Texas public schools are discussed.
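
The first analysis above rests on the Pearson product-moment correlation. A minimal standard-library implementation, applied to invented paired scores (the numbers are hypothetical, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired scores: emotional intelligence vs. teacher self-efficacy.
ei = [92, 85, 110, 99, 104, 88, 120, 95]
efficacy = [61, 55, 70, 64, 69, 58, 75, 60]
r = pearson_r(ei, efficacy)
```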

  11. Development and validation of the Learning Disabilities Needs Assessment Tool (LDNAT), a HoNOS-based needs assessment tool for use with people with intellectual disability.

    PubMed

    Painter, J; Trevithick, L; Hastings, R P; Ingham, B; Roy, A

    2016-12-01

    In meeting the needs of individuals with intellectual disabilities (ID) who access health services, a brief, holistic assessment of need is useful. This study outlines the development and testing of the Learning Disabilities Needs Assessment Tool (LDNAT), a tool intended for this purpose. An existing mental health (MH) tool was extended by a multidisciplinary group of ID practitioners. Additional scales were drafted to capture needs across six ID treatment domains that the group identified. LDNAT ratings were analysed for the following: item redundancy, relevance, construct validity and internal consistency (n = 1692); test-retest reliability (n = 27); and concurrent validity (n = 160). All LDNAT scales were deemed clinically relevant with little redundancy apparent. Principal component analysis indicated three components (developmental needs, challenging behaviour, MH and well-being). Internal consistency was good (Cronbach alpha 0.80). Individual item test-retest reliability was substantial to near-perfect for 20 scales and slight to fair for three scales. Overall reliability was near perfect (intra-class correlation = 0.91). There were significant associations with five of six condition-specific measures, i.e. the Waisman Activities of Daily Living Scale (general ability/disability), Threshold Assessment Grid (risk), Behaviour Problems Inventory for Individuals with Intellectual Disabilities-Short Form (challenging behaviour), Social Communication Questionnaire (autism) and a bespoke physical health questionnaire. Additionally, the statistically significant correlations between these tools and the LDNAT components made sense clinically. There were no statistically significant correlations with the Psychiatric Assessment Schedules for Adults with Developmental Disabilities (a measure of MH symptoms in people with ID). The LDNAT had clinical utility when rating the needs of people with ID prior to condition-specific assessment(s). 
Analyses of internal and external validity were promising. Further evaluation of its sensitivity to changes in needs is now required. © 2016 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  12. [E-learning and problem based learning integration in cardiology education].

    PubMed

    Gürpinar, Erol; Zayim, Neşe; Başarici, Ibrahim; Gündüz, Filiz; Asar, Mevlüt; Oğuz, Nurettin

    2009-06-01

    The aim of this study was to determine students' satisfaction with an e-learning environment which is developed to support classical problem-based learning (PBL) in medical education and its effect on academic achievement. In this cross-sectional study, students were provided with a web-based learning environment including learning materials related to objectives of the subject of PBL module, which could be used during independent study period. The study group comprised of all of the second year students (164 students) of Akdeniz University, Medical Faculty, during 2007-2008 education period. In order to gather data about students' satisfaction with learning environment, a questionnaire was administered to the students. Comparison of students' academic achievement was based on their performance score in PBL exam. Statistical analyses were performed using unpaired t test and Mann Whitney U test. Findings indicated that 72.6% of the students used the e-learning application. There was no statistically significant difference between mean PBL performance scores of users and non-users of the e-learning application (103.58 vs. 100.88) (t=-0.998, p=0.320). Frequent users of the e-learning application had statistically significantly higher scores than non-frequent users (106.28 vs. 100.59) (t=-2.373, p=0.01). In addition, 72.6% of the students declared they were satisfied with the application. Our study demonstrated that most of the students used the e-learning application and were satisfied with it. In addition, the e-learning application positively affected the academic achievement of the students. This study contributes to the limited literature on instructional technology in PBL and cardiology teaching.

  13. A Non-Destructive Method for Distinguishing Reindeer Antler (Rangifer tarandus) from Red Deer Antler (Cervus elaphus) Using X-Ray Micro-Tomography Coupled with SVM Classifiers

    PubMed Central

    Lefebvre, Alexandre; Rochefort, Gael Y.; Santos, Frédéric; Le Denmat, Dominique; Salmon, Benjamin; Pétillon, Jean-Marc

    2016-01-01

    Over the last decade, biomedical 3D-imaging tools have gained widespread use in the analysis of prehistoric bone artefacts. While initial attempts to characterise the major categories used in osseous industry (i.e. bone, antler, and dentine/ivory) have been successful, the taxonomic determination of prehistoric artefacts remains to be investigated. The distinction between reindeer and red deer antler can be challenging, particularly in cases of anthropic and/or taphonomic modifications. In addition to the range of destructive physicochemical identification methods available (mass spectrometry, isotopic ratio, and DNA analysis), X-ray micro-tomography (micro-CT) provides convincing non-destructive 3D images and analyses. This paper presents the experimental protocol (sample scans, image processing, and statistical analysis) we have developed in order to identify modern and archaeological antler collections (from Isturitz, France). This original method is based on bone microstructure analysis combined with advanced statistical support vector machine (SVM) classifiers. A combination of six microarchitecture biomarkers (bone volume fraction, trabecular number, trabecular separation, trabecular thickness, trabecular bone pattern factor, and structure model index) were screened using micro-CT in order to characterise internal alveolar structure. Overall, reindeer alveoli presented a tighter mesh than red deer alveoli, and statistical analysis allowed us to distinguish archaeological antler by species with an accuracy of 96%, regardless of anatomical location on the antler. In conclusion, micro-CT combined with SVM classifiers proves to be a promising additional non-destructive method for antler identification, suitable for archaeological artefacts whose degree of human modification and cultural heritage or scientific value has previously made it impossible (tools, ornaments, etc.). PMID:26901355

  14. A quantitative study of nanoparticle skin penetration with interactive segmentation.

    PubMed

    Lee, Onseok; Lee, See Hyun; Jeong, Sang Hoon; Kim, Jaeyoung; Ryu, Hwa Jung; Oh, Chilhwan; Son, Sang Wook

    2016-10-01

    In the last decade, the application of nanotechnology techniques has expanded within diverse areas such as pharmacology, medicine, and optical science. Despite such wide-ranging possibilities for implementation into practice, the mechanisms behind nanoparticle skin absorption remain unknown. Moreover, the main mode of investigation has been qualitative analysis. Using interactive segmentation, this study suggests a method of objectively and quantitatively analyzing the mechanisms underlying the skin absorption of nanoparticles. Silica nanoparticles (SNPs) were assessed using transmission electron microscopy and applied to the human skin equivalent model. Captured fluorescence images of this model were used to evaluate degrees of skin penetration. These images underwent interactive segmentation and image processing in addition to statistical quantitative analyses of calculated image parameters including the mean, integrated density, skewness, kurtosis, and area fraction. In images from both groups, the distribution area and intensity of fluorescent silica gradually increased in proportion to time. Statistical significance was reached after 2 days in the negative charge group and after 4 days in the positive charge group, indicating a difference in timing between the two groups. Furthermore, the quantity of silica per unit area showed a dramatic change after 6 days in the negative charge group. Although this quantitative result matches results obtained by qualitative assessment, it is meaningful in that it was established by statistical analysis of quantitative image-processing parameters. The present study suggests that the surface charge of SNPs could play an important role in the percutaneous absorption of NPs. These findings can help achieve a better understanding of the percutaneous transport of NPs. In addition, these results provide important guidance for the design of NPs for biomedical applications.
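
The image parameters listed above (mean, integrated density, skewness, kurtosis, area fraction) can all be computed directly from pixel intensities. A sketch on an invented 100-pixel grayscale "image" with a bright fluorescent minority:

```python
import statistics

def image_stats(pixels, threshold):
    """Descriptive parameters for a grayscale fluorescence image given as a
    flat list of pixel intensities (hypothetical example data)."""
    n = len(pixels)
    mean = statistics.fmean(pixels)
    sd = statistics.pstdev(pixels)
    integrated_density = sum(pixels)   # equivalently mean * area
    skew = sum((p - mean) ** 3 for p in pixels) / (n * sd ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * sd ** 4) - 3  # excess
    area_fraction = sum(1 for p in pixels if p >= threshold) / n
    return {"mean": mean, "integrated_density": integrated_density,
            "skewness": skew, "kurtosis": kurt, "area_fraction": area_fraction}

# Mostly dark background with a few bright (fluorescent) pixels.
pixels = [5] * 90 + [200] * 10
stats = image_stats(pixels, threshold=100)
```

The strong positive skewness of such an image is typical when fluorescent signal occupies only a small area fraction.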

  15. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
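
A binomial test of the kind the authors describe needs only the binomial tail probability. The sketch below uses hypothetical counts (the paper's family-level data are not reproduced here) to test whether one family contributes more medicinal species than the flora-wide proportion would predict.

```python
from math import comb

def upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    medicinal species in a family of n under the flora-wide rate p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical numbers: flora-wide medicinal fraction of 20%; a family
# of 30 species of which 12 are used medicinally.
p_value = upper_tail(12, 30, 0.20)
print(p_value < 0.05)  # over-represented at the 5% level
```

In the study's actual setup, such a test would be run once per family, which is why only a handful of the 115 families end up departing significantly from the null model.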

  16. Validating Future Force Performance Measures (Army Class): Concluding Analyses

    DTIC Science & Technology

    2016-06-01

    Table 3.10. Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores... Table 4.7. Descriptive Statistics for Analysis Criteria... Soldier attrition and performance: Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness

  17. FHWA statistical program : a customer's guide to using highway statistics

    DOT National Transportation Integrated Search

    1995-08-01

    The appropriate level of spatial and temporal data aggregation for highway vehicle emissions analyses is one of several important analytical questions that has received considerable interest following passage of the Clean Air Act Amendments (CAAA) of...

  18. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) are greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment.
With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 in terms of release fractions and that release durations for in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels, with the most important finding being that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses.

ACKNOWLEDGEMENTS: This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and resulting effects on spatial distribution of fission products in the core were instrumental in completion of our work.
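
The "approximate order statistics" treatment the abstract mentions rests on a standard nonparametric result: the probability that the largest of n independent runs bounds the q-quantile of the underlying population is 1 - q^n. The sketch below illustrates that generic bound only; it is not the report's actual method.

```python
def max_covers_quantile(n, q=0.95):
    """P(sample maximum >= q-quantile) for n i.i.d. simulation results:
    one minus the chance that all n results fall below the quantile."""
    return 1 - q ** n

# Classic consequence: 59 independent runs suffice for the sample maximum
# to bound the 95th percentile with at least 95% confidence.
print(max_covers_quantile(59) >= 0.95)
```

This is why treating accident-sequence calculations as population samples lets a modest number of MELCOR runs support distribution-free statements about release fractions.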

  19. Technology Benefit Estimator (T/BEST): User's Manual

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Chamis, Christos C.; Abumeri, Galib

    1994-01-01

    The Technology Benefit Estimator (T/BEST) system is a formal method to assess advanced technologies and quantify the benefit contributions for prioritization. T/BEST may be used to provide guidelines to identify and prioritize high payoff research areas, help manage research and limited resources, show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate credibly the benefits of research. The T/BEST software is specifically designed to estimate benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission and cost analysis modules are used to provide a framework for interfacing with advanced technologies. An open-ended, modular approach is used to allow for modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit estimation accuracy that are dependent on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage. As knowledge of the technology details increases, the accuracy of the benefit analysis increases. Included in T/BEST's framework are correlations developed from a statistical data base that is relied upon if there is insufficient information given in a particular area, e.g., fuel capacity or aircraft landing weight. Statistical predictions are not required if these data are specified in the mission requirements. 
The engine cycle, structural fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification test, mission performance engine cost, direct operating cost, life cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations, and uses a UNIX shell or script to control the operation of interfaced FORTRAN based analyses. T/BEST's interface structure works equally well with non-FORTRAN or mixed software analysis. This interface structure is designed to maintain the integrity of the expert's analyses by interfacing with the expert's existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters, etc.) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles, etc.) are passed via file pointers that point to the expert's analyses output files. In order to make the communication between T/BEST's neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required. This simplicity permits easy access to all input and output variables contained within the neutral file. Both public domain and proprietary analysis codes may be attached with a minimal amount of effort, while maintaining full data and analysis integrity, and security. T/BEST's software framework, status, beginner-to-expert operation, interface architecture, analysis module addition, and key analysis modules are discussed. Representative examples of T/BEST benefit analyses are shown.
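
The two-command PUT/GET interface described above can be pictured as a shared parameter store that analysis modules read from and write to. The class below is a hypothetical stand-in (the actual T/BEST neutral-file format is not documented in this abstract), shown only to make the data flow concrete; the parameter names are illustrative.

```python
class NeutralFile:
    """Hypothetical sketch of a PUT/GET parameter exchange between a
    framework and attached analysis modules; not the real T/BEST schema."""

    def __init__(self):
        self._params = {}

    def put(self, name, value):
        # An analysis module publishes a parameter for downstream modules.
        self._params[name] = value

    def get(self, name):
        # A downstream module retrieves a previously published parameter.
        return self._params[name]

nf = NeutralFile()
nf.put("number_of_blades", 24)            # parameter data passed directly
nf.put("fem_model_path", "/tmp/fem.dat")  # copious data passed as a file pointer
print(nf.get("number_of_blades"))
```

Note how the second PUT stores only a pointer to a bulk output file rather than its contents, mirroring the abstract's distinction between parameter data and copious data.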

  20. Technology Benefit Estimator (T/BEST): User's manual

    NASA Astrophysics Data System (ADS)

    Generazio, Edward R.; Chamis, Christos C.; Abumeri, Galib

    1994-12-01

    The Technology Benefit Estimator (T/BEST) system is a formal method to assess advanced technologies and quantify the benefit contributions for prioritization. T/BEST may be used to provide guidelines to identify and prioritize high payoff research areas, help manage research and limited resources, show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate credibly the benefits of research. The T/BEST software is specifically designed to estimate benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission and cost analysis modules are used to provide a framework for interfacing with advanced technologies. An open-ended, modular approach is used to allow for modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit estimation accuracy that are dependent on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage. As knowledge of the technology details increases, the accuracy of the benefit analysis increases. Included in T/BEST's framework are correlations developed from a statistical data base that is relied upon if there is insufficient information given in a particular area, e.g., fuel capacity or aircraft landing weight. Statistical predictions are not required if these data are specified in the mission requirements. 
The engine cycle, structural fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification test, mission performance engine cost, direct operating cost, life cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations, and uses a UNIX shell or script to control the operation of interfaced FORTRAN based analyses. T/BEST's interface structure works equally well with non-FORTRAN or mixed software analysis. This interface structure is designed to maintain the integrity of the expert's analyses by interfacing with the expert's existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters, etc.) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles, etc.) are passed via file pointers that point to the expert's analyses output files. In order to make the communication between T/BEST's neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required. This simplicity permits easy access to all input and output variables contained within the neutral file. Both public domain and proprietary analysis codes may be attached with a minimal amount of effort, while maintaining full data and analysis integrity, and security.
